ChatGPT – A Look at the Potential of AI Technologies
Unless you’ve been living under a rock, you’ve probably heard about ChatGPT – the latest creation of OpenAI, a San Francisco-based artificial intelligence laboratory. With striking naturalness, it’s capable of answering, explaining, writing, coding, or even joking about almost anything you might ask it.
I’ll start this discussion with a disclaimer: This article is written by a lowly human, and ChatGPT was not used to research or rephrase any part of it. It’s the truth.
But ChatGPT could do it.
And that’s precisely why it’s generated so much enthusiasm on social media, as the world’s population begins to contemplate what such technology means for our future.
What is ChatGPT?
According to its creator OpenAI, ChatGPT is a large language model that has been trained to interact with users conversationally. It can answer follow-up questions, admit when it has made a mistake, challenge incorrect premises, and even reject inappropriate requests.
The model was trained using Reinforcement Learning from Human Feedback (RLHF). Human AI trainers ranked and critiqued the model’s responses during training, and those judgments were used to build a reward model – a learned signal that steers ChatGPT toward the kinds of answers people prefer, more frequently and more fully.
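The core of that reward modeling can be illustrated with a toy sketch (this is an illustration of the general pairwise-preference idea, not OpenAI’s actual code): given two responses where a trainer preferred one over the other, the reward model is fit so that the preferred response scores higher.

```python
# Toy illustration of RLHF-style reward modeling: trainers rank pairs of
# responses, and the reward model is penalized whenever the response the
# trainer preferred does not outscore the rejected one.
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Pairwise loss: small when the preferred response outscores the
    rejected one, large when the model gets the ranking backwards."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A ranking the reward model already respects yields a small loss...
good = preference_loss(score_preferred=2.0, score_rejected=-1.0)
# ...while a backwards ranking yields a large one, pushing scores apart.
bad = preference_loss(score_preferred=-1.0, score_rejected=2.0)
assert good < bad
```

Minimizing this loss over many ranked pairs is what lets the reward model encode the trainers’ preferences.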
It has its limitations, of course. One is accuracy. Everyone is busy posting the amazing answers they’ve received to the strangest questions, but to be clear, ChatGPT doesn’t nail it 100% of the time. At its core, it’s predicting the words that come next in a sentence, not scouring the web or fact-checking itself. So while an answer may sound correct, it’s still best to get a second opinion.
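That “predicting the words that come next” idea can be shown in miniature (a deliberately tiny sketch – real language models use billions of parameters, not counted word pairs, but the principle is the same): the output is whatever is statistically likely given the text so far, with no fact-checking step.

```python
# A toy next-word predictor: count which word follows which in a small
# corpus, then always pick the most frequent continuation. Fluent-sounding
# output falls out of statistics alone -- nothing here verifies truth.
from collections import Counter

corpus = "the sky is blue the sky is clear the sea is blue".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == word}
    return max(candidates, key=candidates.get)

print(predict_next("is"))  # -> "blue": frequent and plausible, not verified
```

The prediction is “blue” simply because that pairing is most common in the data – exactly the kind of plausible-but-unverified output that makes a second opinion worthwhile.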
It also has certain tendencies due to its training. It is sensitive to how questions are phrased: ask a question one way and it may claim not to know the answer; rephrase it and it will often answer correctly. It tends to give lengthy answers, at least partly because its human trainers preferred longer, more complete ones. And while ideally it would ask clarifying questions when a request is ambiguous, it usually just guesses what the user intended.
There are also some things it won’t do – it refuses inappropriate requests. For example, it won’t answer requests for racially or sexually biased content, nor will it explain how to accomplish a dangerous activity, such as building a bomb. Of course, bad actors are everywhere, and some have found creative ways of posing questions that work around ChatGPT’s protective algorithms. Yet OpenAI continues to work to close these loopholes and prevent misuse of its technology.
What are some potential uses of ChatGPT, and their pitfalls?
AI-assisted software engineering. A gallery photo at the top of one of OpenAI’s articles showcases a conversation with ChatGPT about a block of code that the user says “is not working as I expect.” After asking for more context, the bot gives a thorough treatment of the problem along with its suggested solution. This has the potential to be of great assistance to developers, helping to speed up the debugging process. And, of course, there’s no small concern that one day such AI could even replace developers altogether in some roles.
Writing. Professors are sounding the death knell for college essays and homework assignments. With a simple request, ChatGPT can generate an article on almost any topic, in any style you dictate, and at any length. And it does a pretty good job of it.
Professional writers are sweating a little as well, wondering how many clients may turn to AI technology rather than paying a human. However, here is where ChatGPT’s limitations become more apparent. Having been trained on data that extends only through 2021, it’s not up to date on current events. It’s also biased – in a good way. Designed to be helpful, truthful, and harmless, it tends to lean toward the positive in its writing. While that’s generally a good thing, an article intended to take a neutral stand on a subject could turn out biased toward what it perceives to be the upbeat side of the issue.
Using an AI application to produce written content also comes with ethical and legal concerns. Autogenerated content created to manipulate search rankings violates Google’s guidelines. As a result, there’s discussion at OpenAI and in the community at large of introducing “watermarking” – a subtle statistical signature that would let the bot’s output be identified, thereby curbing misuse.
Where do we go from here?
It’s an exciting time as we see new and amazing things accomplished with the helping hand of technology. It has already enabled millions of people to accomplish their work more efficiently, handle repetitive tasks automatically, and even cut through the noise of multi-channel communication to reveal the most important information.
There are always concerns over the misuse of new technologies and the effect that these will have on the future job market. It will be up to business and government leaders to ensure the proper safeguards are in place so that society can fully benefit from these innovations for years to come.
By Chandra Subramanian, CogentNext Technologies