Who made ChatGPT?
OpenAI is an AI research and deployment company with a mission to ensure that artificial general intelligence benefits all of humanity. It’s famous for creating artificial intelligence models such as the GPT series and DALL·E 2.
On 30 November 2022, OpenAI released ChatGPT.
What is ChatGPT?
GPT stands for Generative Pre-trained Transformer, and ChatGPT is a model that interacts in a conversational way. It’s fine-tuned from a model in the GPT-3.5 series, and it can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
An article by Aivo explained the technology behind ChatGPT and its evolution:
“What is ChatGPT? To understand it, we have to go back to the beginning of the GPT family of models. GPT stands for “Generative Pretrained Transformer”. The original paper was published in 2018, and it marked a significant change, mainly in the subfield of transfer learning: the model could be retrained with relatively little data and achieve SotA (State of the Art) results on multiple benchmarks.
GPT-2 was the next evolution, launched in 2019. This model used an architecture similar to that of its predecessor, but with some updates: it was considerably larger, with roughly 10x the parameters, so retraining the model was already a complex task because of the infrastructure it required. The training data also changed, to “WebText” (data from the Web, for example pages linked from Reddit).
These changes gave the model certain emergent capabilities: first, the ability to generate “coherent” text, and second, the ability to do “few-shot learning”, that is, to learn from a few examples on tasks it never saw in its initial training, without retraining the model. At the application level, GPT-2 could generate very realistic news headlines, and it was also adapted to generate images (a feedback loop between NLP and Computer Vision). However, attempts to adapt it into conversational tools failed.
And this brings us to GPT-3, released in 2020 and improved several times through 2022 (including ChatGPT). Again, the architecture was not changed much, but the number of model parameters increased from 1.5 billion to 175 billion, and the dataset changed as well: a lot more data from the Web was added, such as Common Crawl, Wikipedia, and an updated WebText. Like its predecessor, GPT-3 set new SotA results on multiple benchmarks and greatly improved zero-shot learning capabilities. Like all foundational models (so far), it showed biases (religious, gender, etc.) and demonstrated its virtues, but also its flaws.”
How was it developed?
According to OpenAI:
“We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format.
To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process.”
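To make the reward-modelling step in the quote above more concrete, here is a minimal, illustrative sketch (not OpenAI’s code) of the pairwise ranking loss commonly used to train a reward model from human comparisons. The tiny linear “reward model” and random embeddings are stand-ins for the large transformer and real trainer rankings; only the shape of the loss reflects the described method, and the PPO fine-tuning step is noted only in a comment.

```python
# Illustrative sketch only: a toy reward model trained with a pairwise ranking
# loss, assuming responses have already been encoded into fixed-size embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.score = nn.Linear(embedding_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def ranking_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Push the reward of the trainer-preferred response above the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

reward_model = ToyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Random embeddings standing in for two completions of the same prompt,
# where AI trainers ranked the first completion higher than the second.
chosen = torch.randn(8, 64)
rejected = torch.randn(8, 64)

optimizer.zero_grad()
loss = ranking_loss(reward_model(chosen), reward_model(rejected))
loss.backward()
optimizer.step()

# In the full RLHF recipe, the trained reward model scores new completions,
# and that score becomes the reward used to fine-tune the dialogue model
# with Proximal Policy Optimization (PPO).
```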
According to Stanford University:
“GPT-3 has 175 billion parameters and was trained on 570 gigabytes of text. For comparison, its predecessor, GPT-2, was over 100 times smaller at 1.5 billion parameters.
This increase in scale drastically changes the behavior of the model — GPT-3 is able to perform tasks it was not explicitly trained on, like translating sentences from English to French, with few to no training examples.
This behavior was mostly absent in GPT-2. Furthermore, for some tasks, GPT-3 outperforms models that were explicitly trained to solve those tasks, although in other tasks it falls short.”
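The “few to no training examples” behaviour described above comes down to how the prompt is written: a handful of worked examples are placed directly in the prompt and the model continues the pattern. Below is a minimal sketch of such a few-shot translation prompt; the sentences are invented for illustration and no API call is made.

```python
# A few-shot prompt: the task is demonstrated with examples instead of
# retraining the model. Sent to a GPT-3-style completion model, a prompt
# like this typically yields a French continuation of the last line.
few_shot_prompt = """Translate English to French.

English: Good morning.
French: Bonjour.

English: Where is the train station?
French: Où est la gare ?

English: I would like a cup of coffee.
French:"""

print(few_shot_prompt)
```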
Is ChatGPT replacing Google?
With companies like Microsoft investing $1 billion in OpenAI in 2019 and planning to integrate the technology with their Bing search engine, you have to wonder about the potential such technologies have. However, currently, ChatGPT is:
“A natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a chatbot. The language model can answer questions, assist you with tasks such as composing emails, essays and code.”
ChatGPT doesn’t have access to the Internet; it uses the data it was trained on to predict answers, and even its creators caution against using it as the sole source of information. So, in short, we don’t believe it will replace Google, but we may see search engines integrate its conversational style of replies.
ChatGPT uses and applications
ChatGPT has numerous uses; a Forbes magazine article shared a few:
- Generating responses in a chatbot or virtual assistant, to provide more natural and engaging interactions with users.
- Brainstorming content ideas based on keywords or topics
- Creating personalized communication, such as email responses or product recommendations
- Creating marketing content like blog posts or social media updates
- Translating text from one language to another
- Summarizing long documents by providing the full text and asking ChatGPT to generate a shorter summary (see the sketch after this list)
- Using chatbot-generated answers to create automated customer service tools
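As one concrete example from the list above, here is a hedged sketch of the document-summarization use case using the openai Python library. It is illustrative only: ChatGPT itself had no public API at the time of writing, so the GPT-3.5-series completion model text-davinci-003 is used as a stand-in, and the prompt wording, model name, and parameters are assumptions rather than a recommended setup.

```python
# Illustrative summarization sketch; requires the openai package and an API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: replace with your own key

long_document = "..."  # paste the full text you want summarized here

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-series stand-in for ChatGPT
    prompt=f"Summarize the following document in three sentences:\n\n{long_document}",
    max_tokens=200,
    temperature=0.3,  # a lower temperature keeps the summary closer to the source
)

print(response["choices"][0]["text"].strip())
```

The same pattern covers several of the other uses listed, such as drafting emails or brainstorming ideas, by changing only the prompt.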
ChatGPT limitations
According to OpenAI, ChatGPT’s limitations are:
- ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
- ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
- The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
- Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
- While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.
Another limitation is that ChatGPT is unaware of events that happened after 2021, because the data it was trained on doesn’t go beyond 2021.
ChatGPT privacy
An article by BeInCrypto raised the matter of user privacy in ChatGPT:
“We recommend that all users read through OpenAI’s privacy policy and terms of use before using ChatGPT. OpenAI may review all conversations you have with the AI chatbot. The company says these reviews are important to ensure safety and compliance with relevant laws and regulations. Furthermore, the conversations also help the company improve its systems. ChatGPT is free at the point of use. Remember the saying, “if something is free, then you’re the product”? Well, that might well be the case here, especially as some estimates expect the program to cost OpenAI up to $100,000 per day to run, or $3 million a month.
With that in mind, you shouldn’t share any sensitive information with the bot, as it may be visible to the company’s AI trainers. You may choose to delete your account and all associated data at any point by following the steps outlined here. However, you cannot view your conversation history, nor can you delete specific prompts.”
Wrap-up
ChatGPT is still in its testing and development phase, during which its 1 million users are giving feedback that helps its developers enhance its features and fix its shortcomings.
There will always be an ethical debate around these kinds of AI-driven technologies and the threat that they could replace humans. However, no matter how advanced the technology becomes, it is a tool that humans can use and integrate into their workflows, and we believe that the creativity and authenticity of humans are irreplaceable!