In our day-to-day conversations with users, we have found that the capabilities of GPT-4 are not well understood. This post gives a brief explanation of what GPT-4 can do, in the hope of avoiding misunderstanding and misuse, clearing up some confusion, and helping everyone get the most value out of the model.
First, a quick summary
In casual conversation, the difference between GPT-3.5 and GPT-4 may not be obvious. The difference emerges once the complexity of a task crosses a certain threshold: GPT-4 is more reliable and more creative than GPT-3.5, and can handle more nuanced instructions.
To understand the differences between the two models, OpenAI tested them on a variety of benchmarks, including simulated exams originally designed for humans. They used the most recent publicly available tests (in the case of the Olympiads and AP free-response questions) or purchased the 2022-2023 editions of practice exams, and did no specific training for these exams. A small number of the exam questions were seen by the model during training, but the results are considered representative; see the technical report for details.
As you can see, if you want GPT-4 to show its full capability, how you phrase your question matters a great deal. All the AI products currently on the market take prompts as input, and people half-jokingly call prompts "spells", because most of these AIs are built on neural-network models that work somewhat like human thinking: given a question, the model first finds the relevant points, then filters and assembles them step by step, and finally delivers the complete answer. A very broad question yields little extra value. For example, asking for a recipe or a description of a medicine produces simple, factual information, and the two models differ little; for this kind of knowledge lookup, GPT-3.5 is already good enough. But if you set up a stack of conditions and ask the AI to carry out a task, GPT-4's stronger reasoning ability makes a clear difference.
OpenAI has kept making GPT-3.5 faster. In our own tests, GPT-3.5 is now very quick and can produce a whole paragraph of text in a few seconds, while GPT-4 takes considerably longer. For simple, well-defined tasks, such as the straightforward knowledge questions mentioned above, we recommend GPT-3.5; otherwise you wait much longer for a result that is not much better. The same goes for routine transactional tasks, such as extracting some content from a pile of text: these are simple and clearly defined, and GPT-3.5 is the better choice.
Overall, GPT-4 currently runs at roughly one-half to one-third the speed of GPT-3.5, so unless the task really demands strong reasoning, we do not recommend GPT-4.
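The recommendation above can be sketched as a simple routing helper. The model names follow OpenAI's API naming; the task categories and the `choose_model` function itself are our own illustration, not an official API:

```python
# Hypothetical helper: route a request to the faster, cheaper model
# unless the task genuinely needs GPT-4's stronger reasoning.
# The task categories here are illustrative, not an official API.

SIMPLE_TASKS = {"knowledge_lookup", "extraction", "summary"}

def choose_model(task_type: str) -> str:
    """Return an OpenAI chat model name for the given task type."""
    if task_type in SIMPLE_TASKS:
        return "gpt-3.5-turbo"  # fast, good enough for simple lookups
    return "gpt-4"              # slower, but stronger at multi-step reasoning

print(choose_model("extraction"))  # gpt-3.5-turbo
print(choose_model("planning"))    # gpt-4
```

A client tool could apply a rule like this automatically, falling back to the faster model whenever the task is a simple lookup or extraction.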
In some tests, people assume that the longer and more detailed the answer, the better; that expectation comes from human-to-human communication. For an AI it is simply not efficient: if a few words can state the point clearly, sitting through a long monologue wastes both time and output, which is not cost-effective. From another angle, this also shows why GPT-4 feels more logical: it assembles language better, stays concise, and gives you exactly what you asked for without a drawn-out explanation.
About Image Capabilities
According to OpenAI, GPT-4 has image input capability. Unfortunately, there is currently no way to use it through any channel, whether the official ChatGPT or the API; we can only keep waiting. Once the API supports it, we will add support as soon as possible.
Questions about GPT-4 in Lunabot
Many Lunabot users go to GPT-4 and ask: "Are you GPT-3.5 or GPT-4?", and then question whether Lunabot really uses GPT-4. We have answered this positively many times: it is genuinely GPT-4, and we have run our own tests to confirm it. As for why the model answers this way, we can only guess that when the GPT-4 model was launched on the API, its self-identification was not updated, or that the name alone is not enough information for the neural network to recognize itself as GPT-4. Judging from this, the model on the official website is probably a slightly newer GPT-4 build.
As a client tool, we can only provide services on top of OpenAI's API. At present there is not much difference from the GPT-4 model on the official ChatGPT site, though the website version does appear slightly newer. Still, some users insist that the official site always answers "GPT-4", and have provided screenshots.
Finally, in line with our original goal of improving the service, we do not want to tamper with GPT's responses, nor add a system-level prompt that makes Lunabot tell you it is GPT-4. What would such cosmetic changes accomplish? If you still have doubts, you can test with your own API Key. But please do not conclude, based only on personal "experience", that we are fooling users; that is not a responsibility we can bear.
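For those who want to verify with their own key, a minimal sketch of such a test might look like the following. The endpoint and payload shape follow OpenAI's public Chat Completions API; the question text is just an example, and keep in mind that a model's self-description is not authoritative:

```python
import json
import os
import urllib.request

# Build a Chat Completions request that asks the model to identify itself.
# Endpoint and payload shape follow OpenAI's public API; the question is
# only an example, and the model's answer about its own name may be wrong.

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(model: str = "gpt-4") -> dict:
    """Return the JSON payload for a self-identification test."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "Are you GPT-3.5 or GPT-4?"}
        ],
    }

payload = build_request()
print(json.dumps(payload, indent=2))

# Only performs the network call when an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

As noted above, the reply to this question mostly reflects what the model was told about itself during training, so the more meaningful comparison is how the two models handle complex, multi-step instructions.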
Our goal is to build Lunabot on top of existing AI capabilities so that, within your existing platforms, it helps you work more efficiently and handle workflows more conveniently and quickly.