GPT-4o is the latest version of OpenAI's LLM and can accept multimodal inputs.
It can understand voice, text, or image commands with ease.
OpenAI is integrating GPT-4o into ChatGPT and is not restricting the new LLM behind a paywall.
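To give a sense of what multimodal input means for developers, here is a minimal sketch of sending text plus an image to the model through the OpenAI Python SDK (v1+). The model name gpt-4o, the placeholder image URL, and the OPENAI_API_KEY environment variable are illustrative assumptions, not details from the article.

```python
# Minimal sketch: one multimodal request (text + image) to GPT-4o.
# Assumes the official OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",  # the multimodal model discussed in this article
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                # Placeholder URL; any publicly reachable image works here.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```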
How Good Is GPT-4o?
Compared to previous models, GPT-4o is significantly faster.
According to an image shared by Sam, the model is better at all routine tasks, especially coding.
The model scored an overall rating of 1310, higher than older GPT models and LLMs from other companies.
It scored 1369 in the coding category, leading other models by a mile.
Sam went on to demonstrate the new model's capabilities in all departments.
The video clip shows the presenter interacting with ChatGPT powered by GPT-4o using the voice feature.
OpenAI says GPT-4o can respond to audio inputs in as little as 232 milliseconds (320 milliseconds on average), which is very close to a human's response rate in a conversation.
The audio interaction didn't sound robotic or monotonous like most voice assistants.
Sam pointed out the massive difference and said, "Real-time voice and video feels so natural."
The presenter then switched to video mode and wrote an equation that the new model recognized and solved quickly.
In the coding demo, the model interpreted the chunk of code the presenter pointed to without difficulty.
Unlike GPT-4, which was restricted to paid subscribers, GPT-4o is available to all existing ChatGPT users (both free and paid) for free.
Developers can get hands-on experience with the new LLM's API at half the price of GPT-4.
Moreover, OpenAI claims the API is twice as fast and offers five times higher rate limits.
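As a rough illustration of what using that API looks like in practice, the sketch below streams a text response from the model through the official OpenAI Python SDK. The model name gpt-4o and the prompt are assumptions for illustration; the pricing and rate-limit improvements apply at the account level and do not appear in the code itself.

```python
# Minimal sketch: streaming a text completion from GPT-4o via the OpenAI Python SDK (v1+).
# The prompt is illustrative; pricing and rate limits are handled server-side per account.
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize what GPT-4o changes for API users."}],
    stream=True,  # stream tokens as they are generated
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```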
source: www.techworm.net