The big news in AI today is Microsoft’s confirmation that GPT-4 will be released next week.
Andreas Braun, Chief Technology Officer at Microsoft Germany, discussed GPT-4 at an event called AI in Focus – Digital Kickoff, confirming that the new release will arrive the week of March 13.
Developed by OpenAI, GPT-4 will advance the technology behind ChatGPT, which currently runs on GPT-3.5. While OpenAI has not confirmed any details about GPT-4, Braun noted that version 4 will be a multimodal model – an upgrade over the text-only models that currently power ChatGPT and the Bing chat integration.
GPT-4: Defining Multimodal Models
In the GPT context, a multimodal model is one that can process and generate text while also accepting other input modalities, such as images, audio, or video. Because it can draw on information from multiple sources at once, it can produce more comprehensive and contextually relevant text.
For example, a multimodal GPT model could be trained to caption images by analyzing both the visual content of an image and any text associated with it. Drawing on both sources lets the model produce captions that are more accurate and descriptive than text-only analysis would allow.
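To make the fusion idea concrete, here is a minimal, hypothetical PyTorch sketch of one common approach to multimodal captioning. OpenAI has not published GPT-4's architecture, so this does not reflect it; the model, names, and dimensions are illustrative only. A small vision encoder turns the image into patch embeddings, which are concatenated with the caption's token embeddings so a single transformer attends over both modalities before predicting the next caption tokens.

import torch
import torch.nn as nn

class ToyMultimodalCaptioner(nn.Module):
    """Illustrative sketch only: fuses image patches and text tokens in one transformer."""

    def __init__(self, vocab_size: int = 1000, d_model: int = 128):
        super().__init__()
        # Vision encoder: a single conv layer that cuts the image into
        # 16x16 patches and projects each patch to a d_model vector.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),
            nn.Flatten(2),  # -> (batch, d_model, n_patches)
        )
        self.token_embedding = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # Image -> patch embeddings, shaped (batch, n_patches, d_model).
        img_emb = self.vision_encoder(image).transpose(1, 2)
        txt_emb = self.token_embedding(tokens)
        # Fuse modalities by concatenating along the sequence dimension,
        # so attention mixes visual and textual information freely.
        fused = torch.cat([img_emb, txt_emb], dim=1)
        hidden = self.transformer(fused)
        # Predict next-token logits for the text positions only.
        return self.lm_head(hidden[:, img_emb.size(1):])

model = ToyMultimodalCaptioner()
image = torch.randn(1, 3, 64, 64)        # one 64x64 RGB image
tokens = torch.randint(0, 1000, (1, 8))  # eight caption tokens so far
logits = model(image, tokens)
print(logits.shape)                      # torch.Size([1, 8, 1000])

A production captioner would add causal masking so each text position sees only earlier tokens, and would use a pretrained vision backbone rather than a single conv layer; both are omitted here to keep the fusion step easy to see.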
Multimodal GPT models have the potential to improve the accuracy and contextual relevance of natural language generation tasks, making them more useful in a variety of applications such as chatbots, content creation, and automated customer service.
Altman On GPT-4: Don’t Believe The Hype?
OpenAI CEO Sam Altman spoke about the new iteration of GPT earlier this year, telling StrictlyVC: “The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from. People are begging to be disappointed and they will be.”
We’ll find out next week.
Follow The Dept for more on GPT-4 and all things AI.