OpenAI Unveils GPT-4 Turbo With Vision Capabilities in API and ChatGPT


OpenAI announced a major improvement to its latest artificial intelligence (AI) model, GPT-4 Turbo, on Tuesday. The AI model now comes with computer vision capabilities, allowing it to process and analyse images and other visual inputs and answer questions about them. The company also highlighted several AI tools powered by GPT-4 Turbo with Vision, including the AI coding assistant Devin and Healthify’s Snap feature. Last week, the AI firm introduced a feature that lets users edit DALL-E 3-generated images within ChatGPT.

The announcement was made by the official OpenAI Developers account, which said in an X (formerly known as Twitter) post, “GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling.” Later, OpenAI’s main X account confirmed that the feature is available in the API and is being rolled out in ChatGPT.

GPT-4 Turbo with Vision is essentially the GPT-4 foundation model with the higher token limits introduced with the Turbo model, now paired with improved computer vision to analyse visual inputs. The vision capabilities can be used in a variety of ways. An end user, for instance, can upload an image of the Taj Mahal to ChatGPT and ask what material the building is made of. Developers can take this a step further and fine-tune the capability in their own tools for specific purposes.
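For developers, a vision request to the Chat Completions API combines text and image parts in a single user message, and, per the announcement, can now also request JSON mode. The sketch below builds such a request payload without calling the API; the model name (`gpt-4-turbo`) and the image URL are illustrative assumptions.

```python
def build_vision_request(image_url: str, question: str) -> dict:
    """Build a hypothetical Chat Completions payload mixing text and an image."""
    return {
        "model": "gpt-4-turbo",  # assumed name of the vision-capable Turbo model
        # JSON mode, which the announcement says now works with vision requests;
        # JSON mode expects the prompt itself to mention JSON.
        "response_format": {"type": "json_object"},
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Example mirroring the Taj Mahal use case described above.
payload = build_vision_request(
    "https://example.com/taj-mahal.jpg",  # placeholder URL
    "What material is this building made of? Reply in JSON.",
)
print(payload["model"])  # → gpt-4-turbo
```

In practice this dictionary would be sent via an official OpenAI client library; constructing it separately, as here, makes the message shape easy to inspect and test.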

OpenAI highlighted some of these use cases in the post. Cognition AI’s Devin, an AI-powered coding assistant, uses GPT-4 Turbo with Vision to visually interpret complex coding tasks and its sandbox environment while writing programmes.

Similarly, the Indian calorie-tracking and nutrition-feedback platform Healthify has a feature called Snap, where users can take a picture of a food item or dish and the platform estimates the calories in it. With GPT-4 Turbo with Vision’s capabilities, it now also recommends how the user can burn the extra calories or reduce the calories in the meal.

Notably, this AI model has a context window of 128,000 tokens, and its training data runs up to December 2023.




