In a groundbreaking announcement, OpenAI has set a new standard in the AI world with its latest ChatGPT developments. The company has unveiled a series of significant updates that are poised to redefine what we can expect from artificial intelligence platforms.
GPT-4 Turbo: A New Era of AI Models
The star of the show is the new GPT-4 Turbo, a model that surpasses its predecessors in both capability and affordability. With a 128K context window, it can take in over 300 pages of text at once, generating responses with far more context and depth. Pricing drops too: input tokens are 3x cheaper and output tokens 2x cheaper compared to GPT-4.
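As a minimal sketch of calling the new model, the following uses the OpenAI Python SDK (v1.x) with the preview model name "gpt-4-1106-preview" announced for GPT-4 Turbo; the prompt content is a placeholder, and the network call is guarded so it only runs when an API key is configured:

```python
import os

# Request parameters for a chat completion against GPT-4 Turbo.
# "gpt-4-1106-preview" was the preview model name announced at DevDay.
request = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "You are a concise summarizer."},
        # With a 128K context window, very long documents can go here.
        {"role": "user", "content": "Summarize the following report: ..."},
    ],
}

# The actual API call needs a key, so it is guarded for illustration.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```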
Updated Knowledge Cutoff Date
GPT-4 Turbo's knowledge cutoff has been updated to April 2023, keeping AI interactions relevant and timely.
Function Calling Updates
Function Calling has received a major upgrade: the model can now request multiple function calls in a single message (parallel function calling), streamlining interactions and improving the developer experience.
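A sketch of what that looks like with the v1.x SDK: two function definitions are passed via the `tools` parameter, and a single assistant response may carry several `tool_calls`. The function names (`get_weather`, `get_time`) are hypothetical, and the call itself is guarded behind an API-key check:

```python
import json
import os

# Two tool (function) definitions; with parallel function calling,
# the model may return several tool_calls in one response.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function name
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_time",  # hypothetical function name
            "description": "Get the local time for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
]

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Weather and time in Paris?"}],
        tools=tools,
    )
    # A single assistant message may now carry multiple tool calls:
    for call in resp.choices[0].message.tool_calls or []:
        print(call.function.name, json.loads(call.function.arguments))
```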
Improved Instruction Following and JSON Mode
GPT-4 Turbo sets a new benchmark in instruction following and introduces JSON mode for applications that need syntactically valid JSON output. A new ‘seed’ parameter enables largely reproducible outputs, perfect for debugging and creating consistent user experiences. And soon, developers will gain insight into the model’s decision-making with a feature that exposes the log probabilities of output tokens.
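Both features are plain request parameters in the v1.x SDK. The sketch below combines `response_format={"type": "json_object"}` (JSON mode) with a fixed `seed`; note that JSON mode requires the word "JSON" to appear somewhere in the prompt. The call is guarded behind an API-key check:

```python
import os

# JSON mode asks the model for syntactically valid JSON; a fixed seed
# makes outputs largely reproducible across runs.
params = {
    "model": "gpt-4-1106-preview",
    "seed": 42,
    "response_format": {"type": "json_object"},
    "messages": [
        # JSON mode requires the word "JSON" to appear in the messages.
        {"role": "system", "content": "Reply in JSON with keys 'name' and 'year'."},
        {"role": "user", "content": "Who released GPT-4 Turbo, and when?"},
    ],
}

if os.environ.get("OPENAI_API_KEY"):
    import json
    from openai import OpenAI

    resp = OpenAI().chat.completions.create(**params)
    data = json.loads(resp.choices[0].message.content)  # parseable by design
    print(data)
```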
Updated GPT-3.5 Turbo
Not to be overshadowed, GPT-3.5 Turbo has received its own set of upgrades, including a default 16K context window and a significant 38% improvement in tasks like JSON, XML, and YAML generation.
New Modalities: Vision, Image Generation, and Text-to-Speech
OpenAI’s new modalities break the text barrier. GPT-4 Turbo now accepts images as input, while DALL·E 3 lets developers integrate image generation into their applications. The Text-to-Speech API offers six preset voices and two quality tiers to choose from, making AI interactions more human-like.
Assistants API: Build AI-Powered Apps with Ease
OpenAI introduces the Assistants API, enabling the creation of AI-driven experiences that leverage advanced capabilities like Code Interpreter and Retrieval. This API represents a leap towards creating more dynamic and specialized AI tools.
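The flow, sketched with the v1.x SDK's beta namespace: create an assistant with built-in tools, open a thread, add a user message, then start a run. The assistant name and prompt are hypothetical, and everything is guarded behind an API-key check:

```python
import os

# Assistant definition using the built-in Code Interpreter and Retrieval tools.
assistant_spec = {
    "name": "Data Helper",  # hypothetical assistant name
    "instructions": "You analyze uploaded files and answer questions about them.",
    "tools": [{"type": "code_interpreter"}, {"type": "retrieval"}],
    "model": "gpt-4-1106-preview",
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    assistant = client.beta.assistants.create(**assistant_spec)
    thread = client.beta.threads.create()  # conversations live in threads
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content="Plot y = x^2 for x in 0..10."
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id
    )
    print(run.status)  # poll the run until it completes
```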
Build Custom GPTs
For specialized needs, OpenAI presents the opportunity to build custom GPTs through an experimental access program, offering unparalleled customization.
The upcoming GPT Store emerges as a marketplace where developers can share and access a variety of GPT-powered applications, fostering a community of innovation and collaboration.
OpenAI introduces Copyright Shield to defend and cover costs for customers facing copyright infringement claims, a move that reflects their commitment to user protection.
Whisper large-v3, the next version of OpenAI’s open-source automatic speech recognition (ASR) model, promises improved performance across languages. It was also announced that API support for Whisper v3 is coming later this year, showcasing OpenAI’s dedication to multi-modal AI.
The release of Python SDK v1.0 streamlines OpenAI development by introducing automatic retries with exponential backoff for improved error resilience and type annotations for better code reliability. It moves away from global defaults, allowing developers to instantiate clients directly for increased control and customization. Additionally, the Weights & Biases functionality is now neatly packaged, simplifying performance tracking and analysis for developers. These enhancements promise a more robust and efficient coding environment.
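The shift away from global defaults can be sketched as follows: instead of setting module-level state (`openai.api_key = ...`, as in pre-1.0 versions), each client instance carries its own key, timeout, and retry policy. The specific values below are illustrative, and the request is guarded behind an API-key check:

```python
import os

# v1.0 replaces module-level globals with explicit client objects;
# each client carries its own retry and timeout configuration.
client_config = {
    "max_retries": 3,  # automatic retry with exponential backoff on transient errors
    "timeout": 30.0,   # per-request timeout in seconds
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI(**client_config)  # api_key read from OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)
```

Because configuration lives on the client, an application can hold several clients with different keys or retry policies side by side, which was awkward with the old global-state approach.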
As OpenAI continues to push the boundaries of what’s possible with AI, it’s clear that the future is now. Whether you’re a developer looking to build cutting-edge apps or a business aiming to leverage AI for growth, these updates offer the tools to bring your visions to life. Stay tuned as these features roll out, marking a new chapter in the era of artificial intelligence.
The original blog by OpenAI can be found here.