OpenAI's criteria for launching new features include meeting its high safety and reliability standards, with particular focus on improving the model's ability to detect and refuse certain content. Exact release timelines depend on meeting these criteria, which the company says ensures a responsible and secure user experience.
At the event, OpenAI showcased a new model called GPT-4o, which offers greater responsiveness to voice prompts and improved vision capabilities. GPT-4o can reason across voice, text, and vision, harmonize with itself in song, and offer real-time translations. OpenAI also demonstrated GPT-4o's ability to improve visual accessibility through the Be My Eyes platform and its potential as an accessibility aid for people with low vision or sight loss.
OpenAI has delayed the launch of the new voice mode for ChatGPT due to technical issues. The company says it needs more time to improve the model's ability to detect and refuse certain content, enhance the user experience, and prepare its infrastructure to scale to millions of users while maintaining real-time responses.