OpenAI has introduced Sora, a new text-to-video Artificial Intelligence (AI) model capable of turning text prompts into videos up to a minute long. The new model competes with Google's Lumiere, another text-to-video AI model.
The new Sora model, which OpenAI says was trained using "text-conditional diffusion models jointly on videos and images of variable durations," produces hyper-realistic videos from text prompts. The company's stated aim is to train AI models that understand and simulate the physical world in motion, with a view to solving real-world problems.
OpenAI also included a number of videos demonstrating the model in both the announcement post and the research paper, and CEO Sam Altman even responded to user requests to demonstrate the model's capabilities.
OpenAI also stated that Sora is not yet available to everyone, as the company is still working with red teamers to assess the model's limitations and ensure its safety.
The company is also consulting with policymakers "to understand their concerns" about the technology. While the model is not available to the general public, OpenAI has said it will make it available to a small group of artists in order to gather feedback on the technology.
"We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals," OpenAI said in its announcement. "We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon."
There is no word yet on when OpenAI will release the new model to the public or how much it will cost end users. Separately, OpenAI CEO Sam Altman is reportedly seeking as much as $7 trillion in investment for his projects.