OpenAI co-founder's new project is about safe superintelligent AI

21 June 2024

Ilya Sutskever, one of OpenAI's co-founders, has launched a new startup, Safe Superintelligence Inc. (SSI), just a month after leaving OpenAI. Sutskever, OpenAI's long-time chief scientist, founded SSI with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.

Sutskever was instrumental in OpenAI's efforts to improve AI safety ahead of the rise of "superintelligent" AI systems, work he pursued alongside Jan Leike. Both abruptly left OpenAI in May after disagreements with the company's leadership over its approach to AI safety. Leike now leads a team at Anthropic.

Sutskever has focused on the harder problems of AI safety for some time. In a 2023 blog post written with Leike, he predicted that AI with intelligence exceeding humans' could arrive within the decade, and that it would not necessarily be benevolent, prompting research into ways to control and constrain it.

"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs," the company's announcement tweet reads.

"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

SSI has offices in Palo Alto and Tel Aviv, and is currently recruiting technical talent.
