Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new venture, Safe Superintelligence Inc. (SSI), following his resignation. SSI is built around the conviction that highly advanced AI can be developed safely. The project treats safety and capabilities as problems to be solved in tandem, directly addressing the concerns Sutskever had with OpenAI's direction. Its stated aim is to develop artificial superintelligence with safe outcomes as the overriding goal, in contrast with OpenAI's evolving commercial model. The company, founded alongside other prominent figures, intends to pursue frontier AI research while insulating safety work from short-term commercial pressure, although questions about its exact safety mechanisms remain unanswered for now.
Ilya Sutskever announces a new project focused on AI safety following his resignation from OpenAI.
SSI advances safety and capabilities in tandem, with safety treated as the top priority.
Sutskever highlights AI safety concerns around privacy and media manipulation.
The emergence of SSI as a new player in AI safety reflects urgent global concerns about AI governance. Given the long-standing tension between safety imperatives and market pressures, seen most visibly at companies like OpenAI, Sutskever's invocation of "nuclear safety" as a guiding principle marks a pivotal shift in how AI safety is framed. If implemented effectively, this approach could set a benchmark by ensuring that safety protocols evolve alongside AI capabilities.
Sutskever's departure and the establishment of SSI signal a critical moment in the AI market, particularly for investors reassessing the value of ethical considerations in AI development. With growing demand for technology that prioritizes safety, SSI's differentiated focus could attract investors seeking long-term stability in an increasingly competitive field, despite skepticism about whether its unusual proposition can secure substantial early investment.
This is central to Sutskever's ambition for SSI: achieving artificial superintelligence (ASI) with safety as the defining constraint.
Sutskever's work treats progress toward AGI as a foundational step in developing ASI capabilities.
The focus here is on establishing AI goals aligned with human values to mitigate potential risks.
Sutskever's contrasting stance on safety versus capabilities, relative to OpenAI's evolving model, shaped the founding of his new venture.
Daniel Gross, a co-founder of SSI, previously led AI efforts at Apple.
Daniel Levy, a co-founder of SSI, previously worked in AI research at OpenAI.