You should NOT use CHATGPT?! - Co-Founder thinks so... #openai #ai #chatgpt

Ilya Sutskever, co-founder of OpenAI, has launched a new venture called Safe Superintelligence Inc. (SSI) following his resignation, with the goal of building advanced AI that is safe by design. Sutskever's project treats safety and capabilities in tandem, directly addressing concerns he had with OpenAI's approach. The aim is to develop artificial superintelligence with a focus on safe outcomes, in contrast to OpenAI's evolving business model. The company, led by prominent figures, seeks to push AI forward while keeping safety the top priority and staying insulated from short-term commercial pressure, although questions about its exact safety mechanisms remain unanswered for now.

Ilya Sutskever announces his new project focused on AI safety after his resignation from OpenAI.

SSI emphasizes advancing safety and capabilities in tandem, with safety treated as the first priority.

Sutskever highlights AI safety concerns such as privacy and media manipulation.

AI Expert Commentary about this Video

AI Ethics and Governance Expert

The emergence of SSI as a new player in AI safety reflects urgent global concerns about AI governance. Given the historical tension between safety imperatives and market pressures, particularly at companies like OpenAI, Sutskever's framing of 'nuclear safety' as a guiding principle marks a pivotal shift in how AI safety is defined. If implemented effectively, this approach could set a benchmark, ensuring that safety protocols evolve alongside AI capabilities.

AI Market Analyst Expert

Ilya Sutskever's departure and the founding of SSI signal a critical moment in the AI market, particularly for investors reassessing the value of ethical considerations in AI development. With growing consumer demand for trustworthy technology that prioritizes safety, SSI's differentiated focus could attract investors seeking long-term stability in an increasingly competitive field, despite skepticism about whether its unusual proposition can secure substantial early investment.

Key AI Terms Mentioned in this Video

Artificial Superintelligence (ASI)

ASI is central to Sutskever's ambition for SSI, which aims to achieve superintelligence with a strong focus on safety.

Artificial General Intelligence (AGI)

Sutskever's work aims to progress toward AGI as a foundational step on the path to ASI capabilities.

AI Safety

The focus is on aligning AI goals with human values to mitigate potential risks.

Companies Mentioned in this Video

OpenAI

Sutskever's new venture is shaped by its contrasting approach to safety and capabilities relative to OpenAI's evolving model.

Mentions: 6

Apple

Daniel Gross, a co-founder of SSI, previously contributed to AI efforts at Apple.

Mentions: 1

Microsoft

Daniel Levy, a co-founder of SSI, previously worked in AI research at Microsoft.

Mentions: 1
