
Former chief scientist of OpenAI starts a new AI company: All you need to know

Ilya Sutskever, one of the prominent names in the development of ChatGPT and co-founder of OpenAI, has embarked on a new venture. He launched Safe Superintelligence Inc., a company dedicated to the secure and responsible advancement of superintelligent AI systems.

Written By: Saumya Nigam @snigam04 New Delhi Updated on: June 20, 2024 19:25 IST
Image Source: REUTERS | Ilya Sutskever

Ilya Sutskever, one of the visionaries behind OpenAI, has reportedly stepped down from the company, where he served as chief scientist. He is setting out on a new journey and has founded Safe Superintelligence Inc., a company that will focus on the safe development of superintelligent AI systems. The announcement surfaced after Sutskever's departure from OpenAI, where he was one of the key players in developing advanced AI models like ChatGPT.

A commitment to AI safety

Safe Superintelligence Inc. aims to pioneer the safe development of AI systems that will surpass human intelligence, often referred to as superintelligence. Sutskever and his co-founders, Daniel Gross and Daniel Levy, have emphasized their commitment to safety and security in AI, and have explicitly stated that their new enterprise will be shielded from the typical commercial pressures and management distractions that can sideline such priorities.

What is the strategy?

Headquartered in Palo Alto, California, and Tel Aviv, Israel, Safe Superintelligence Inc. will leverage these tech hubs to recruit top-tier technical talent. Sutskever, along with Gross and Levy, has noted that the choice of these locations reflects their deep connections there and gives them a strategic advantage in accessing the best minds in AI research and development.

A shift towards safety

Sutskever's departure from OpenAI marked a significant shift in his career. The decision followed a tumultuous period at the company, during which he had been part of a controversial attempt to remove CEO Sam Altman.

The move, which Sutskever later said he regretted, highlighted internal conflicts over the prioritization of AI safety versus business opportunities. His exit, along with the subsequent resignation of Jan Leike, who co-led the safety team, further signalled growing concerns about the direction OpenAI was taking.

Safety from commercial pressures

Sutskever and his co-founders made it clear that Safe Superintelligence Inc. will not be swayed by the need for immediate product cycles or profit motives. Their goal is to ensure that the development of superintelligent AI adheres strictly to safety and ethical guidelines, free from the constraints that often accompany traditional business models.

As Sutskever embarks on this new venture, the AI community watches with keen interest. His departure from OpenAI and the founding of Safe Superintelligence Inc. highlight ongoing debates within the field about balancing rapid AI advancement with the imperative of ensuring these technologies develop in ways that are safe and beneficial for humanity.

ALSO READ: Lenovo Legion Go launching on June 27 in India: Details

ALSO READ: Google expands Gemini chatbot service in India and neighboring countries

Read all the Breaking News Live on indiatvnews.com and Get Latest English News & Updates from Technology
