Ilya Sutskever’s AI Startup, Safe Superintelligence, Raises $1 Billion

Safe Superintelligence (SSI), the AI startup founded by former OpenAI chief scientist Ilya Sutskever, announced that it has raised $1 billion from NFDG, a16z, Sequoia, DST Global, and SV Angel.

SSI, with a current team of 10 employees, plans to use the funds to acquire computing power and hire top talent. The company aims to build a small, highly trusted team of researchers and engineers, with operations in both Palo Alto, California, and Tel Aviv, Israel, according to a Reuters report.

While the company declined to disclose its valuation, sources close to the matter put it at $5 billion. The funding shows that some investors are still willing to make significant bets on exceptional talent focused on foundational AI research. That willingness persists despite a general decline in interest in funding such companies, which can remain unprofitable for extended periods, a trend that has led several startup founders to leave for tech giants, the report added.

“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” SSI co-founder Daniel Gross said in an interview.

SSI plans to partner with cloud providers and chip companies to meet its computing power needs, though it has not yet decided which firms it will collaborate with. AI startups often rely on companies like Microsoft and Nvidia to support their infrastructure requirements.

Sutskever, an early proponent of the scaling hypothesis—which suggests that AI models improve with increased computing power—played a key role in sparking a surge of AI investments in chips, data centers, and energy. This foundation has enabled advances in generative AI, such as ChatGPT.

While Sutskever said he will approach scaling differently from his former employer, he did not provide further details.

“Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?” he said.

“Some people can work really long hours and they’ll just go down the same path faster. It’s not so much our style. But if you do something different, then it becomes possible for you to do something special.”

Sutskever founded Safe Superintelligence in June. The company, headquartered in Palo Alto with offices in Tel Aviv, is led by Sutskever, entrepreneur and investor Daniel Gross, and former OpenAI employee Daniel Levy. Gross previously co-founded the AI startup Cue, which Apple acquired in 2013 for a reported $40–60 million.

SSI describes itself as the world’s first lab dedicated solely to developing safe superintelligence, which it calls its one and only mission.

“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team,” said Sutskever.

The company emphasised that safety and capabilities will be addressed in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. SSI aims to advance capabilities rapidly while ensuring that safety remains paramount.

Sutskever left OpenAI in May and was succeeded as chief scientist by Jakub Pachocki. Last year, reports surfaced that Sutskever was concerned about AGI safety and the rapid pace at which OpenAI was advancing, leading to tensions with OpenAI chief Sam Altman.

On November 17, 2023, Sutskever and other board members fired Altman. By November 21, 2023, the decision had been reversed and Altman was reinstated as CEO. Sutskever publicly expressed regret for his role in the ouster, stating that he never intended to harm OpenAI and deeply regretted his participation in the board’s actions.





