Their “manifesto”:
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.
If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
There are very few people in the world who understand LLMs at as deep a technical level as Ilya.
I honestly don’t think there is much else in the world he is interested in doing other than working on aligning powerful AI.
Whether his almost anti-commercial style ends up accomplishing much, I don’t know, but his intentions are literal and clear.
What do you mean by anti-commercial style? I am not from North America, but this seems like pretty typical PR copytext for local tech companies. Lots of pomp, banality, bombast and vague assertions of caring about the world. It almost reads like satire at this point, like they’re trying to take the piss.
If his intentions are literal and clear, what does he mean by “superintelligence” (please be specific) and in what way is it safe?
This is the guy who turned against Sam for being too focused on shipping product. I don’t think he plans on delivering much product at all. The reason to invest isn’t to gain profit but to avoid losing to an apocalyptic event, which you may or may not personally believe in; many Silicon Valley types do.
A safe AI would be one that does not spell the end of humanity or the planet. Ilya is famously obsessed with creating what’s basically a benevolent AI god-mommy and deeply afraid of an uncontrollable, malicious Skynet.
I don’t consider tech company boardroom drama to be an indicator of anything (in and of itself). This is not some complex dilemma around morality and “doing the right thing”.
Is my take on their PR copytext unreasonable? Is my interpretation purely a matter of subjectivity?
Why should I buy into this “AI god-mommy” and “Skynet” stuff? The guy can’t even provide a definition of “superintelligence”. Seems very suspicious for a “top mind in AI” (paraphrasing your description).
Don’t get me wrong, I am not saying he acts like a movie antagonist IRL, but that doesn’t mean we have any reason to trust his motives or ignore the long history of similar proclamations.
No, I applaud a healthy dose of skepticism.
I am anything but in favor of idolizing Silicon Valley gurus and tech leaders, but from Sutskever I have seen enough to know he is one of the few actually worth paying attention to.
Artificial superintelligence, or ASI, is the step beyond AGI (artificial general intelligence).
The latter is equal or superior in capability to a real human being in almost all fields.
Artificial superintelligence was defined (long before OpenAI was a thing) as transcending human intelligence in every conceivable way, at which point it becomes a fully independent entity that can no longer be controlled or shut down.