According to Sam Altman, CEO of OpenAI, humanity has officially crossed the threshold into a new and irreversible era: that of artificial superintelligence.
“We have surpassed the event horizon; the takeoff has begun,”
states Altman, emphasizing that we have now entered a phase in which artificial intelligence not only evolves rapidly, but does so autonomously and at an accelerating pace.
Despite the absence of visible signals, Altman warns that a profound transformation is already underway. Behind the scenes of the major tech companies, systems capable of surpassing human intellect in increasingly vast areas are emerging.
ChatGPT: more powerful than any human being? The opinion of Sam Altman of OpenAI
Altman does not hesitate to declare that “in a certain sense, ChatGPT is already more powerful than any human being who has ever lived.”
With hundreds of millions of users relying on this tool every day for increasingly complex tasks, artificial intelligence is already exerting a massive influence on society.
And this raises a crucial concern: even small defects in these systems can cause large-scale damage, amplified by their widespread adoption.
Altman predicts that by next year we will see the arrival of agents capable of performing true cognitive jobs, revolutionizing software development and other intellectually intensive sectors.
In 2026, according to Altman, artificial intelligence will no longer be limited to reworking existing information, but will be capable of generating new discoveries, paving the way for an unprecedented form of digital creativity.
By 2027, we might witness the introduction of robots capable of operating in the physical world, a step that would mark the definitive entry of AI into our daily lives.
Every forecast by Altman seems to surpass the previous one, charting a trajectory that points straight towards superintelligence: systems with intellectual capabilities superior to those of humans in almost every field.
One of the most unsettling aspects of the current development of AI is what Altman describes as a “larval version of recursive self-improvement.”
In practice, artificial intelligence is already helping researchers build future versions of itself, exponentially accelerating progress.
“If we can do a decade of research in a year, or in a month, then the rate of progress will obviously be very different,”
explains Altman.
This phenomenon is further amplified thanks to feedback loops. Technological development generates economic value, which in turn finances more powerful infrastructures, which produce even more advanced systems.
A transformed society, but not an unrecognizable one
Looking ahead, Altman envisions a future where the pace of discoveries will be so rapid as to be almost incomprehensible:
“Maybe we will go from solving high-energy physics one year to starting space colonization the following year.”
Despite the revolutionary scope of these changes, Altman believes that many aspects of human life will remain familiar. People will continue to fall in love, create art, and enjoy simple pleasures.
However, beneath this surface, society will undergo profound upheavals. Entire professional categories could disappear, perhaps more quickly than new jobs can be created or workers can be retrained.
The hope, according to Altman, is that the wealth generated by these advances will allow for the exploration of previously unthinkable social policies. To help imagine this future, Altman proposes a thought experiment: a farmer from a thousand years ago would consider our modern professions “fake jobs,” convinced that we spend our time playing because we already have everything we need.
Our descendants, Altman suggests, might look at our current careers with the same wonder.
Among all the issues raised, there is one that keeps AI safety experts awake at night: the so-called alignment problem. How can we ensure that superintelligent systems act in line with human values and intentions?
Altman emphasizes the need to find a way to ensure that AI “learns and acts towards what we collectively want in the long term.” A task that is anything but simple, especially in a globalized world with often conflicting values.
Unlike social media algorithms, which maximize engagement by exploiting human psychological weaknesses, superintelligence will need to be designed to serve the collective good.
But what exactly “collective good” means is a question that remains unanswered.
“The sooner the world can start a conversation about what these broad limits are and how we define collective alignment, the better,”
warns Altman.
A brain for the world
Altman describes the OpenAI project as the construction of “a brain for the world.” This is not a metaphor: these are cognitive systems intended to integrate into every aspect of human civilization, surpassing human capabilities across all sectors.
According to Altman, we are about to enter an era in which intelligence will be too cheap to meter, becoming as ubiquitous and accessible as electricity.
And for those who dismiss these statements as science fiction, Altman points out that only a few years ago the current capabilities of AI seemed just as unlikely.
“If we had told you in 2020 that we would be where we are today, it probably would have seemed crazier than our current predictions for 2030,” he states.
As the artificial intelligence industry continues its race, Altman concludes with a hope that sounds more like a prayer:
“We can scale smoothly, exponentially, and without incidents through superintelligence.”
His vision is not a distant forecast, but an ongoing reality. The race towards superintelligence is not something that is about to start: it has already begun. And humanity must prepare to coexist with its consequences.