While the idea of AI becoming self-improving has long existed in the realm of science fiction, recent developments suggest it’s becoming increasingly tangible. Researchers are now making real progress toward AI systems that can autonomously improve themselves. These systems aren’t ready for prime time, but they’re closer than you might think.
Artificial intelligence is on the cusp of a major breakthrough. In a provocative essay published this summer, Leopold Aschenbrenner, a former OpenAI researcher, argues that artificial general intelligence (AGI) could arrive as early as 2027. He predicts that AI will consume 20% of U.S. electricity by 2029 and reshape global geopolitics, largely because AI will soon be able to conduct AI research. This recursive self-improvement could spark an “intelligence explosion,” a concept explored by I.J. Good as early as 1965 and later by Nick Bostrom and many others.
AI’s Self-Improvement Cycle
The idea behind self-improving AI is simple: once AI can automate tasks across various industries—from customer service to taxi driving—it will only need to automate one more job to trigger an intelligence explosion: AI research itself.
Currently, AI automates narrow aspects of its own development (e.g., neural architecture search, hyperparameter optimization). But an AI that can autonomously conduct the entire AI research process—designing experiments, testing new methods, and discovering superior AI architectures—could create an accelerating cycle of self-improvement. In theory, this could lead to increasingly powerful AI systems.
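To make the contrast concrete, below is a minimal sketch of the kind of narrow automation that already exists today: a random hyperparameter search, in which a program tries many configurations and keeps the best one. The `train_and_evaluate` function is a hypothetical stand-in for an expensive training run, and its toy scoring rule is invented purely for illustration; nothing here is drawn from any particular production system.

```python
import random

def train_and_evaluate(config):
    """Hypothetical stand-in for a real training run.

    In practice this would train a model with the given hyperparameters
    and return its validation score; here we use a toy objective that
    peaks near learning_rate=1e-3 and dropout=0.1, purely for illustration.
    """
    lr_penalty = abs(config["learning_rate"] - 1e-3) * 100
    dropout_penalty = abs(config["dropout"] - 0.1)
    return 1.0 - lr_penalty - dropout_penalty

# How each hyperparameter is sampled.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),  # log-uniform
    "dropout": lambda: random.uniform(0.0, 0.5),
}

best_config, best_score = None, float("-inf")
for trial in range(50):
    config = {name: sample() for name, sample in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(f"Best configuration after 50 trials: {best_config} (score={best_score:.3f})")
```

The point is that this kind of automation optimizes within a space a human has already defined. The leap Aschenbrenner and others anticipate is an AI that defines the space itself: choosing which questions to ask, which experiments to run, and which ideas to pursue next.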
It might seem implausible that AI could handle the cognitive complexity of AI research. After all, creating new AI models involves creativity and insight. But the role of an AI researcher—reading literature, generating hypotheses, designing experiments, and interpreting results—may be surprisingly amenable to automation. As Aschenbrenner explains, "The job of an AI researcher is fairly straightforward" and "AI research can be automated."
The First AI Researcher
A significant step toward this vision of self-improving AI came this past August when Sakana AI, a Japanese startup, unveiled its “AI Scientist.” The system can autonomously carry out AI research: reviewing existing literature, generating new ideas, designing and running experiments, and even writing full research papers. The AI Scientist has produced multiple papers in areas such as transformers, diffusion models, and neural network learning dynamics. While the research it produces isn’t yet at the cutting edge, the system shows that AI can indeed contribute meaningfully to advancing the field.
Take, for example, the paper titled "DualScale Diffusion: Adaptive Feature Balancing for Low-Dimensional Generative Models." The AI Scientist identified an unsolved problem in diffusion models and proposed a new solution involving dual branches in the denoiser network to balance global and local features. It then designed and executed experiments to validate its hypothesis, and generated a detailed research paper outlining the results.
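As a rough illustration of the idea described above, here is a minimal, hypothetical PyTorch-style sketch of a two-branch denoiser that blends a “global” and a “local” branch with a learned weight. All layer sizes, names, and design choices below are assumptions made for illustration, not the AI Scientist’s actual implementation, and the rest of the diffusion pipeline (timestep conditioning, noise schedule, training loop) is omitted.

```python
import torch
import torch.nn as nn

class DualScaleDenoiser(nn.Module):
    """Illustrative two-branch denoiser for low-dimensional data.

    One branch is meant to capture coarse, global structure and the other
    fine, local detail; their outputs are blended with a learnable weight.
    All sizes and choices here are assumptions for illustration only, and
    timestep conditioning is omitted for brevity.
    """

    def __init__(self, dim: int = 2, hidden: int = 128):
        super().__init__()
        # Wider branch, intended for global structure.
        self.global_branch = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )
        # Narrower branch, intended for local detail.
        self.local_branch = nn.Sequential(
            nn.Linear(dim, hidden // 4), nn.ReLU(), nn.Linear(hidden // 4, dim)
        )
        # Learnable scalar that balances the two branches.
        self.balance = nn.Parameter(torch.tensor(0.0))

    def forward(self, noisy_x: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.balance)  # weight in (0, 1)
        return w * self.global_branch(noisy_x) + (1 - w) * self.local_branch(noisy_x)

# Usage: denoise a batch of 16 two-dimensional points.
model = DualScaleDenoiser()
print(model(torch.randn(16, 2)).shape)  # torch.Size([16, 2])
```

For the purposes of this article, the interesting part is less the architecture itself than the fact that the problem framing, the proposed fix, and the validating experiments were all generated by the system.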
Though the work is still early-stage, the AI Scientist demonstrated the ability to conduct independent research, generate meaningful hypotheses, and write clear, well-structured papers. Some of the papers it produced were even judged, by an automated reviewing process, to be near the acceptance threshold of top conferences like NeurIPS, showing real promise.
What’s Next? The Path to Self-Improvement
Sakana’s AI Scientist is in its early stages, with several limitations. It processes only text, lacks internet access, and runs on a limited budget of computational resources. However, the potential for improvement is clear: with more compute and further algorithmic progress, systems like this could advance rapidly.
As Cong Lu, one of the researchers behind the AI Scientist, put it, "We really believe this is the GPT-1 of AI science." Just as GPT-1 laid the groundwork for GPT-3 and GPT-4, AI research tools like Sakana’s AI Scientist could soon undergo rapid development, leading to breakthroughs we can’t yet fully predict.
The Coming Revolution
Today’s AI technology, like GPT-4, is impressive but still requires human involvement to improve. However, the prospect of AI creating increasingly powerful AI systems could lead to an “intelligence explosion”—where AI begins making itself more capable faster than humans can keep up.
This could dramatically accelerate the pace of innovation across all fields, from life sciences to climate change. But it also poses significant risks, as AI could quickly advance beyond our control. AI systems capable of self-improvement might soon become a central concern for policymakers and AI developers alike.
While we can’t yet say whether AI will deliver truly groundbreaking innovations on the scale of the transformer or the convolutional neural network, early progress suggests it will be able to automate much of today’s incremental research, amplifying the speed and scope of AI advancement. That alone could be the first step toward a world where AI creates and improves itself, radically altering the technological landscape.
As Eliot Cowan, CEO of the AI startup AutoScience, notes, “The vast majority of AI research is incremental in nature. AI can autonomously complete that kind of research today.” The implications are profound, and AI companies like OpenAI and Anthropic are already taking this shift seriously.
The possibility of an AI-driven research explosion is now more than just a theoretical fantasy—it’s fast becoming a reality. The coming years could bring dramatic and unexpected changes.