
The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

Redakcja Pixelift

Photo: MIT Tech Review

OpenAI is working on an AI system capable of conducting scientific research entirely autonomously, from formulating hypotheses, through designing experiments, to analyzing results. This represents another step toward automating scientists' work, though questions remain about the reliability and ethics of such systems.

In parallel, MIT Technology Review highlights a serious gap in clinical research on psychedelic substances. Many trials are not effectively blinded: because the drugs' effects are so conspicuous, both participants and researchers can often tell who received a placebo and who received the active substance. This can significantly distort results through the placebo effect and observer bias. The problem is particularly significant because psychedelics are returning to the mainstream of medical research as a potential treatment for depression and anxiety disorders. The lack of rigorous methodology weakens the credibility of findings and makes it difficult for regulators to assess the actual effectiveness of these therapies. Combining the automation of research with improved methodological standards is becoming crucial for the future of science.

OpenAI is no longer hiding its ambitions. The company that transformed the landscape of artificial intelligence by introducing ChatGPT is now setting an even more ambitious goal: to build a fully automated researcher — a system based on agents capable of independently solving complex scientific problems. This is not another iteration of a chatbot. This is an attempt to push the boundaries of what a machine can achieve without direct human intervention.

News of this project emerged at a time when the entire AI sector is obsessively focused on increasing computational power and the number of model parameters. OpenAI, instead, asks itself a completely different question: what if AI could not only answer questions, but also ask them? What if it could plan experiments, analyze results, formulate hypotheses, and iterate in a loop without human support? The answer to these questions could change not only technology, but also the very nature of scientific work.

At the same time, as AI technology takes giant steps forward, another field of science — research on psychedelic substances — encounters an unexpected obstacle. It turns out that the methodology of these studies has a fundamental gap that no one noticed for years. This story shows that even in the era of AI, traditional scientific processes can be blind to their own limitations.

Autonomous agents as the future of scientific research

The concept of an autonomous research agent is not new; scientists have been discussing it for years. What OpenAI is doing now differs in scale and in the concrete resources committed. The goal is a system that can independently search the scientific literature, identify gaps in knowledge, propose experiments, and then, in the case of computational research, actually conduct them.

Such a system would need to possess several key abilities. First, a deep understanding of the existing state of knowledge in a given field — not a superficial summary, but a real ability to synthesize thousands of articles and identify genuine gaps. Second, the ability to formulate testable hypotheses that are both innovative and scientific. Third, the ability to plan experiments that can verify these hypotheses. And finally, the ability to learn from results and iterate.
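To make that loop concrete, here is a minimal sketch in Python of the cycle described above: synthesize the literature, formulate a hypothesis, run an experiment, learn, and iterate. Nothing here reflects OpenAI's actual design; every class, function, and name is hypothetical, and each stubbed step would in practice be an LLM call, a literature-index query, or a simulation run.

```python
# Hypothetical sketch of the hypothesize-experiment-iterate loop; not
# OpenAI's design. Each stub stands in for an LLM call or a simulation.
from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    statement: str                                # a testable claim
    evidence: list = field(default_factory=list)  # results gathered so far


class ResearchAgent:
    def __init__(self, corpus: list[str]):
        self.corpus = corpus  # stand-in for a synthesized literature index

    def identify_gap(self) -> str:
        # Real system: synthesize thousands of papers and find what is missing.
        return "an open question the corpus does not answer"

    def formulate(self, gap: str) -> Hypothesis:
        # Real system: generate a claim that is both novel and testable.
        return Hypothesis(statement=f"testable claim addressing: {gap}")

    def run_experiment(self, h: Hypothesis) -> dict:
        # Real system: launch a simulation or, someday, a robotic protocol.
        return {"tested": h.statement, "outcome": "simulated result"}

    def iterate(self, rounds: int = 3) -> Hypothesis:
        h = self.formulate(self.identify_gap())
        for _ in range(rounds):
            result = self.run_experiment(h)
            h.evidence.append(result)
            # Real system: revise or discard the hypothesis based on the
            # result; this sketch only accumulates evidence.
        return h


agent = ResearchAgent(corpus=["paper A", "paper B"])
print(agent.iterate(rounds=2).statement)
```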

Interestingly, this vision has implications that extend far beyond technology itself. If AI can conduct research independently, the dynamics between scientist and machine will change. The scientist will no longer be the primary executor of experiments, but rather a manager who formulates big questions and evaluates the credibility of answers. This is a fundamental change in role.

Technical challenges of an autonomous researcher

Naturally, the road from concept to reality is bumpy. The first challenge is reliability and accuracy. AI systems, even very advanced ones, are prone to hallucinations — generating information that sounds credible but is false. In the context of scientific research, where precision is a matter of life and death (literally, in the case of medical research), this poses a serious problem. A research agent cannot afford to make up results or references.
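One concrete guardrail against fabricated citations: before any reference leaves the agent, look its DOI up in the public Crossref index, which returns the record when the DOI exists and a 404 when it does not. The sketch below uses only the standard library and the real Crossref endpoint; the idea of wiring it into a research agent's pipeline is our assumption, not anything OpenAI has described.

```python
# A minimal anti-hallucination check for references: a DOI that Crossref
# has never heard of is very likely fabricated. Standard library only.
import json
import urllib.request
from urllib.error import HTTPError


def doi_exists(doi: str) -> bool:
    """Return True if the Crossref index knows this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
            # Sanity check: the returned record should echo the DOI back.
            return record["message"]["DOI"].lower() == doi.lower()
    except HTTPError as err:
        if err.code == 404:    # unknown to Crossref: flag for human review
            return False
        raise                  # any other failure should be loud, not silent


print(doi_exists("10.1038/nature14539"))         # a real Nature DOI -> True
print(doi_exists("10.99999/definitely.fake"))    # invented DOI -> False
```

A check like this catches invented references, but not subtler failures such as a real paper cited for a claim it never makes; those still require human review.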

The second challenge is multidisciplinary complexity. Real scientific problems rarely fit within a single discipline. Research on new materials may require knowledge from chemistry, physics, materials engineering, and computer science. The agent would need to not only possess knowledge from each of these fields, but also be able to integrate them in a meaningful way. This requires not only a larger model, but a fundamentally different approach to knowledge representation.

The third challenge is ethics and safety. What if an autonomous agent proposes an experiment that is dangerous? What if its research has unintended consequences? Who bears responsibility? These questions have no easy answers and will become increasingly pressing as technology develops.

Where AI can actually work independently

Realistically speaking, in the near term an autonomous researcher will function mainly in computational and theoretical research. These fields are relatively controlled: the agent can run simulations, analyze data, and test mathematical models without needing physical access to a laboratory or posing safety risks. Computational biology, theoretical physics, and the simulation side of materials science are areas where rapid progress can be expected.

In experimental research, which requires physically manipulating materials and instruments, the situation will be more complicated. Robotics must reach a level where it can operate laboratory equipment with the precision science requires, and we are not there yet. Even without full automation, however, the agent can play a key role in planning and analyzing experiments conducted by humans.

It is also important to understand that OpenAI's autonomous researcher will not compete with human scientists in creativity. At least not in the near term. Instead, it will function as a tool to accelerate routine work, identify patterns that a human might miss, and rapidly prototype ideas. It will be most valuable in supporting roles.

Psychedelics and a blind spot in science

While OpenAI builds the future, traditional science has discovered something surprising about the past: across decades of research on psychedelic substances, scientists failed to notice a fundamental methodological problem. Most psychedelic research, especially on the drugs' effects on the mind and cognition, relies on double-blind studies, in which neither the participant nor the researcher knows who received the substance and who received a placebo.

The problem is that psychedelics have unmistakable physical and psychological effects. A participant who has taken LSD or psilocybin knows it perfectly well. This means the study is not truly blind: participants know they received something active, and researchers observing their behavior can often infer it too. This introduces systematic bias that can distort results at scale.
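Unblinding of this kind can be measured. A common approach is to ask participants at the end of a trial to guess which arm they were in; if the guesses beat chance by a wide margin, the blind was functionally broken. The counts below are invented for illustration, and real trials use more elaborate instruments (for example the Bang blinding index), but the arithmetic conveys the point:

```python
# Toy blinding check: did participants guess their own arm better than
# chance? Counts are invented; a guess rate near 50% is consistent with
# an intact blind, while a rate like 92% means participants mostly knew.
import math


def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """One-sided binomial tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))


correct_guesses = 46   # hypothetical: participants who identified their arm
n_participants = 50

rate = correct_guesses / n_participants
tail = p_at_least(correct_guesses, n_participants)
print(f"correct guess rate: {rate:.0%}")          # 92%
print(f"P(this many correct by luck): {tail:.1e}")  # about 2e-10
```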

This discovery has serious implications for the entire body of research on psychedelics conducted over the past two decades. Studies that have been published in prestigious journals and have formed the basis for new therapeutic approaches may contain systematic errors. This does not mean psychedelics do not work — but it means our understanding of how they work and why may be far less certain than we thought.

The significance of methodological error

The history of psychedelics illustrates an important lesson: even the most rigorous scientific approach can have blind spots. The scientists who designed these studies were not careless. They knew the standards of clinical research and understood the importance of blinding. Yet something escaped them, something obvious once you see it, but invisible when working within a tradition that does not question it.

This has consequences for all of science. It suggests there may be other blind spots: other areas where an entire field operates on assumptions no one questions because they run too deep, because they seem too obvious. It could be a methodology all laboratories apply without thinking. It could be an assumption about how biological systems work. It could be something we cannot yet even imagine.

Here an interesting tension emerges with OpenAI's vision of an autonomous researcher. Such a system, if sufficiently intelligent, could potentially identify such blind spots — precisely because it would not be burdened by the tradition and convention that limits the thinking of a human scientist. But on the other hand, it could also perpetuate these errors if trained on existing literature containing these errors.

Implications for the future of scientific research

The combination of these two stories — OpenAI building an autonomous researcher and the discovery of a fundamental error in psychedelic research — suggests a future that will be far more complicated than the vision of full automation. AI will be a powerful tool, but it will require human oversight, critical thinking, and the ability to question assumptions.

In fact, perhaps the best role for autonomous research agents will be to serve as a tool to amplify human intuition, rather than replace it. An agent can search millions of articles and propose connections that a human would miss. It can conduct thousands of simulations and identify patterns. But the ultimate assessment of whether these patterns are worth investigating, or whether they are methodological artifacts or real phenomena — that will remain in the hands of the human scientist.

The challenge will be maintaining this balance. Too much reliance on AI can lead to perpetuating errors and blind spots. Too little engagement with AI may mean wasting the potential of the technology. Scientists of the future will need to be both deep experts in their fields and capable of critical collaboration with AI systems — understanding both the capabilities and limitations of these systems.

Competition in the race for autonomous researchers

OpenAI is not the only company working on these problems. Anthropic, Google DeepMind, and other AI laboratories are also exploring the possibilities of autonomous agents. However, OpenAI has certain advantages: enormous computational resources, access to the best talent, and importantly — experience in building systems that work at scale.

However, in this field computational resources are not everything. What will be key is understanding how science actually works — how scientists think, how they formulate questions, how they assess the credibility of results. This is knowledge that can be gained through close collaboration with real scientists, by studying the history of science, by deep immersion in various fields.

Companies that are able to build such partnerships and acquire such knowledge will have a significant advantage. This will not be a race where the one with the largest model wins. This will be a race where the one who best understands the reality of science wins.

Realistic expectations for the next decade

As for realistic expectations, over the next five to ten years we should expect autonomous researchers that are very good at specific, well-defined tasks. Searching the literature and identifying trends? Yes. Running simulations and data analysis? Absolutely. Formulating hypotheses that are genuinely novel, not just recombinations of existing ideas? That will be harder.

The history of psychedelics shows us that even if AI is very intelligent, it will need human input to avoid systematic errors. Humans bring to science not only computational power, but also intuition, creativity, and — paradoxically — the ability to question what seems obvious. These are abilities that are difficult to formalize and teach to a machine.

The most interesting scenario for the future is not one in which AI completely replaces scientists, nor one in which AI remains a marginal tool. It is a scenario in which AI and humans collaborate in a way that amplifies the capabilities of both sides — where AI handles computation, search, and analysis, and humans provide direction, critical evaluation, and creative intuitions. In such a model, the future of science will be no less interesting than the past — just different.
