Hope for new therapies on one hand, concerns about misuse on the other: AI models have made rapid advances in biology. But the same capabilities could also be used to design more dangerous viruses. Some experts see a need for action.
The potential to create new pathogens using artificial intelligence (AI) is raising concerns among health security experts, even as the technology offers promising avenues for developing new therapies. Warnings about this double-edged sword emerged at a pandemic preparedness event in Berlin in the fall of 2025, with Richard Hatchett, MD, CEO of the Coalition for Epidemic Preparedness Innovations (CEPI), emphasizing the need for increased defense readiness. The recent announcement by Stanford researchers that they had designed the first viruses using AI, he added, should give everyone pause.
AI Designs Targeted Bacteriophages
The research, currently available as a preprint, has already sparked discussion among experts. Several professionals in Germany anticipate a high-ranking publication in a peer-reviewed journal. The study details the creation of viruses using AI, but with a beneficial goal: designing bacteriophages, viruses that infect and kill bacteria, as a potential therapy against the growing threat of antibiotic resistance. The concern isn’t with the research itself, but with the potential for misuse of these technologies to create dangerous, previously unknown pathogens.
Central to the debate are genomic language models – AI tools trained on vast amounts of biological data that can be used to design entire genomes. These models offer hope for accelerating drug and vaccine development, but researchers have also warned about the potential for AI to influence epidemics and pandemics (2) and even create new bioweapons (3). “The same biological model that can be used to develop a benign viral vector for gene therapy could also be used to design a pathogenic virus that evades immunity induced by vaccines,” a group wrote in Science in August 2024 (2). The call to action: policymakers should establish carefully balanced regulations for particularly risky AI models while preserving autonomy for researchers.
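The genomic language models described above are large neural systems, but the underlying idea, learning the statistics of nucleotide sequences and then sampling new ones, can be loosely illustrated with a toy k-mer model. This is only a sketch for intuition: the training string, function names, and parameters are invented, and real models such as Evo are transformer architectures trained on trillions of base pairs, not simple frequency counts.

```python
import random
from collections import defaultdict

def train_kmer_model(sequence, k=3):
    """Count which nucleotide follows each k-mer in the training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(sequence) - k):
        counts[sequence[i:i + k]][sequence[i + k]] += 1
    return counts

def sample_sequence(model, seed_kmer, length, rng):
    """Autoregressively extend a seed k-mer -- a vastly simplified stand-in
    for how a genomic language model generates sequence."""
    seq = seed_kmer
    k = len(seed_kmer)
    for _ in range(length - k):
        nexts = model.get(seq[-k:])
        if not nexts:  # unseen context: stop early
            break
        bases, weights = zip(*nexts.items())
        seq += rng.choices(bases, weights=weights)[0]
    return seq

# Invented toy "training genome" for illustration only.
training = "ATGCGTACGTTAGCATGCGTACGGATCCATGCGTACGTTAGC" * 10
model = train_kmer_model(training, k=3)
generated = sample_sequence(model, "ATG", 30, random.Random(42))
print(generated)  # a short sequence mimicking the training statistics
```

The gap between this sketch and the real systems is precisely the point of the debate: modern models capture enough long-range structure to propose entire functional genomes, not just locally plausible strings.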
Moritz Hanke, Fellow at the Center for Health Security at Johns Hopkins University
Therapeutic Benefits Anticipated
The models Evo 1 and Evo 2, released in 2024 and 2025, were used by the research team to design the “bacteria eaters.” Researchers requested complete genome sequences with the desired host tropism from the models. From thousands of machine-generated designs, they selected approximately 300 and tested their production in the laboratory. They reported success: 16 bacteriophages with “significant evolutionary novelty” in sequences and structures were the result. A mixture of these successfully overcame resistance in three strains of *E. coli* in the lab, demonstrating the potential of the approach to create phage therapies against rapidly evolving bacterial pathogens.
What was once considered science fiction now appears technically possible. Experts say the warnings about AI-generated viruses were previously known within specialized circles, but the Stanford study is likely to be impossible for policymakers to ignore. Aldo Faisal, a computer scientist and AI specialist at the University of Bayreuth and a member of the German Ethics Council, believes that if the study passes peer review, it will demonstrate feasibility. “Once the study is complete, no one can say that such an undertaking is too complex or too far off. This is open source. Many researchers could replicate it if they wanted to.”
Many underestimated how quickly the technology would advance, according to Dr. Moritz Hanke, a German fellow at the Center for Health Security at Johns Hopkins University and part of the team that authored the Science article in 2024. “Some experts thought that generating small viral genomes with AI was five to ten years away. Now it turns out the technology is already here,” Hanke said. In his view, the preprint does not adequately address the risks it raises. “There is concern that such methods could be used in the future to generate particularly infectious or lethal pathogens,” Hanke added. The developers of Evo have excluded sequences of viruses that can infect eukaryotes from the training data for safety reasons. “The models are therefore less able to predict such potentially dangerous sequences,” Hanke explained. However, open models can be modified by users afterward, and critical viral data can be added again. This risk has been known for some time (2, 5). “We performed such a ‘finetuning’ and were able to predict possible immune escape variants of SARS-CoV-2,” Hanke said (6).
Evo 2 was trained on 9.3 trillion DNA base pairs. According to the developers, the genomic foundation model can perform prediction and design tasks for DNA, RNA, and proteins.
Potential for Misuse
Biological AI models like Evo 1 and 2 are not as easy to use as AI language models like ChatGPT and often require powerful computers, but from Faisal’s perspective, it’s a problem that they often lack security measures and are freely available online. “Once published, they can’t be taken back.” Or, as Faisal puts it, “The genie is out of the bottle.”
However, experts caution against oversimplifying the current situation. “It’s not as if you could create viruses at the push of a button,” says molecular biologist Dr. Jakob Wirbel from the Helmholtz Centre for Infection Research (Braunschweig). The Stanford experiment succeeded because a small, well-understood phage was chosen as a starting point; the result is essentially variations of it. “Creating other viruses would not be possible with current technology. I also believe that the application of AI-designed phages to humans is a long way off.”
The Robert Koch Institute (RKI) is monitoring the issue not only with regard to pathogens. It assumes that “in the medium to long term, there could be abusive use of AI tools in the creation of artificial peptides, proteins, and even pathogens,” a spokesperson explained. But how likely is that, and who would be interested? According to the RKI, such scenarios are currently secondary compared to attacks with conventional, already known biological agents. At the World Health Summit, however, Colonel Carlos Penha Gonçalves, a veterinary officer with the Portuguese Ministry of Defense, became relatively concrete. He described the scenario of a supervirus posing an existential risk as unlikely, although researchers have formulated such thought experiments: for example, a pathogen combining the rapid spread of measles, the mortality rate of smallpox, and the incubation period of HIV – or a bioweapon targeting specific populations (3). More plausible, Gonçalves said, are localized attacks, for example with antibiotic-resistant germs, on clinics or schools. The equipment needed for such attacks is no longer particularly sophisticated; the facilities of a medium-sized university lab or a small biotech startup would suffice. The literature also warns of possible laboratory accidents and other unintentional harm.
Experts also say that even with AI assistance, expertise is still needed to build new viruses. However, other widely available AI tools are lowering the barriers further: alongside genomic models, general-purpose language models can provide instructions for laboratory steps. Researchers also point to the possible interplay with AI-powered autonomous laboratory environments and robotics (7, 8), which would further reduce the time and expertise required.
“The possible scenarios are very serious, although it is unclear how likely they are. We should not let it come to that,” says Hanke. Those with an interest in such acts could include extremists aiming for the deindustrialization of societies. Alongside terrorist groups and individuals, experts see a major risk in state actors, since such attacks would be complex: the entire chain, from generating the sequences to producing and releasing the agents, would have to function.
Concerns About Blocking Research
In light of such scenarios, some voices warn against exaggerated, diffuse fears. This could lead to overregulation of these critical AI models and even more bureaucracy, hindering research, says systems biologist Prof. Dr. Michael Knop, the first spokesperson for the Center for Synthetic Genomics at the Universities of Heidelberg, Karlsruhe, and Mainz, established in 2024. There, work is being done on ways to more quickly and easily modify or completely recreate genomes. “The preprint on bacteriophages is just the beginning,” he says. The topic of AI and biological design is extremely important for the future, especially for Europe. “We must not fall behind internationally, both economically and in medical progress. Regulation must be reasonable and enable research, not prevent it,” Knop emphasizes. He calls for intensified research on these models: “The results are not yet as excellent as those from language models like ChatGPT.” This is because the information about biology with which the AI is trained is still very incomplete. “So far, we have only limited data on the significance of sequences.” The code of life still has many blind spots for AI.

Limitations of Current AI Models
The consequences: sequences generated today may superficially resemble genomes, but closer analysis reveals “a lot of biological nonsense,” as Knop describes it. Nor do the models yet appear capable of generating completely new genomes, experts emphasize. The results of the Stanford preprint do not fall into that category either, being essentially variations of a small, well-understood phage.
However, the rapid technological development also means that generating entirely new genomes is on the horizon, experts say. This is why regulation will inevitably face limits.

Addressing the risks requires a multi-faceted approach. Hanke advocates for targeted steps to mitigate risks, such as monitoring orders from companies that produce DNA segments used as starting material for creating viable organisms in the lab. “This allows you to see if someone is ordering sequences of highly pathogenic agents.” He also points to the need for international coordination and the existing international Biological Weapons Convention.
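The screening Hanke describes can be sketched in miniature: compare an incoming synthesis order against a watchlist of sequences of concern and flag high overlap. This is only an illustrative toy, real screening (for example, under the gene synthesis industry's harmonized protocols) relies on curated pathogen databases and alignment tools, and the watchlist entry, k-mer size, and threshold below are all invented.

```python
def kmers(seq, k):
    """Return the set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq, watchlist, k=12, threshold=0.3):
    """Flag an order if a large fraction of its k-mers matches any watchlist entry.

    Real screening pipelines use curated databases and sequence alignment;
    exact k-mer overlap is only a toy stand-in for that process.
    """
    order_kmers = kmers(order_seq.upper(), k)
    if not order_kmers:
        return False, None
    for name, ref in watchlist.items():
        overlap = len(order_kmers & kmers(ref.upper(), k)) / len(order_kmers)
        if overlap >= threshold:
            return True, name
    return False, None

# Invented example entry -- not a real pathogen sequence.
watchlist = {"toy_pathogen_gene": "ATGGCTAGCTAGGATCCGGTACCATGGCTAGCTAGGATCCGG"}
flagged, hit = screen_order("ATGGCTAGCTAGGATCCGGTACC", watchlist)
print(flagged, hit)  # the order matches the watchlist entry and is flagged
```

The threshold choice illustrates the policy trade-off Hanke alludes to: set it too strictly and legitimate research orders are blocked; too loosely and fragmented orders of dangerous sequences slip through.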
AI to Counter AI
Despite the concerns, some experts remain optimistic, arguing that AI could also provide tools to address the problem. However, countries must prepare and stay ahead of developments, Gonçalves said. He recommended working on detection systems and predictive tools to assess the pathogenicity of new pathogens. Researchers initiated a workshop on AI in molecular design in late 2025 to discuss responsible use of the technology and protective mechanisms, with recommendations for policymakers and committees in development.
The RKI emphasizes the importance of raising awareness among researchers about the dual-use risks of these methods – the possibility that useful research could be misused for harmful purposes. The institute also points to the strengthening of structures for preparation and response to bioterrorist attacks at the federal and state levels, with a focus on building core competencies to respond to potential future attacks using AI-generated synthetic biological agents. Research is underway to detect intentional use of such products.
Faisal cautions that regulation could also limit solution-finding opportunities. “It’s like a chess game,” he says. He stresses the importance of academic freedom and expects that the preprint will lead to studies on countermeasures against AI-generated viruses. “In other fields, such as cryptography, clearly identified risks have led to improvements.”
However, Faisal emphasizes the need for Europe to achieve sovereignty: “We need AIs that we control ourselves. So that, for example, someone cannot simply shut them down or demand exorbitant costs in a pandemic.”