AI and Biotechnology: Navigating Opportunities and Risks in the Bioeconomy
By Allison Proffitt
August 27, 2024 | Last week’s Bioprocessing Venture, Innovation, and Partnering Conference brought together thought leaders from across the bioeconomy to discuss the rapidly evolving landscape of artificial intelligence (AI) in biotechnology.
Panel moderator Lori Ellis, BioSpace, began by defining the space. She asked Sarah Glaven, principal assistant director, biotechnology and biomanufacturing, White House Office of Science and Technology Policy, to help clarify the White House’s recent Bioeconomy Executive Order, which aims to stimulate economic growth by focusing on protection, security, and ensuring AI’s benefits outweigh the costs to the American people. While the Bioprocessing Summit focuses on the pharmaceutical industry and biomedical space, “the bioeconomy executive order, as we call it,” Glaven said, “touches on everything from health to agriculture to climate to economic security, social security, industrial chemistry.”
AI is impacting the broader bioeconomy as well as nearly every other aspect of our economy. Sherrie Eid, global head of real world evidence & epidemiology, SAS Institute, stressed that just because AI can be applied in these fields, doesn’t necessarily mean it should be. She urged stakeholders to ask critical questions about the potential impacts—both positive and negative—of AI technologies. “We might not always know. You can’t always anticipate all of the potholes on your trip, but what are you going to do when you hit one?” she asked. “If I impact a group or a subgroup of people… in a negative way, how do I mitigate it?”
From the pharmaceutical industry’s viewpoint, Nagisa Sakurai, senior investment manager, Astellas, acknowledged the undeniable influence of AI on drug discovery. Pharma companies are increasingly exploring AI-based systems to analyze internal data more efficiently, potentially leading to new drug designs that surpass human imagination. However, Sakurai also voiced concerns about the risks associated with AI, particularly since the technology is developed externally. “We are still debating internally how much we should invest in AI-driven drug discovery, given the uncertainties around the risks. We are not sure how much we can trust it,” she noted.
Michael Walker, executive director, life sciences supply chain, Microsoft, agreed. Walker observed a surge in interest due to generative AI (genAI), a concept that was challenging to discuss five years ago but has since become more “consumerized.” GenAI is at the peak of the Gartner Hype Cycle, he quipped. The technology itself is not new; Walker dated it back a century. But the current convergence of accessible data, cloud storage, and high-performance computing driving the advancements in large language models (LLMs) has created a genAI tipping point.
Walker warned of the risks tied to this rapid adoption, particularly in misunderstanding the different technologies with their various potential and limitations. “You can’t treat machine learning the same as generative AI,” he said. “Machine learning is making a conclusion based on existing data. Generative AI is creating synthetic data—data that never existed at the beginning—that’s then interpreted by a large language model. You have to treat these things very differently.”
“When I heard ‘risk’, I knew it had to come to me. That was my cue,” said Colin Zick, partner at Foley Hoag. But perhaps surprisingly as the panel’s attorney representative, Zick took a more pragmatic and reassuring approach. “There are risks in everything,” he said. “And the way that you go about managing those risks is no different here from a process standpoint than the way you manage any risks.”
Define management’s comfort level with the risks. Set limits. Offer training. Perhaps buy insurance, Zick said. From a research perspective, Eid added, contextualize your data and your tools, and find standards.
But AI is just another tool, Zick argued. “I think it’s important to sort of demystify… the notion that it’s somehow different.”
Addressing the Workforce and Educational Gaps
The panelists touched on the significant challenges in workforce development and education in the context of AI. Both Glaven and Eid pointed out the growing difficulty in distinguishing between high-quality and poor data outputs from AI systems. Eid shared concerns about the younger generation’s heavy reliance on AI, noting that many lack the analog experiences necessary to critically evaluate AI-generated content. “Our AI literacy and analytical literacy and data literacy in this country—globally even—is very, very concerning, and it makes us very, very vulnerable,” she warned.
Zick reinforced the need for training programs that equip the workforce to recognize anomalies in AI-generated data, and Walker highlighted the dilemma facing educational institutions (and parents!) in preparing the next generation for a world increasingly dominated by digital technologies.
“Humans are analog trying to live in a digital world,” Walker said. “Organizations cannot forget that that’s the case.” He recommended that organizations strategically develop and retain talent with this reality in mind.
Regulatory and Cybersecurity Considerations
Finally, the panel delved into the regulatory landscape, particularly the potential impact of the European Union’s AI Act on the US. Zick predicted a “Brussels effect,” similar to what was seen with GDPR, though he anticipates greater pushback at the institutional level within the US. He noted that while state legislatures are attempting to pass AI-related laws, they are facing resistance from federal authorities, leading to increased uncertainty in regulation.
The regulatory space is “very, very challenging,” Glaven agreed, but said it was a key policy aspect of the executive order: “to streamline and clarify biotechnology regulation.”
Pharma, Sakurai said, will have to take some risks on new, emerging technologies, though she acknowledged that large pharma is, for the most part, happy to let technologies mature and navigate the regulatory environment before jumping in.
In terms of cybersecurity, Walker expressed concerns about AI’s potential to exacerbate threats such as phishing attacks and patent fraud. He compared generative AI to a “college intern you’ve hired. They’re pretty smart, they’ve got the education. But they don’t know your business fully yet,” underscoring the need for robust contingency plans and accountability measures. Zick echoed this sentiment, emphasizing the importance of basic data hygiene and preparedness in the face of inevitable cybersecurity challenges.
It is clear that while AI offers immense opportunities for the bioeconomy, collaboration between industry, regulators, and educators is crucial to navigate this complex landscape. Walker summarized the landscape: “While there is an enormous amount of opportunity, there’s also an enormous amount of risk that we need to proactively and deliberately manage.”