Legislators and leaders worldwide are rapidly considering and passing regulations affecting the development and use of artificial intelligence. As more pharma executives consider using AI and begin implementing the technology, they must understand how future regulations may affect their current or planned uses.
The AI regulatory landscape is rapidly evolving, but that does not mean pharma decision-makers should avoid AI — instead, leaders should remain ready to adapt to new legislation when necessary.
Understanding the EU’s AI Act
Regulators in the European Union aim to be the first to establish a legal framework for general-purpose AI. These systems handle tasks such as translation, image or speech recognition, and answering questions.
This regulation will come through the AI Act, which EU member states unanimously approved in February 2024. Next is a plenary vote by the European Parliament in April 2024.
The AI Act recognizes that most applications pose few or no risks, but some introduce potential threats regulators want to address now to avoid negative consequences. The act addresses these concerns with a four-tier, risk-based framework that bans AI applications deemed unacceptably risky or posing a clear safety threat.
Companies using high-risk AI applications must meet strict obligations such as risk assessments and mitigations, appropriate human oversight, activity logging, and clear instructions. Applications affecting pharma manufacturers include:
- The recruitment, hiring and management of workers
- AI used as a product safety component
- Education or vocational training that affects someone’s profession or life
Once a business develops a high-risk AI application and meets all AI Act requirements, the application’s details go into a dedicated database for post-market monitoring. The company’s representatives also sign a declaration of conformity and place a Conformité Européenne (CE) mark on the product. If the system undergoes significant changes, it must meet all of the act’s requirements again and obtain a new CE mark.
AI applications in the limited-risk category must meet specific transparency obligations. If a worker or customer uses an AI chatbot, the system must inform them it is a machine-based interaction, giving the person the information to decide whether to continue.
Finally, the AI Act does not restrict or impose obligations on applications in the minimal risk group. Most current uses of AI in the EU fall into this category.
Preparing for the AI Act
Experts expect the AI Act to enter into force in early 2024 if passed. Once it does, organizations will have two years to comply, although there are a few exceptions.
For example, bans on certain practices will take effect after a six-month grace period. Additionally, rules governing new general-purpose AI models will apply one year after the act enters into force, and those same rules will only apply to existing models two years after enactment.
Once the AI Act comes into force, it will apply to:
- AI vendors located within the EU
- Certain AI products made elsewhere but used in the EU
- Everyone interacting with AI products within the EU
Executives at pharma enterprises in the EU should start considering the probable risk levels of their current AI applications and how the new requirements might affect their plans to implement the technology. They can also communicate with their vendor network and learn what those organizations will do to ensure compliance with the act at the appropriate time.
The regulatory landscape in the U.S.
So far, the United States lacks federal legislation for AI. However, in October 2023, President Biden took a significant step by signing an executive order that includes the following safety and security requirements:
- Developers must share safety test results and other relevant data with government officials
- Authorities create standards, tools and tests to ensure safe, secure, and trustworthy AI systems
- Agencies funding life science projects establish standards requiring recipients to avoid using AI to engineer dangerous biological materials
- The Department of Commerce creates best practices for detecting fake AI-generated content and authenticating reputable material
- Authorities build on the existing AI Cyber Challenge and develop artificial intelligence tools to detect and fix critical software vulnerabilities
- The National Security Council and White House Chief of Staff collaborate on a national security memorandum, including actions directing how the U.S. military and intelligence communities use AI
It is too early to say how the results of these actions might affect pharma manufacturers or how long it could take to see the full effects of the changes.
Additional U.S. directives
Other directives in the executive order urge Congress members to pass bipartisan privacy legislation. The outcomes could impact pharma decision-makers, especially if current AI applications use consumers’ data for marketing or research.
The most relevant part of this executive order to pharma manufacturers is a section about advancing AI to find new, affordable and effective drugs. It stipulates that the Department of Health and Human Services will establish a safety program to receive reports of potentially harmful uses of AI in health care and to address those issues.
Another executive order section mentions applying automation to promote innovation and competition. A part potentially applicable to the pharma industry discusses expanded grant programs for AI research in health care and other vital areas.
As pharma companies wait to see the outcomes of this executive order, they should consider which AI applications will likely remain permissible and profitable under new potential regulations. One possibility is concentrating on options that accelerate workflows without threatening privacy.
For example, AI can be used in tissue culturing, which helps researchers study drugs’ effects in controlled environments and pursue personalized medicine. In 2022, a team applied AI to tissue culturing to find the optimal conditions for growing replacement retina layers. This approach allowed autonomous screening of 200 million possible conditions for the best results. Similar efforts could save time for lab workers while posing minimal or no patient risks.
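The published pipeline is not detailed here, but conceptually, autonomous screening means enumerating or sampling candidate culture conditions and ranking them with a predictive model. The sketch below is a minimal, purely hypothetical illustration of that idea; every factor name and the scoring function are stand-ins, not the study’s actual method.

```python
# Toy illustration of autonomous condition screening.
# All factor names and the scoring function are hypothetical stand-ins.
import itertools
import random

# Hypothetical factors a culture-condition search might vary.
growth_factors = ["GF-A", "GF-B", "GF-C"]
concentrations = [0.1, 0.5, 1.0, 2.0]      # arbitrary units
temperatures = [35.0, 36.5, 37.0, 38.0]    # degrees Celsius
media = ["medium-1", "medium-2", "medium-3"]

def predicted_yield(condition):
    """Placeholder score; a real pipeline would use imaging data
    or a trained predictor instead of this random stand-in."""
    random.seed(hash(condition))  # deterministic per condition
    return random.random()

# Enumerate every combination and rank by the predicted score.
all_conditions = list(itertools.product(growth_factors, concentrations,
                                        temperatures, media))
ranked = sorted(all_conditions, key=predicted_yield, reverse=True)

print(f"Screened {len(all_conditions)} combinations")
for condition in ranked[:3]:
    print(condition, round(predicted_yield(condition), 3))
```

In practice, a real system would replace the placeholder score with lab measurements or a trained model, and it would sample the space intelligently rather than exhaustively once the number of combinations reaches the hundreds of millions.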
Emerging developments at state and city levels
The regulatory situation in the U.S. is less consistent at the state level. Legislators have enacted 29 bills in 17 states to regulate AI since 2019, with 12 of those focused on data privacy and accountability. Relatedly, Texas, Vermont, New York and Illinois have legislation requiring collaboration on AI’s development, design and use. Vermont, California, Louisiana and Connecticut have additionally passed laws protecting people from the unintended but foreseeable consequences of unsafe AI systems.
Data privacy is another concern of legislators overseeing AI developments. Eleven states have laws safeguarding individuals from abusive data practices and requiring companies to let people control how AI systems collect and use their information. Some legislators also want to enact requirements that people be told when and why entities use AI systems. Authorities in Maryland, Illinois, California and New York City have passed such laws.
Since evidence shows AI systems can contain biases, legislators in Colorado, California and Illinois have established laws forbidding discriminatory applications and promoting the development of equitable AI tools. Twelve states (Virginia, Washington, Tennessee, Texas, California, Delaware, Indiana, Iowa, Oregon, Montana, Connecticut and Colorado) have regulations requiring AI developers to comply with all applicable laws and face consequences if they do not.
In July 2023, Tempe, Arizona, officials adopted an ethical AI policy believed to be the state’s first. Similarly, other U.S. cities have established guidelines for municipal workers using tools such as ChatGPT. A recent study found a significant percentage of employees input sensitive information into generative AI tools despite knowing the risks. Since the pharma industry handles so much proprietary information, its leaders may need to establish similar guidelines for employees’ use of generative AI.
A rapidly evolving situation
Pharma manufacturing executives should remain aware of how things progress in these locations and others. Since many businesses in the pharma industry operate nationwide, applying patchwork-style internal rules to comply with external regulations could quickly become inefficient.
The best approach may be developing enterprise-wide policies aligning with current legislation. After that, decision-makers must ensure all future applications stay compliant. Otherwise, ongoing challenges could restrict successful AI outcomes and make costly adverse effects more likely.
These details show how difficult it is to predict AI regulations and how they will impact pharma manufacturers. Leaders should stay informed about developments and remain as agile as possible in their AI usage, including understanding how future laws could restrict or forbid certain uses of the technology. These realities do not mean decision-makers should avoid artificial intelligence; rather, they should be ready to adapt to new legislation.