Companies in the life sciences industry are the target of nearly one-third of all cybercrime attacks. The pharma industry in particular is an attractive target for cybercriminals due to the sensitive nature of the data its companies handle (including clinical trial results, patient data and intellectual property).
The average cost of a breach in the pharma industry is over $5 million. To compound matters, it can take 257 days on average for companies to identify a breach.
Implementing good cybersecurity hygiene practices can help minimize the damage (organizations with good hygiene see a 50 times lower frequency of destructive ransomware events), but as the volume and complexity of cyberthreats increase, traditional cybersecurity solutions may no longer suffice.
Artificial intelligence has exploded onto the scene in recent months as a powerful cybersecurity tool across nearly all industries. Fortunately, the pharma industry has already made use of AI and machine learning in its science. Now the industry can put AI to work for cybersecurity as well.
AI-enhanced cybersecurity solutions can offer distinct benefits to the pharma industry and enable companies to further protect themselves from cyberthreats.
Understanding the threat
According to the Black Kite Third-Party Breach Report 2022, health care accounts for at least 33% of all cyberattacks.3 The report also found that most cyber breach events were ransomware attacks. A newer tactic on the ransomware scene is ‘cryptoviral extortion,’ in which cybercriminals copy data and threaten to release it on the dark web if payment is not made.
Cybercrime exists because it works, and cybercriminal organizations are structured like multinational companies. They are large operations based in difficult-to-reach parts of the world, with sales quotas and a nearly 24/7/365 workforce, and they continually target companies armed with detailed research reports.
With a cybercrime success rate of nearly 57% in life sciences, it is fast becoming a matter of when, not if, your pharma company will be targeted — and that targeting can be painfully precise.
These cybercriminal organizations know exactly what your company could pay out and how much it would cost to remediate a breach on your own versus paying a ransom, and they price their ransom accordingly. They are driven to succeed because their own reputations are at stake: if they fail to honor their end of the bargain after payment, their credibility as cybercriminals is diminished and they lose potential future payments.
AI in pharma cybersecurity
Because of the sensitivity of the data pharma companies handle, a breach can have severe ramifications. Financial losses, reputational damage and regulatory sanctions are just a few of the potential consequences. Small and mid-sized companies may not survive a catastrophic cyberattack.
The increase in phishing, malware and ransomware attacks in the pharma industry has shifted the focus to implementing more robust measures to protect against these threats. AI can transform cybersecurity by providing faster threat detection, more accurate analysis, and automated response capabilities.
Traditional cybersecurity solutions rely on static rule sets and signature-based systems, which limits their ability to detect and respond to complex and changing threats. AI cybersecurity solutions, on the other hand, can leverage machine learning algorithms to help identify patterns and anomalies in data that may indicate a threat. AI systems can analyze vast amounts of data in real time and detect threats that may previously have gone unidentified, and they can adapt to changing threats by continuously learning and updating their models.
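To make this concrete, here is a minimal, hypothetical sketch of the kind of anomaly detection described above. It uses scikit-learn's IsolationForest on made-up traffic features (bytes sent, failed logins, requests per minute); the feature names, values and contamination setting are illustrative assumptions, not a production detection pipeline.

```python
# Minimal sketch of ML-based anomaly detection on network telemetry.
# The feature names and data here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: [bytes_sent, login_failures, requests_per_min]
normal = rng.normal(loc=[5_000, 1, 60], scale=[1_000, 1, 10], size=(1_000, 3))

# A few anomalous events: an exfiltration-like transfer, a burst of failed logins
anomalies = np.array([[250_000, 0, 55], [4_800, 40, 300]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for an outlier and 1 for normal traffic;
# both anomalous events should be flagged as -1
print(model.predict(anomalies))
print(model.predict(normal[:3]))
```

In practice the input would be real telemetry (logins, network flows, endpoint events) rather than simulated numbers, but the pattern is the same: learn what normal looks like, then flag what deviates from it.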
A few of the benefits that AI cybersecurity solutions can provide include:
- Faster threat detection and response: AI-enhanced cybersecurity solutions can detect threats much faster than traditional methods. With the ability to analyze large datasets in real time, AI can identify patterns and anomalies that could indicate a threat. Faster identification also shortens response time, helping prevent or minimize potential damage.
- Improved accuracy: AI-enhanced cybersecurity solutions leverage ML algorithms to produce fewer false positives. This can help eliminate needless investigation time by raising the proportion of flagged alerts that are valid threats worth further investigation.
- Automated response capabilities: AI can help automate responses to cyberthreats without human intervention, as sketched in the example following this list. This helps reduce response time and further minimize potential damage.
- Better resource utilization: AI can help optimize cybersecurity resources as well. By analyzing data and prioritizing threats, AI can help focus cybersecurity teams on responding to the most critical issues. This helps in allocating resources more effectively and reduces the risk of a data breach.
- Adaptability to emerging threats: AI cybersecurity solutions continuously learn and update their models, which allows for rapid detection and response. This helps teams stay ahead of cybercriminals and minimize risk.
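As a rough illustration of the automated response and prioritization points above, the sketch below maps a model-assigned threat score to an escalating set of actions and handles the highest-risk alerts first. The score thresholds, host names and actions are hypothetical; a real playbook would be tuned to the organization and its regulatory obligations.

```python
# Sketch of score-driven triage and automated response.
# Thresholds, alert fields and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    score: float  # model-assigned threat score, 0.0 - 1.0

def respond(alert: Alert) -> str:
    """Map a threat score to an automated action."""
    if alert.score >= 0.9:
        return f"isolate host {alert.host} and page on-call analyst"
    if alert.score >= 0.6:
        return f"block outbound traffic from {alert.host} and open a ticket"
    return f"log alert for {alert.host} and continue monitoring"

alerts = [Alert("lab-workstation-12", 0.95),
          Alert("hplc-controller-03", 0.71),
          Alert("office-laptop-88", 0.22)]

# Highest-risk alerts are handled first
for a in sorted(alerts, key=lambda x: x.score, reverse=True):
    print(respond(a))
```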
Challenges of AI-enhanced solutions
While there are many benefits, as with any new technology, some challenges remain.
To begin with, AI is new. On the surface, it seems straightforward; under the covers, however, it is quite complex and difficult to understand. This can make it challenging for teams to understand how these systems work and how AI arrives at its decisions, and the lack of transparency can make it difficult to assess the effectiveness of these systems. Notably, even the individuals who developed this technology do not fully understand how some AI interpretations are produced. Hence, we do not fully understand why AI hallucinates.
Much of what we know about today’s AI traces back to a 2017 Google research team paper.4 The paper introduced a model architecture that relies entirely on an attention mechanism to draw global dependencies between input and output. But further investigation is needed before we can fully understand how AI operates when left on its own.
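For readers curious about what that attention mechanism looks like in practice, here is a minimal numpy sketch of scaled dot-product attention, the core operation the 2017 paper describes. The dimensions and random inputs are placeholders chosen only to show the shape of the computation.

```python
# Minimal numpy sketch of scaled dot-product attention,
# the core operation behind transformer-style models.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))   # one value vector per key

print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Each output position is a weighted blend of every input value, which is what lets the model draw those global dependencies regardless of distance in the sequence.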
There is also a chance of false positives, where the system detects a threat when none exists, and of false negatives, where the system fails to detect a valid threat. These errors erode trust in the system and make it harder for teams to rely on it. Fortunately, proper tuning and modeling can help reduce the occurrence of both.
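The trade-off behind that tuning can be shown with a small, synthetic example: moving a single detection threshold shifts errors between false positives and false negatives. The scores and labels below are synthetic placeholders used only to illustrate the effect.

```python
# Sketch of the false-positive / false-negative trade-off when tuning
# a detection threshold; scores and labels are synthetic.
import numpy as np

rng = np.random.default_rng(1)
# 1 = real threat, 0 = benign; model scores overlap, as they do in practice
labels = np.array([0] * 900 + [1] * 100)
scores = np.concatenate([rng.normal(0.3, 0.15, 900),   # benign events
                         rng.normal(0.7, 0.15, 100)])  # real threats

for threshold in (0.4, 0.5, 0.6):
    flagged = scores >= threshold
    false_pos = np.sum(flagged & (labels == 0))   # benign events flagged
    false_neg = np.sum(~flagged & (labels == 1))  # threats missed
    print(f"threshold={threshold}: false positives={false_pos}, false negatives={false_neg}")
```

Lowering the threshold catches more real threats at the cost of more noise; raising it quiets the alerts but lets more threats slip through. Tuning means picking the point on that curve the organization can live with.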
Because AI trains on existing data, biased training data can be a significant problem. If the training data is biased, the system produces biased results, which can lead to unfair or discriminatory outcomes and make it difficult to trust the system. Work is being done to combat bias in AI: most large language models, a type of AI algorithm, are fine-tuned to account for bias. But this area needs further development to keep bias to a minimum.
It’s also important to note that AI requires specialized skills to develop and maintain. These skills may not be available within the current workforce. Some time and retraining may be required to build these skill sets. This can delay the adoption of AI-enhanced cybersecurity solutions and increase the cost of implementation.
Criminals are also using AI
With the advent of any new technology, there always exists the risk that such technology will be used for harm. Cybercriminals are already developing new and improved methods to make use of AI. By understanding some of these, we can better prepare to defend and protect against such attacks.
To that end, some of the uses that cybercriminals are making of AI include the following:
- Automated attacks: Cybercriminals can use AI to automate attacks at a larger scale and much faster than before, making it harder for defenders to keep up with the frequency of attacks.
- Sophisticated phishing: Phishing is already the number one attack method. AI-powered phishing can be more sophisticated, crafting personalized emails in nearly perfect language. This makes recipients more susceptible to unintentionally clicking links and attachments.
- Deepfakes: Cybercriminals can make use of new and improved deepfake capabilities to impersonate individuals to carry out fraudulent or other malicious activities. Deepfakes can manipulate video and images or impersonate a known voice. This realistic content can be used for social engineering attacks.
- Adversarial attacks: Attackers can use AI to modify images or data in ways that fool AI-powered systems into making incorrect decisions, essentially pitting one AI system against another. These attacks can be difficult to detect and can undermine trust in AI-enhanced cybersecurity solutions.
- Data poisoning: Since AI systems are only as good as the data they are trained on, cybercriminals can attempt to manipulate the training data to introduce bias or other vulnerabilities, causing the system to produce false results or poor decisions that attackers can then exploit (a minimal sketch of this appears after this list).
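The sketch below illustrates data poisoning in its simplest form, label flipping, on a synthetic dataset: mislabeling a modest fraction of the training examples typically degrades a simple classifier's accuracy. The dataset, model and poisoning rate are illustrative assumptions only; in a real attack, the poisoned data would be injected into whatever telemetry or feedback loop the model trains on.

```python
# Sketch of label-flipping data poisoning on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Flip 15% of the training labels to simulate poisoning
rng = np.random.default_rng(0)
poisoned_idx = rng.choice(len(y_train), size=int(0.15 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]

poisoned = LogisticRegression(max_iter=1_000).fit(X_train, y_poisoned)

# Compare accuracy on held-out data with and without the poisoned labels
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```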
Playing the long game
Pharma companies face significant cybersecurity risks due to the sensitive nature of the data they handle, the high value of that data, and increased cybercriminal targeting due to the industry in which they operate. Traditional cybersecurity solutions may no longer be sufficient to protect against the increased complexity and frequency of evolving threats.
There are many benefits to implementing AI-enhanced cybersecurity solutions, including faster threat detection, improved accuracy and automated response capabilities. These solutions are already being implemented in many pharmaceutical companies in various ways, including threat detection, behavioral analysis, fraud detection, malware detection and vulnerability management.
By understanding how cybercriminals may also make use of AI, companies can better prepare for and defend against such attack methods.
AI is a long game. We are only beginning to understand the ways in which we can make use of this technology. Carefully considering the challenges and continuing to innovate ways in which AI can enhance cybersecurity solutions can benefit all companies, especially high-target industries such as pharmaceuticals.