New NIST Publication Identifies AI Cyber Attack Threats

Cloudticity, L.L.C.

A publication released by the National Institute of Standards and Technology (NIST) can help organizations prevent and mitigate attacks on AI systems.

Overview 

For many healthcare organizations, artificial intelligence has become a key driver of innovation in patient care. AI systems comprise data, models, and processes for training, testing, and deploying machine learning models. Many generative AI systems are connected to private documents and data and are used for coding, documentation, and user experience.

With all of the benefits of AI, its close connection to data and its ability to operate independently create unique security challenges. For those working in healthcare with private data, the stakes, and the rewards, are high.

NIST has found that AI can malfunction or be misdirected when exposed to untrustworthy data. Malicious attackers realize this too, and many have begun experimenting with strategies for exploiting it.

With this in mind, NIST released a 107-page document, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, that identifies types of attacks, strategies for mitigation, and ways companies can protect their machine learning models and data in a time of significant technological advancement.

The document underscores a rapidly changing environment in which uncertainty is prevalent and there is no quick-fix solution.

Understanding Adversarial Machine Learning

Adversarial machine learning (AML) refers to attacks in which a machine learning system is fed untrustworthy data or manipulated into completing a task incorrectly.

Any system utilizing AI must be trained on large quantities of data, which allows it to learn patterns and predict how to respond to various situations. Even after the training period, AI can continue to evolve and refine its behaviors and predictions.

When AI is fed malicious data, it can be trained to act in unintended ways. NIST gives the example of chatbots, which could be trained to respond to users with abusive language if their operating guardrails are circumvented.

While a chatbot is one thing, AI is being used in myriad ways, from driving cars to diagnosing patients to providing remote treatment.

Because AI frequently interacts with public websites and pulls data from various sources, malicious actors are beginning to interfere with its learning process.

Types of Attacks

The document outlines four attack types that malicious actors may use, along with the unique challenges each one poses and how organizations can prevent or mitigate it.

Evasion attacks

Evasion attacks occur after an AI system is deployed and aim to change how the system responds to an input. Attacks like these have been found as far back as 1988. They are commonly found in AI systems that rely on imaging: a system trained to identify specific faces could, for example, be misled if someone put on glasses.

These attacks are challenging to prevent because there are a variety of strategies adversaries could use to change how AI responds to an input. 

Some effective strategies include adversarial training, in which the model is trained on adversarial examples; randomized smoothing, which allows the model to keep producing its most likely predictions despite perturbations in the input; and formal verification, whereby small neural networks are certified for robustness.
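To make adversarial training concrete, here is a minimal sketch in PyTorch. The model, optimizer, and epsilon value are illustrative assumptions, not details from the NIST report; the idea is simply to generate a perturbed copy of each input with the fast gradient sign method (FGSM) and train on clean and perturbed inputs together.

```python
# A minimal adversarial-training sketch using the Fast Gradient Sign
# Method (FGSM). All names here (model, optimizer, epsilon) are
# illustrative assumptions, not details from the NIST report.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.1):
    """Perturb x in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, clamped to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """Train on an even mix of clean and adversarial inputs."""
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design tradeoff is that training on perturbed inputs hardens the model against small input manipulations, usually at some cost to accuracy on clean data.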

Poisoning attacks

Poisoning attacks take place during the training process, when corrupted data is introduced. They are common in cybersecurity settings and can impact network intrusion detection, spam detection, malware classification, and more. An attacker may, for instance, inject mislabeled or corrupted samples into the training set, disturbing the model's ability to classify items.

These attacks can be easier to detect if the learning model is regularly monitored. 

Strategies to mitigate these attacks include training data sanitization, where poisoned or suspicious data samples are detected and removed before learning occurs, and robust training, which uses learning methods designed to limit the influence of corrupted samples.
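As an illustration of data sanitization, the sketch below drops training samples that sit unusually far from their class centroid. The feature representation, z-score threshold, and the assumption that poisoned points look like outliers are all simplifications; production pipelines use more sophisticated detectors.

```python
# A minimal training-data sanitization sketch: flag and drop samples
# that are unusually far from their class centroid. Threshold and
# feature space are illustrative assumptions.
import numpy as np

def sanitize(features, labels, z_threshold=3.0):
    """Return a boolean mask of samples to keep for training."""
    keep = np.ones(len(features), dtype=bool)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        # Drop samples more than z_threshold standard deviations out.
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        keep[idx[z > z_threshold]] = False
    return keep
```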

Other types of poisoning exist as well, including targeted poisoning, backdoor poisoning, and model poisoning. These require different strategies, but the effect remains similar: the learning process is disrupted, changing the model's ability to interpret data as intended.

Privacy attacks

A privacy attack occurs during deployment. In these attacks, a malicious actor attempts to learn sensitive information about the AI system or the data it was trained on in order to exploit it. An actor may probe the model to uncover potential vulnerabilities and then attempt to exploit or confuse it.

One mitigation strategy, called differential privacy, makes it mathematically difficult for an attacker to learn significant information about any individual record in the AI's data. Organizations utilizing this strategy must set privacy parameters that bound how much outside actors can learn about the system or its data.
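Here is a minimal sketch of one differential privacy building block, the Laplace mechanism, which answers a counting query with calibrated noise. The epsilon and sensitivity values are illustrative assumptions; in practice they are chosen to fit an organization's privacy budget.

```python
# A minimal sketch of the Laplace mechanism, a building block of
# differential privacy. Epsilon and sensitivity are illustrative
# assumptions chosen for readability.
import numpy as np

def private_count(records, predicate, epsilon=0.5, sensitivity=1.0):
    """Answer a counting query with epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    # Noise scaled to sensitivity/epsilon hides any single record.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: roughly how many records mention diabetes, without
# revealing whether any particular record does.
# private_count(patient_notes, lambda note: "diabetes" in note)
```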

NIST also notes another strategy, limiting user queries, as a possible way to prevent malicious actors from learning enough to exploit an AI system.
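One simple way to implement query limiting is a per-user sliding window, sketched below. The window size and query limit are illustrative assumptions; the point is that throttling raises the cost of repeatedly probing a model.

```python
# A minimal per-user query limiter to slow model probing and
# extraction attempts. Window and limit are illustrative assumptions.
import time
from collections import defaultdict, deque

class QueryLimiter:
    def __init__(self, max_queries=100, window_seconds=3600):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)  # user_id -> timestamps

    def allow(self, user_id):
        now = time.monotonic()
        q = self.history[user_id]
        # Evict timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False
        q.append(now)
        return True
```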

Abuse attacks 

These attacks occur when an actor repurposes a system to complete malicious objectives. They typically target generative AI and can involve malicious actors using AI for fraud, malware, and manipulation.

NIST suggests reinforcement learning from human feedback and fine-tuning of the model, filtering inputs, and detecting harmful outputs as mitigation strategies.
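As a rough sketch of input filtering and output detection, the example below wraps a hypothetical model_generate function with pattern-based checks. The blocklists are placeholders; real deployments typically use trained safety classifiers rather than regular expressions.

```python
# A minimal guardrail sketch: screen prompts before generation and
# outputs afterward. model_generate and the patterns are hypothetical
# placeholders, not a real library API.
import re

BLOCKED_INPUT_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
]
BLOCKED_OUTPUT_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., SSN-like strings
]

def guarded_generate(prompt, model_generate):
    """Filter the prompt, call the model, then screen its output."""
    if any(p.search(prompt) for p in BLOCKED_INPUT_PATTERNS):
        return "Request blocked by input filter."
    output = model_generate(prompt)
    if any(p.search(output) for p in BLOCKED_OUTPUT_PATTERNS):
        return "Response withheld by output filter."
    return output
```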

Strategy Limitations

NIST notes that several challenges can make it difficult for organizations to effectively mitigate AML; many strategies are easier said than done.

NIST notes that scale is a major challenge: many training programs require massive amounts of data, and few report where their data comes from. This can make it difficult to detect whether data poisoning has occurred.

The sheer variety of AI models can also make it difficult to determine which strategies will improve robustness. Different models call for different strategies, and different limitations may apply.

What the Experts Say

In a statement, NIST computer scientist Apostol Vassilev said he hopes those in the AI community will “come up with better defenses” based on the report’s findings.

Co-author and professor at Northeastern University, Alina Oprea, said, “Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities…Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.” 

Vassilev added, “Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences…There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”  

The Big Takeaway

While the security challenges facing AI are significant, they are top of mind for cloud service providers. Just recently, Amazon Web Services (AWS) held a conference where security was a frequent topic of conversation.

Read more: AWS re:Invent 2023: Top Takeaways for Healthcare 

AWS uses internal data only, significantly mitigating the risk of poisoning attacks. It also employs safeguards to ensure that only qualified individuals can access certain tools and information.

Despite these concerns, AI is a powerful tool in the healthcare industry with lifesaving potential, from generating reports to helping diagnose patients.

Managed cloud security services from a reputable partner are the best way to ensure that AI is being used as intended. Not all service providers offer the same care and depth of security. 

Read more: How to Choose a Cloud Managed Service Provider for Healthcare

Why Cloudticity for Healthcare Cloud Managed Services

Founded in 2011, Cloudticity was the first provider to ever deploy protected health information (PHI) workloads on the public cloud. As a HITRUST CSF Certified provider, we help organizations continuously align with regulatory requirements such as HIPAA, FISMA High, NIST 800-53, and more. Cloudticity also provides incident management and anomaly detection through our managed cloud security services, so that if an attack does occur, it will be acted upon and mitigated quickly. 

We’ve never had a breach in all our years of business, so you can feel confident that your cloud-native and AI workloads are secure in our hands.

Reach out for a free consultation to speak with a healthcare cloud expert.

