A human adversary stands in your way and stops at nothing to make your life more complicated, sometimes with dire consequences when they succeed. Adversarial attacks work the same way against machine learning, disrupting models and the processes built around them with consequences ranging from stalled business operations to serious human injury.
Adversarial machine learning is a fairly new but burgeoning problem for AI innovation. A report from Gartner predicts that, through 2022, 30% of cyberattacks against AI systems will leverage training-data poisoning, model theft, or adversarial examples. With machine learning growing in popularity, it makes sense that more attacks will be leveraged to disrupt ML systems and the innovations they make possible.
Let’s take a look at the current landscape of adversarial machine learning, what experts believe could be possible for attacks in the future, and how you can defend against and mitigate the risk of these adversarial attacks.
A Closer Look at Adversarial Machine Learning
- How Do Adversarial Attacks Work?
- Examples of Adversarial Attacks in Machine Learning
- Risks of Adversarial Machine Learning
- How to Defend Against an Adversarial Attack in Machine Learning
How Do Adversarial Attacks Work?
Adversarial machine learning (ML) attacks all focus on making small, malicious changes to a model’s data, either to corrupt initial training for ML and deep learning or to interfere with a model that has already been trained. The goal behind adversarial attacks is to slip past existing parameters and data rules so that the model misreads its inputs and makes a mistake.
Attackers compromise and disrupt ML systems through a mixture of poisoning/contaminating and evasion attacks:
Poisoning/Contaminating Attacks
Poisoning and contaminating attacks make small changes to training data, often in subtle ways over a long period of time, to gradually train ML systems to make bad decisions in the future. Adversaries who use poisoning attacks usually look for back doors into the system’s training data and disguise malicious data by mislabeling it to look like legitimate training data, thus enabling it to pass through the classifier. These disguised samples are difficult to detect, and the faulty inputs and behaviors they cause are rarely caught until long after the ML training phase.
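To make this concrete, below is a minimal sketch of one simple poisoning technique, label flipping, in which an attacker mislabels a fraction of the training data. The dataset, model, and flip rate are illustrative assumptions chosen for demonstration; real poisoning campaigns are usually far stealthier:

```python
# Minimal label-flipping poisoning sketch (illustrative only).
# The dataset, model, and 15% flip rate are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of a small fraction of training samples,
# mimicking mislabeled data slipped past the system's data checks.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.15 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```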
Evasion Attacks
Evasion attacks typically happen after an ML system has been trained. Adversaries who attempt evasion attacks are looking to poke holes in a system’s existing training parameters. If they find a hole or vulnerability, they will use that discovery to “evade” security safeguards and gain access to the algorithms and code that guide the ML system’s actions. These types of attacks can damage everything from intended outputs to data quality to system confidentiality.
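As a rough illustration of an evasion attack on an already-trained model, the sketch below applies a fast gradient sign method (FGSM)-style perturbation to the inputs of a logistic regression classifier, for which the input gradient can be written in closed form. The dataset, model, and perturbation size are assumptions chosen purely for demonstration:

```python
# Minimal FGSM-style evasion sketch against a trained linear model (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=5000).fit(X, y)

# For logistic regression, the gradient of the log-loss with respect to
# the input is (p - y) * w, where p is the predicted probability of class 1.
w = model.coef_[0]
p = model.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * w[None, :]

# Nudge every sample a small step in the direction that increases the loss.
epsilon = 0.5
X_adv = X + epsilon * np.sign(grad)

print("accuracy on clean inputs:    ", model.score(X, y))
print("accuracy on perturbed inputs:", model.score(X_adv, y))
```

The same idea scales up to deep networks, where frameworks compute the input gradient automatically; the attacker only needs gradient or query access to craft inputs the model will misclassify.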
Examples of Adversarial Attacks in Machine Learning
Only a small handful of adversarial machine learning attacks have been successfully launched in the real world, but considering that Amazon, Google, Tesla, and Microsoft are among the known victims, companies of any size and sophistication could suffer adversarial consequences in the future.
Data and IT professionals are currently practicing adversarial attacks in the lab, experimenting with potential attacks to see how different ML scripts and ML-enabled technologies respond to those attacks. These are some of the theoretical attacks that they’ve attempted and believe could be launched successfully in the near future:
- 3D printing human facial features to fool facial recognition technology
- Adding new markers to roads or road signs to misdirect self-driving cars
- Inserting additional text in command scripts for military drones, changing their travel or attack vectors
- Changing command recognition for home assistant IoT technology so that it performs the same action (or no action) for very different command sets
A Real-Life Example of Adversarial Machine Learning
One of the most famous examples of a real-life adversarial machine learning attack happened with Microsoft’s Tay Twitter bot in 2016. Microsoft released Tay as a Twitter bot for conversational understanding: an AI meant to improve its conversational skills the more Twitter users engaged with it.
Several Twitter users decided to flood Tay with offensive remarks, which, over the course of fewer than 24 hours, completely changed the bot’s tone and made it misogynistic, racist, and utterly hateful.
Because of this unsophisticated, but nonetheless adversarial, attack against the tool, Microsoft shut down the bot to prevent it from making further offensive statements. The Twitterverse took control of a machine learning innovation with little to no effort, which is why so many tech experts fear the potential of coordinated adversarial attacks in the future.
Risks of Adversarial Machine Learning
Although some adversarial attacks result in alarming but ultimately negligible consequences, as in the case of the Tay Twitter bot, adversarial machine learning has the capacity to cause considerable damage to human life and business processes in the future. Some possible repercussions of adversarial machine learning attacks include:
- Physical danger and death, particularly if self-driving cars miss streetside indicators or if military drones are fed incorrect attack information.
- Private training data getting stolen by competitors and used for their own competing innovations.
- Training algorithms being altered beyond your team’s recognition or ability to fix them, leaving machines virtually unusable.
- Supply chain and/or other business processes being disrupted, leading to delayed order deliveries and frustrated customers.
- Violation of personal data privacy, especially through membership inference attacks that reveal whether someone’s record was used to train a model, leading to identity theft for customers (see the sketch after this list).
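To illustrate that last risk, below is a minimal confidence-threshold sketch of membership inference: the attacker guesses that a record was part of the training set whenever the model is unusually confident about it, because overfit models tend to be more confident on data they were trained on. The dataset, model, and threshold are assumptions for demonstration; real attacks are considerably more sophisticated:

```python
# Minimal confidence-threshold membership-inference sketch (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, n_informative=5,
                           flip_y=0.2, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# A deliberately overfit model (unbounded tree depth) leaks more about its training data.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

# The attacker flags a record as a "member" whenever the model's top-class
# confidence exceeds a threshold chosen from auxiliary knowledge.
threshold = 0.9
member_conf = model.predict_proba(X_member).max(axis=1)
nonmember_conf = model.predict_proba(X_nonmember).max(axis=1)

print("training records flagged as members:", (member_conf > threshold).mean())
print("unseen records flagged as members:  ", (nonmember_conf > threshold).mean())
```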
How to Defend Against an Adversarial Attack in Machine Learning
Adversarial attacks seem like an unavoidable, looming problem, but many organizations are already discovering ways to combat them. Enterprises should take these proactive steps to protect their machine learning tools and algorithms:
- Strengthen your endpoint security and audit existing security measures regularly.
- Take your ML systems through adversarial training and attack simulations. It’s a good idea to run practice trojan attacks on both systems still in training and seasoned, deployed systems (see the adversarial training sketch after this list).
- Change up your classification model algorithms so that malicious actors can’t easily predict or learn your training models.
- Sharpen your knowledge of attacks and defense methods with an adversarial example library.
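As a rough sketch of the adversarial training step mentioned in the list above, the example below augments the training set with FGSM-style perturbed copies of its own samples and retrains. The model, dataset, and perturbation size are illustrative assumptions rather than a recommended configuration:

```python
# Minimal adversarial-training sketch (illustrative only): retrain on
# clean samples plus perturbed copies of those samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

def fgsm(model, X, y, epsilon):
    """Closed-form FGSM-style perturbation for binary logistic regression."""
    w = model.coef_[0]
    p = model.predict_proba(X)[:, 1]
    return X + epsilon * np.sign((p - y)[:, None] * w[None, :])

epsilon = 0.5
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Adversarial training: fit a second model on clean samples plus their perturbed copies.
X_aug = np.vstack([X_train, fgsm(baseline, X_train, y_train, epsilon)])
y_aug = np.concatenate([y_train, y_train])
robust = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

# Evaluate each model against perturbations crafted for that model.
print("baseline accuracy under attack: ", baseline.score(fgsm(baseline, X_test, y_test, epsilon), y_test))
print("retrained accuracy under attack:", robust.score(fgsm(robust, X_test, y_test, epsilon), y_test))
```

In practice, adversarial training is usually built into the training loop of a deep learning framework rather than done as a one-off retraining step, and it should be paired with the attack simulations mentioned above.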
With the right research, training, and preparation in place, your team can predict and counteract many of the most likely adversarial attacks on your machine learning systems.