9 min read
Nov 9, 2020
Security

3 AI Cybersecurity Threats That Can Bust Your Business

Thinking about leveraging artificial intelligence technologies to gain an edge over your competition? Read on to understand the darker side of AI.

Pritam Tamang, Content Writer

With cars beginning to drive themselves, eCommerce apps recommending you shirts in your favorite color, and virtual assistants letting you order food without lifting a finger, the wonders of artificial intelligence (AI) and related technologies are seemingly boundless. 

If you’re feeling buoyant and have already started thinking about how you can use AI to innovate your business, that’s alright. It’s natural, given all the feel-good hype surrounding AI technologies.

But the fact is that while AI can empower your business, it can simultaneously enable cyber criminals to devise and launch advanced cyber attacks that can ruin your business.

Unless you’ve set up effective cybersecurity measures, your business is likely to be swept away by the tsunami of AI cyberattacks that closely follows the exciting high wave of AI tech.

But don’t worry—we’ve written this article to help you get up to speed on how to counter an AI cyber threat. We’ll first explore the ingenious AI techniques cyber attackers are using and then offer tips on how you can safeguard your business against these novel cyber threats. 


3 kinds of current AI cybersecurity threats and their solutions

The threat of cyber crime is not new. There are many well-known threats out there, such as data breaches, ransomware, identity theft, DDoS attacks, and phishing.

However, cyber criminals using AI is a relatively newer story, and the current AI cybersecurity threat landscape revolves mostly around attacks on machine learning (ML) models, a subset of AI. These threats extend to associated ML technologies such as deep learning, neural networks, and natural language processing models.

In this blog, we’ll consider the top three AI cybersecurity threat trends: training data poisoning, model theft, and adversarial samples. Gartner predicts these will account for 30% of all cyberattacks on AI-powered systems through 2022 (report available to Gartner clients only).


Threat #1. Training data poisoning

Training data poisoning is the corruption of the data set used by an ML model to learn and evolve. The attackers gain access to the model’s training data set and then feed incorrect data, which skews the decision-making capabilities of the model, rendering it useless.

An example of data poisoning is the infamous case of the chatbot Tay. Developers at Microsoft trained the AI-powered chatbot on an anonymized public data set and released it on Twitter, hoping that interactions with Twitter users would speed up Tay’s training on natural language conversation.

But internet trolls orchestrated a coordinated attack to flood the chatbot with racist, misogynistic, and antisemitic language, which led the bot to tweet hateful comments within just 16 hours of its release.

How to prevent training data poisoning

Preventing malicious access to your training data set is key to avoiding data poisoning. One way to detect it is to check for outliers, that is, changes in how the model classifies data after every training session. The greater the change, the more likely it is that the data has been poisoned.
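
To make the idea concrete, here’s a minimal sketch of that outlier check: it compares the model’s predictions on a fixed, trusted validation set across two consecutive training sessions and flags the session if too many classifications changed. The function names and the 10% threshold are illustrative assumptions, not part of any specific product.

```python
# A minimal sketch of the outlier check described above: after each training
# session, compare the model's predictions on a fixed, trusted validation set
# with the predictions from the previous session. The 10% threshold is an
# illustrative assumption you would tune to your own model.

def classification_shift(previous_preds, current_preds):
    """Return the fraction of validation samples whose predicted label changed."""
    assert len(previous_preds) == len(current_preds)
    changed = sum(1 for prev, curr in zip(previous_preds, current_preds) if prev != curr)
    return changed / len(previous_preds)

def check_for_poisoning(previous_preds, current_preds, threshold=0.10):
    """Flag a training session if classifications shifted more than expected."""
    shift = classification_shift(previous_preds, current_preds)
    if shift > threshold:
        print(f"WARNING: {shift:.0%} of validation predictions changed; "
              "inspect the latest training batch for poisoned records.")
    else:
        print(f"OK: {shift:.0%} of validation predictions changed.")
    return shift

# Example: predictions from two consecutive training sessions
check_for_poisoning(
    previous_preds=["cat", "dog", "dog", "cat", "dog"],
    current_preds=["cat", "dog", "cat", "dog", "dog"],
)
```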

Additionally, monitoring and controlling which users can input training data into the model is effective for identifying data poisoning attempts. If you see a few users ingesting large quantities of training data into the model, set up parameters to limit data inputs from those users.
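
Below is a hedged sketch of what such input controls might look like: a per-user cap on how many training records an account can contribute before further submissions are held for review. The cap and the in-memory counters are illustrative choices, not a prescribed design.

```python
# A sketch of per-user input controls: cap how many training records any
# single account can contribute before its submissions are held for review.
# The cap of 1,000 records and the in-memory counter are illustrative choices.

from collections import defaultdict

MAX_RECORDS_PER_USER = 1_000
records_submitted = defaultdict(int)
quarantine = []          # submissions held back for manual review
training_queue = []      # submissions accepted into the training set

def submit_training_records(user_id, records):
    """Accept records up to the per-user cap; quarantine anything beyond it."""
    if records_submitted[user_id] + len(records) > MAX_RECORDS_PER_USER:
        quarantine.append((user_id, records))
        print(f"User {user_id} exceeded the cap; {len(records)} records quarantined.")
        return False
    records_submitted[user_id] += len(records)
    training_queue.extend(records)
    return True

submit_training_records("analyst_42", [{"text": "example", "label": "benign"}] * 900)
submit_training_records("analyst_42", [{"text": "example", "label": "benign"}] * 200)
```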


Threat #2. Model theft

Model theft is a form of intellectual property crime wherein someone steals your machine learning model’s knowledge. This is done by sending queries (inputs) to your model and then studying its output, followed by reverse-engineering the algorithms to create a clone of your model.

Model theft is possible only when cyber criminals can study a model. An easy way for them to do this is by accessing machine learning-as-a-service (MLaaS) platforms, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, where many companies store data and develop their ML models.

A malicious actor can access these MLaaS platforms via the public APIs made available by the vendors and carry out an attack such as model extraction using prediction queries.
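
To make the mechanics concrete, here’s a toy sketch of that extraction loop: an attacker probes a victim model’s prediction API, records the outputs, and trains a clone on the stolen input/output pairs. The victim function, the probe data, and the use of scikit-learn are all illustrative assumptions, not a description of any particular platform.

```python
# A toy illustration of model extraction: query a "victim" model's prediction
# API, record the outputs, and train a clone on those input/output pairs.
# The victim here is a stand-in function; in reality it would be a remote
# MLaaS endpoint. Requires scikit-learn and NumPy.

import numpy as np
from sklearn.linear_model import LogisticRegression

def victim_predict(x):
    """Stand-in for the target model's public prediction API."""
    return (x[:, 0] + x[:, 1] > 1.0).astype(int)   # decision rule unknown to the attacker

# 1. The attacker generates probe inputs and queries the victim.
probes = np.random.rand(5_000, 2)
stolen_labels = victim_predict(probes)

# 2. The attacker trains a clone on the stolen input/output pairs.
clone = LogisticRegression().fit(probes, stolen_labels)

# 3. The clone now mimics the victim without access to its training data.
test = np.random.rand(1_000, 2)
agreement = (clone.predict(test) == victim_predict(test)).mean()
print(f"clone agrees with victim on {agreement:.0%} of new inputs")
```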

How to prevent model theft

Model theft is possible when cyber thieves can send queries to your model and study the outputs. If you observe a high number of queries or great variance in the queries received by your model, this should raise an alarm.
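
As an illustration, here’s a minimal sketch of that kind of query monitoring: it counts prediction queries per API key over a rolling window and raises an alert when a key’s volume looks like a possible extraction attempt. The window length and threshold are assumptions you would tune to your model’s legitimate traffic.

```python
# A minimal sketch of query monitoring: count prediction queries per API key
# over a rolling time window and flag keys that exceed a rate you consider
# normal for legitimate use. The window and threshold values are assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600         # look at the last hour of traffic
MAX_QUERIES_PER_WINDOW = 500  # tune to your model's legitimate usage

query_log = defaultdict(deque)  # api_key -> timestamps of recent queries

def record_query(api_key, now=None):
    """Log a prediction query and return True if the key looks suspicious."""
    now = now or time.time()
    log = query_log[api_key]
    log.append(now)
    # Drop timestamps that fall outside the monitoring window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) > MAX_QUERIES_PER_WINDOW:
        print(f"ALERT: key {api_key} sent {len(log)} queries in the last hour; "
              "possible model-extraction attempt.")
        return True
    return False
```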

If your model analyzes sensitive data, such as private medical records, then explore machine learning frameworks, such as Private Aggregation of Teacher Ensembles (PATE), that can help in developing ML models with built-in privacy mechanisms. 


Threat #3. Adversarial samples

Adversarial samples are manipulated inputs that attackers feed into your machine learning model to fool it. These inputs result in the model misclassifying the data. 

Adversarial samples are common in ML models that are designed for image classification. Attackers manipulate the pixels of an image with slight perturbations, imperceptible to the human eye, that lead the model completely astray.
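
For a concrete picture of how these perturbations are crafted, here’s a sketch of the fast gradient sign method (FGSM), one widely known technique for generating adversarial samples. The toy model and random "image" below are stand-ins; a real attack would target a trained image classifier. The example assumes PyTorch is available.

```python
# A sketch of the fast gradient sign method (FGSM), one common way attackers
# craft the imperceptible pixel perturbations described above. The tiny model
# and random "image" are stand-ins for a real trained classifier and photo.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
true_label = torch.tensor([7])

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny step (epsilon) in the direction that increases the
# loss: small enough to be invisible, often enough to flip the prediction.
epsilon = 0.05
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial_image).argmax(dim=1).item())
```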

This threat can have dangerous consequences—imagine fooling the image classification model of an autonomous vehicle to interpret “STOP” as a “YIELD” sign.

How to prevent adversarial sampling

Defending your ML model against adversarial samples requires proactive action. Engage a cybersecurity professional specializing in AI security to conduct an adversarial risk assessment before you even begin ML model development. This will help you identify vulnerabilities in your model that attackers could exploit.

Further, conduct simulated attacks during the development phase to train your model and strengthen its defenses against adversarial attacks in the production environment.
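
One common way to run such simulated attacks is adversarial training: generate perturbed versions of each training batch and teach the model to classify both the clean and the perturbed inputs correctly. The sketch below assumes PyTorch and reuses FGSM with illustrative settings; it is a sketch of the idea, not a turnkey defense.

```python
# A sketch of adversarial training: during each training step, generate FGSM
# adversarial versions of the batch and train on both clean and perturbed
# inputs. The toy model, random data, and epsilon are illustrative choices.

import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, images, labels, epsilon=0.05):
    """Return adversarially perturbed copies of a batch of images."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, loss_fn, optimizer, images, labels):
    """Train on the clean batch plus its adversarial counterpart."""
    adversarial = fgsm_perturb(model, loss_fn, images, labels)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(adversarial), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data standing in for a real training set.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
step_loss = adversarial_training_step(
    model, loss_fn, optimizer,
    images=torch.rand(8, 1, 28, 28),
    labels=torch.randint(0, 10, (8,)),
)
print(f"training step loss: {step_loss:.3f}")
```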


Is your business ready to tackle AI cybersecurity?

AI cyber threats are evolving as we speak and bad actors will continue to find more innovative ways to carry out cyber crimes. To survive this constantly changing cybersecurity threat landscape, you need to take careful stock of your AI cybersecurity preparedness.

Below are some questions that can help you gauge your organization’s readiness for AI cybersecurity:

Do you have seasoned security professionals? The answer will help you decide whether your in-house security team should manage AI cybersecurity or whether third-party security vendors’ solutions and services are the better choice.

What is the greatest AI security threat for your business? To answer this question, you’ll need to analyze past security incidents and check whether AI played a role in any of them. This will help you gauge your current and future vulnerability to AI attacks. Further, you’ll need to take an inventory of all the AI-powered smart devices, including IoT devices, that your business owns. Doing so will give you an understanding of the overall cyber risk these endpoints present to your business.

What business outcomes do you hope to achieve by implementing AI cybersecurity tools and strategies? Answering this will help you in setting realistic expectations from investments in AI security and accordingly deciding on the budget for it. 

Does your business need AI cybersecurity? While AI security is a new and exciting field, there are still many immature products on the market. Carefully assess their actual business value and account for the risks of adopting newfangled AI tech. Also check whether traditional cybersecurity solutions would better serve your organization.
