
Trust in Data: The Rise of Adversarial Machine Learning


Increased dependence on data and machine learning, combined with a limited understanding of complex ML models, is giving rise to a new category of cyber attacks called Adversarial Machine Learning attacks.
Machine learning impacts our everyday lives – it determines what we see on eCommerce websites, social media platforms, and search engines. Since machine learning predictions are based on data, businesses must obtain quality data to succeed. Data is at the heart of every decision, but the sheer volume of data and the complexity of ML models can lead to gaps and inaccuracies that are difficult to catch. Increased reliance on data and machine learning also brings increased security risk. One class of risk of special interest is known as adversarial attacks. Adversarial attacks can be both intentional and unintentional. An intentional adversarial attack is usually carried out by malicious agents or competitors to manipulate ML predictions. Unintentional attacks, on the other hand, are usually due to human error. However, organizations can adopt feature engineering techniques to guard against data corruption and adversarial attacks.

In this blog, we will look at adversarial attacks, the impact of adulterated data on outcomes of ML models, and how feature stores can help businesses regain trust in their data.

What are adversarial attacks in machine learning models?

Adversarial attacks in machine learning attempt to disrupt ML models with deceptive input data and prediction requests. They are becoming one of the most common ways to cause a malfunction in an ML model. An adversarial attack might also present inaccurate data to an already trained model in order to deceive it.

To get an idea of an adversarial attack, consider this example from the paper Explaining and Harnessing Adversarial Examples:

An ML model trained for image classification correctly classifies an image of a panda with 57.7% confidence. In an adversarial attack, the original panda image is subtly modified using a carefully crafted noise signal such that the changes are undetectable to the human eye. However, this artificially introduced disturbance alters the ML model’s prediction: it now classifies the modified image as a ‘gibbon’ with a 99.3% confidence level. This illustrates how adversarial attacks can subtly alter the data and lead to wrong conclusions.
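In the paper, this perturbation is generated with the Fast Gradient Sign Method (FGSM): each pixel is nudged a tiny step in the direction that increases the model’s loss. Below is a minimal, illustrative FGSM sketch in PyTorch; the classifier, image tensor, and label in the usage comment are hypothetical placeholders, not the paper’s exact setup.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.007):
    """Minimal FGSM sketch: move each pixel by +/- epsilon in the
    direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss w.r.t. the true label
    loss.backward()                               # gradient of loss w.r.t. pixels
    # The change per pixel is at most epsilon, so it is imperceptible to a human.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: `classifier` is a trained image model, `panda` a normalized
# image tensor of shape (1, 3, H, W), `panda_label` a class-index tensor.
# adv = fgsm_perturb(classifier, panda, panda_label)
# print(classifier(adv).softmax(dim=-1).max())   # confident, but wrong, prediction
```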

Types of Adversarial Machine Learning attacks

There are two broad types of adversarial attacks: WhiteBox and BlackBox. A WhiteBox attack is possible when the attacker has full access to the target model, including its parameters and architecture. In contrast, in a BlackBox attack, the attacker has no access to the model’s internals and can only observe the outputs of the targeted model.

Adversarial attacks can also be classified into the following three categories:

1. Poisoning

Attacks on ML models during the training phase are often referred to as “poisoning” or “contaminating.” In poisoning, incorrectly labeled data is inserted into the classifier’s training set, causing the system to make inaccurate decisions in the future. Poisoning attacks involve an adversary with access to, and some degree of control over, the training data.
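As a toy illustration of label poisoning (not a reproduction of any specific real-world attack), the sketch below uses scikit-learn with a synthetic dataset standing in for real training data: it flips a fraction of the training labels and compares test accuracy against a model trained on clean data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poison 15% of the training labels by flipping them
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_test, y_test)
print(f"clean accuracy: {clean_acc:.3f}  poisoned accuracy: {poisoned_acc:.3f}")
```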

2. Evasion attacks

Evasion attacks happen after an ML system has already been trained. They occur at inference time, when the model computes predictions on new input data. Evasion attacks are often developed by trial and error, since attackers typically don’t know in advance which data manipulations will “break” a model.
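A black-box attacker without access to gradients can still search for a “breaking” input by trial and error. The sketch below is purely illustrative: it repeatedly adds small random perturbations to an input until the model’s predicted class changes; the `predict` callable is an assumed stand-in for whatever scoring interface the deployed model exposes.

```python
import numpy as np

def random_evasion(predict, x, max_tries=500, step=0.01, seed=0):
    """Trial-and-error evasion sketch: add small random noise to x until
    the predicted class flips, or give up after max_tries."""
    rng = np.random.default_rng(seed)
    original_class = predict(x)
    for _ in range(max_tries):
        candidate = x + step * rng.standard_normal(x.shape)
        if predict(candidate) != original_class:
            return candidate           # found an evading input
    return None                        # no evasion found within the query budget

# Hypothetical usage, where `predict` wraps a deployed model's scoring call:
# evading = random_evasion(lambda v: clf.predict(v.reshape(1, -1))[0], x_sample)
```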

3. Model extraction

Model extraction, or model stealing, involves an attacker probing a black box ML system to either recover the training data or reconstruct the model, which is especially damaging when the model is sensitive and confidential. For instance, model extraction attacks can be used to steal a proprietary stock market prediction model, which the adversary then uses for their own financial benefit.
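Conceptually, extraction works by querying the black-box model and training a local surrogate on its answers. The following sketch is a hedged illustration using scikit-learn; `victim_predict` stands in for the remote model’s prediction endpoint and is an assumption, not a real API.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_surrogate(victim_predict, n_queries=5000, n_features=20, seed=0):
    """Model-stealing sketch: sample query points, label them with the
    victim's predictions, and fit a local surrogate that mimics it."""
    rng = np.random.default_rng(seed)
    X_query = rng.standard_normal((n_queries, n_features))
    y_stolen = victim_predict(X_query)          # only the outputs are observable
    surrogate = DecisionTreeClassifier().fit(X_query, y_stolen)
    return surrogate

# Hypothetical usage: `remote_model.predict` is the only call available to the attacker.
# clone = extract_surrogate(remote_model.predict)
```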

How adversarial machine learning attacks impact decision-making

Intentional and unintentional adversarial machine learning attacks can impact data-driven decision-making by inserting corrupt or inaccurate data into the training dataset of machine learning models.

  • Lost Revenue Opportunities: One study reported that 76% of respondents believed their organization missed revenue opportunities due to inaccurate data. Here is an example of a company losing revenue because of inaccurate data: an eCommerce company ordered a large volume of inventory because of heavy traffic on its platform. However, the sales data showed that it wasn’t selling enough products even though site visits and engagement were high. The company found that its numbers were wrong because of a mistake in its incoming data: the QA team had left a crawler running on the website, which was poisoning (a type of adversarial attack) its understanding of demand. Because of this crawler, a third of its traffic was nonexistent, leading to inventory mismanagement and lost revenue. The question here is one of intent: was it a malicious attack to disrupt the company’s data, or a genuine mistake? If considered an attack, it would fall under the category of a WhiteBox adversarial attack, since the attacker (here, the QA team) had complete access to the system.

  • Flawed Decision-Making: Inaccurate data leads to flawed decisions that hinder the organization. Decision-makers and analysts may not even be aware of the flaws in their data, which makes it harder to judge the reliability of data-backed decisions. When data is contaminated, it undermines real-time decision-making, causing the organization to miss market opportunities and lose competitive advantage.

  • Delays due to poor quality: According to a recent study, data analytics projects in 84% of organizations were delayed because the data was in an incorrect format, and in 82% the data was of such poor quality that analytics projects had to be reworked. Almost all respondents (91%) believed that work was needed to improve data quality within their organization.

How can we trust Machine Learning outcomes?

Gartner, a leading industry market research firm, has suggested that “application leaders must anticipate and prepare to mitigate potential risks of data corruption, model theft, and adversarial samples.” Companies should invest in data preparation and feature stores so they can trust their data and catch anomalies before the data is forwarded for analysis.

Feature Engineering

A feature is an input to a predictive model, and feature engineering is the process of selecting, manipulating, and transforming raw data into features that can be used in both unsupervised and supervised machine learning. The goal is to design and build better features so that machine learning models perform well on new and complex tasks and are more resilient to adversarial attacks.

Feature engineering consists of various processes (a minimal pipeline sketch follows this list):

  • Exploratory Data Analysis: A process used to summarize datasets through visualizations, descriptive and inferential statistics, and more. It also includes forming new hypotheses and finding patterns in the data.

  • Feature Creation: Involves identifying variables that will be useful for the ML model in predicting outcomes.

  • Feature Selection: It’s used to determine features that should be removed because they are irrelevant or redundant and features that should be prioritized because they are most useful for the model.

  • Transformations: Transformation is carried out to standardize the data points to avoid errors and increase the model’s accuracy.

  • Feature Extraction: New variables are created by extracting information from raw data; the process compresses the data into manageable quantities without distorting the original relationships.
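To make these steps concrete, here is a minimal scikit-learn sketch that chains a transformation (standardization), feature selection, and a downstream model into one pipeline; the synthetic dataset and parameter choices are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for raw tabular data
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8, random_state=0)

pipeline = Pipeline([
    ("transform", StandardScaler()),               # standardize raw values
    ("select", SelectKBest(f_classif, k=8)),       # keep the most useful features
    ("model", LogisticRegression(max_iter=1000)),  # downstream predictive model
])
pipeline.fit(X, y)
print("selected feature indices:", pipeline.named_steps["select"].get_support(indices=True))
```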

How can feature stores reduce the impact of attacks?

  • Baseline establishment: Input data can be used to build baselines of expected normal behavior. This acts as the first line of defense against adversarial attacks by establishing a norm for the data’s quality. Any deviations from the norm can be flagged as suspicious, as shown in the sketch after this list.

  • Lineage tracking: Feature engineering involves arbitrarily complex transformations of input data. Tracking each step of the transformation process allows the construction of data lineage, which can be used to compute trust metrics for the generated features.

  • Data monitoring: Feature stores can provide the necessary tools for developers and end-users to proactively visualize data and make it easier to understand. Sometimes, subtle changes in data distribution are best detected manually.

  • Code tracking: Code commits and changes can be tracked to build causal relationships between the incoming data and feature distributions. This makes it possible to trace data corruption using side-channel information, i.e. indirect signals outside the data itself, such as which code change coincided with a shift in a feature’s distribution.
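As a rough illustration of the first and third points above (baseline establishment and data monitoring), the sketch below learns per-feature statistics from trusted historical data, flags incoming batches whose means drift far from that baseline, and runs a two-sample Kolmogorov-Smirnov test to catch subtler distribution shifts. The class name, thresholds, and synthetic data are illustrative assumptions, not part of any particular feature store’s API.

```python
import numpy as np
from scipy.stats import ks_2samp

class FeatureBaseline:
    """Sketch: learn per-feature mean/std from trusted data, then flag
    incoming batches that deviate from that baseline or show drift."""

    def __init__(self, z_threshold=3.0, alpha=0.01):
        self.z_threshold = z_threshold   # tolerated shift of the batch mean, in std-devs
        self.alpha = alpha               # significance level for the KS drift test

    def fit(self, trusted: np.ndarray):
        self.reference_ = trusted
        self.mean_ = trusted.mean(axis=0)
        self.std_ = trusted.std(axis=0) + 1e-9   # avoid division by zero
        return self

    def check(self, batch: np.ndarray) -> dict:
        # Baseline check: has the batch mean moved too far from the norm?
        z = np.abs(batch.mean(axis=0) - self.mean_) / self.std_
        # Drift check: does any feature's distribution differ significantly?
        p_values = np.array([
            ks_2samp(self.reference_[:, j], batch[:, j]).pvalue
            for j in range(batch.shape[1])
        ])
        return {
            "mean_shift_flags": z > self.z_threshold,
            "drift_flags": p_values < self.alpha,
        }

# Illustrative usage with synthetic data; the second feature is deliberately shifted.
rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(5000, 3))
suspect = np.column_stack([
    rng.normal(0, 1, 2000),
    rng.normal(0.5, 1, 2000),
    rng.normal(0, 1, 2000),
])
print(FeatureBaseline().fit(trusted).check(suspect))
```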

Conclusion

Currently, organizations don’t yet have robust tools to detect and eliminate adversarial attacks on machine learning models. However, cleaning, preparing, and securing the data is a good first step in defending against both WhiteBox and BlackBox poisoning attacks. Feature stores act as the heart of data flow in modern ML models and applications and can help companies prevent adversarial attacks.

Enrich, Scribble Data’s feature store, helps businesses train ML and DL models with reproducible, quality-checked, versioned, and searchable datasets. It allows data scientists to deploy their models with trust in the source data.

Also Read “Establishing Organizational Digital Trust: Why Your Data Products Are Only as Good as Your Data”

