AI^2: Training a big data machine to defend

AI2: Training a big data machine to defend – Veeramachaneni et al., IEEE International Conference on Big Data Security, 2016

Will machines take over? The lesson of today’s paper is that we’re better off together. Combining AI with HI (human intelligence, I felt like we deserved an acronym of our own 😉) yields much better results than a system that uses only unsupervised learning. The context is information security, scanning millions of log entries per day to detect suspicious activity and prevent attacks. Examples of attacks include account takeovers, new account fraud (opening a new account using stolen credit card information), and terms of service abuse (e.g. abusing promotional codes, or manipulating cookies for advantage).

A typical attack has a behavioral signature, which comprises the series of steps involved in committing it. The information necessary to quantify these signatures is buried deep in the raw data, and is often delivered as logs.

The usual problem with such outlier/anomaly detection systems is that they trigger lots of false positive alarms, which take substantial time and effort to investigate. After the system has ‘cried wolf’ enough times, it becomes distrusted and of limited use. AI2 combines the experience and intuition of analysts with machine learning techniques. An ensemble of unsupervised learning models generates a set of k events to be analysed per day (where the daily budget k of events that can be analysed is a configurable parameter). The human judgements on these k events are used to train a supervised model, whose results are combined with the unsupervised ensemble results to refine the k events presented to the analyst on the next day. And so it goes on.

The end result looks a bit like this:

With a daily investigation budget (k) of 200 events, AI2 detects 86.8% of attacks with a false positive rate of 4.4%. Using only unsupervised learning, only 7.9% of attacks are detected. If the investigation budget is upped to 1000 events/day, unsupervised learning can detect 73.7% of attacks, but with a false positive rate of 22%. At this level, the unsupervised system is generating 5x the false positives of AI2, and still not detecting as many attacks.

Detecting attacks is a true ‘needle-in-a-haystack’ problem, as the table of daily data volumes in the paper shows:

Entities in that table refers to the number of unique IP addresses, users, sessions, etc. analysed on a daily basis. The very small relative number of true attacks results in extreme class imbalance when trying to learn a supervised model.
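As an aside, here is a hedged sketch of one common way to cope with that imbalance when a supervised model is eventually trained. The paper does not specify this exact technique; class weighting on synthetic data is shown purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(50_000, 10))               # behavior features per entity (synthetic)
y = (rng.random(50_000) < 0.0005).astype(int)   # ~0.05% positives: the "needles"

# class_weight="balanced" re-weights examples inversely to class frequency,
# so the handful of attack labels is not drowned out by benign traffic.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
clf.fit(X, y)
```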

AI2 tracks activity based on ingested log records, and aggregates activities over intervals of time to build behavior features (for example: counters; indicators – did this happen in the window at all?; elapsed time between events; number of unique values; and so on).
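As a concrete illustration, here is a minimal per-entity aggregation sketch, assuming the logs have already been parsed into a pandas DataFrame; the file name and columns (user, timestamp, url, status) are hypothetical.

```python
import pandas as pd

logs = pd.read_csv("web_logs.csv", parse_dates=["timestamp"])  # hypothetical input file
logs["ts"] = logs["timestamp"]  # keep a copy: the original column becomes the group key

daily = logs.groupby(["user", pd.Grouper(key="timestamp", freq="D")]).agg(
    n_requests=("url", "count"),                                       # counter
    n_unique_urls=("url", "nunique"),                                  # number of unique values
    any_failed_login=("status", lambda s: int((s == 401).any())),      # indicator
    active_seconds=("ts", lambda t: (t.max() - t.min()).total_seconds()),  # elapsed time
)
```

These features are passed into an ensemble of three unsupervised outlier detection models: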

  • A Principal Component Analysis (PCA) based model. The basic idea is to use PCA to determine the most significant features (those that explain most of the variance in the data). Given an input, take its PCA projection, and then from the projection reconstruct the original variables. The reconstruction error will be small for the majority of examples, but will remain high for outliers (see the sketch following this list).
  • A Replicator Neural Network (not to be confused with a Recurrent Neural Network – both get abbreviated to RNN). This works on a very similar principle. The input and output layers have the same number of nodes, and intermediate layers have fewer nodes. The goal is to train the network to recreate the input at the output layer – which means it must learn an efficient compressed representation in the lower-dimensional hidden layers. Once the RNN has been trained, the reconstruction error can be used as the outlier score.
  • The third unsupervised model uses copula functions to build a joint probability function that can be used to detect rare events.
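Here is a minimal sketch of the reconstruction-error scoring behind the first two models. Everything in it is illustrative: the synthetic matrix stands in for a day’s behavior vectors, and sklearn’s MLPRegressor trained to reproduce its own input stands in for the paper’s replicator neural network.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 16))        # stand-in for a day's behavior vectors
X = StandardScaler().fit_transform(features)

# 1. PCA: project onto the leading components, reconstruct, and score by error.
pca = PCA(n_components=0.95)                  # keep components explaining 95% of variance
X_hat = pca.inverse_transform(pca.fit_transform(X))
pca_scores = np.square(X - X_hat).sum(axis=1)

# 2. Replicator NN: a narrow hidden layer forces a compressed representation.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(X, X)                                  # train the network to recreate its input
rnn_scores = np.square(X - ae.predict(X)).sum(axis=1)
```

In both cases the intuition is the same: a compressed representation can only reconstruct typical inputs well, so a large reconstruction error flags an outlier.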

A copula framework provides a means of inference after modeling a multivariate joint probability distribution from training data. Because copula frameworks are less well known than other forms of estimation, we will now briefly review copula theory…

(If you’re interested in that review, and how copula functions are used to form a multivariate density function, see section 6.3 in the paper).
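For a flavour of the idea, here is a minimal Gaussian-copula sketch; the paper’s exact copula family and estimation procedure live in section 6.3, so treat the choices below as assumptions made for illustration.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """Estimate the copula correlation matrix from a matrix of feature vectors."""
    n, _ = X.shape
    # Rank-transform each marginal into (0, 1), then map to standard normal
    # scores; the dependence structure that remains is the copula.
    U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (n + 1)
    Z = stats.norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)

def copula_log_density(corr, u):
    """Log-density of the Gaussian copula at a vector u of marginal CDF values."""
    z = stats.norm.ppf(u)
    mvn = stats.multivariate_normal(mean=np.zeros(len(z)), cov=corr)
    return mvn.logpdf(z) - stats.norm.logpdf(z).sum()
```

An event’s full joint log-density is this copula term plus the sum of its marginal log-densities; unusually low values flag rare events.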

The scores from each of the models are translated into probabilities using a Weibull distribution, “which is flexible and can model a wide variety of shapes.” This translation means that we can compare like-with-like when combining the results from the three models.
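A hedged sketch of that translation, assuming the raw scores are strictly positive and fixing the Weibull location parameter at zero:

```python
import numpy as np
from scipy import stats

def scores_to_probabilities(scores):
    """Map raw outlier scores onto [0, 1] via a fitted Weibull CDF."""
    scores = np.clip(scores, 1e-12, None)   # Weibull support is strictly positive
    shape, loc, scale = stats.weibull_min.fit(scores, floc=0)
    return stats.weibull_min.cdf(scores, shape, loc=loc, scale=scale)

# One probability vector per model; a simple average (the paper's exact
# combination rule may differ) then yields a single comparable ranking:
# combined = (p_pca + p_rnn + p_copula) / 3
```

The paper illustrates the combination process using one day’s worth of data.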

The whole AI2 system cycles through training, deployment, and feedback collection/model updating phases on a daily basis. The system trains unsupervised and supervised models based on all the available data, applies those models to the incoming data, identifies k entities as extreme events or attacks, and brings these to the analyst’s attention. The analysts’ deductions are used to build a new predictive model for the next day.
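Schematically, one day of the cycle might look like the sketch below. This is an illustration, not the paper’s implementation: IsolationForest stands in for the PCA/RNN/copula ensemble, the simple averaging rule is an assumption, and labels encode the analyst’s verdicts (1 = attack, 0 = benign) on previously surfaced entities.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def daily_cycle(X, labeled_idx, labels, k=200):
    """One AI2 day: score all entities, surface the top k for the analyst."""
    # Unsupervised stage (IsolationForest stands in for the paper's ensemble).
    unsup = -IsolationForest(random_state=0).fit(X).score_samples(X)
    combined = unsup / unsup.max()
    if len(labeled_idx) > 0 and len(set(labels)) > 1:
        # Supervised stage, trained on every analyst verdict collected so far.
        clf = RandomForestClassifier(class_weight="balanced", random_state=0)
        clf.fit(X[labeled_idx], labels)
        combined = (combined + clf.predict_proba(X)[:, 1]) / 2
    top_k = np.argsort(combined)[::-1][:k]   # k = the daily analyst budget
    return top_k   # analyst verdicts on these feed tomorrow's supervised model
```

Each day’s top-k verdicts grow the labeled set, so the supervised model gets a little stronger on every iteration.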

This combined approach makes effective use of the limited available analyst bandwidth, can overcome some of the weaknesses of pure unsupervised learning, and actively adapts and synthesizes new models.

This setup captures the cascading effect of the human-machine interaction: the more attacks the predictive system detects, the more feedback it will receive from the analysts; this feedback, in turn, will improve the accuracy of future predictions.

Glossary of AI Terms for Cyber Security


We often encounter confusion and hype surrounding the terminology of Artificial Intelligence. This post aims to give the security practitioner a quick reference guide for some of the more important and common terms.

Note that this is a limited set. We discovered that, once you start defining these terms, the terms themselves introduce new terms that require definition. We had to draw the line somewhere…

  • Artificial Intelligence – “The goal of work in Artificial Intelligence is to build machines that perform tasks that normally require human intelligence.” This quote from Nils Nilsson is an excellent definition, but it is not the only one; many other definitions of Artificial Intelligence are in circulation.
  • Algorithms – self-contained, step-by-step sets of operations to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks. Among other things, algorithms can be used to train Machine Learning models.
  • Machine Learning – A discipline or subfield of Artificial Intelligence. Paraphrasing the definition by Tom Mitchell, ML is the study of computer algorithms that learn from experience to perform a set of predefined tasks.
  • Machine Learning Models – the output of a machine learning algorithm. There are two types of machine learning models: those generated by Supervised algorithms and those generated by Unsupervised algorithms. See below.
  • The difference between “algorithms” and “models”: this is a common question and still quite difficult to answer. In the context of Artificial Intelligence, we can say that learning algorithms generate models, and the learning algorithms are either Supervised or Unsupervised. N.B. people often use “models” and “algorithms” interchangeably, which is a common source of confusion. For the layman: algorithms are like programs, and models are the output of those programs.
  • Unsupervised Learning (algorithm) – a family of machine learning algorithms that learn without labels (labels are defined below). The outputs of Unsupervised Learning algorithms are models that capture the structure of the data, can identify groups, or find statistical outliers. For example, Unsupervised Learning models can show you behaviors that are unlike other behaviors in a corpus of data.
  • Supervised Learning (algorithm) – a family of machine learning algorithms that learn from labeled data. The outputs of Supervised Learning algorithms are predictive models that can classify or assign a score to a data pattern. For example, trained Supervised Learning models can classify behavior patterns into different attack tactics, or can assign a risk score to a behavior. In cyber-security, Supervised Learning models predict what a human would label a given behavior pattern.
  • Labeling – the act of classifying or describing something. For PatternEx, labeling is something that a human analyst does every day: he or she marks a behavior as malicious or benign. The more labels are provided, the more accurate the system becomes.
  • Active Learning – a machine learning process in which a learning algorithm interactively requests inputs from an external source to improve a model. It is most commonly applied when only unlabeled data is available, the goal is to train Supervised Learning models, and the external source is a human expert whose labels are expensive and/or slow to obtain. Active learning strategies are also useful when, as in the case of InfoSec, the data changes fast.
  • Behavior Vectors – a quantified description of the activity of the modeled entities.
  • Entities – things with a distinct, independent existence to which the behaviors relate. In cyber-security, examples would be users, IPs, domains, and so on.
  • Human-Assisted Artificial Intelligence – the synergy between human intuition and artificial intelligence. Note that the humans assist the AI by providing feedback (e.g. labels) and the trained AI assists the humans by automating and scaling the tasks requiring human intelligence.
  • Predictions – the system’s anticipation of how an event would be classified by the security analyst.
  • Rare Events – very similar to “anomalies” and “outliers,” Rare Events are activities seen in log data that are unusual or out of the ordinary, but not yet determined to be either malicious or benign.
  • Transfer Learning – porting knowledge acquired in one environment to another to improve model accuracy. For example, a model trained at company X can be transferred to companies A, B, and C, increasing the detection capabilities of the entire group.
  • Virtual Analyst – the term that describes the effect of a fully trained AI system. Because a trained AI system greatly scales the analytic capability of the human analyst team, we say it is like expanding your team with “virtual analysts.”

PatternEx Unique Approach

PatternEx comes with many algorithms out of the box that allow it to create predictive models that select what the analyst should review. Humans will always be needed to identify, in context, what is malicious in a constantly changing sea of events. In this way, Human-Assisted Artificial Intelligence systems learn from analysts to identify which events are malicious. The result is greater detection accuracy at scale and a reduced mean time to identify an attack.
