Much of the discussion around the use of artificial intelligence in cybersecurity centres on finding the balance between the benefits and the threats of disruptive technologies. On the one hand, recent advances in machine learning and data science have produced algorithms that can process information quickly and, hence, potentially spot cybersecurity threats before they cause any harm. On the other hand, any data-driven automation is (1) only as good as the training sets it builds on and (2) may suffer from a number of imperfections and biases. So how is AI actually used in cybersecurity?
The challenge in understanding current applications is that there is a lot of hype around this topic. As a result, what IS ACTUALLY there and what is merely talked about as a POTENTIAL are often blended into claims that sound factual.
In terms of existing applications of AI, the following 5 directions seem to have actual technology behind them:
1. Network Intelligence
Much of current business activity has moved into the digital domain. As a result, organisations operate a significant number of legacy, novel, internal, external, open, and closed networks. These networks require a vast infrastructure for cybersecurity intelligence and for monitoring communications, connections, and applications. Under these circumstances, AI solutions that analyse network traffic and keep track of what is going on live are becoming more widespread and gaining momentum. Some even claim predictive capability, for example MixMode, whose product descriptions incorporate forward-looking analytics. In network intelligence, AI has a lot of potential: it not only monitors network events in real time, but also mines important data, which can then be used for clustering and classification of "normal" versus "abnormal" network activity.
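To make the "normal" versus "abnormal" idea concrete, here is a minimal sketch of unsupervised anomaly detection on per-flow traffic features. It is not any vendor's product, and the three features (bytes sent, packet count, duration) are illustrative assumptions rather than a real capture format.

```python
# Sketch: unsupervised detection of "abnormal" network flows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" flows: [bytes_sent, packets, duration_seconds]
normal_flows = rng.normal(loc=[5_000, 40, 2.0], scale=[1_000, 10, 0.5], size=(500, 3))

# A couple of synthetic anomalies, e.g. an exfiltration-like bulk transfer
anomalies = np.array([[500_000, 900, 60.0], [350_000, 700, 45.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns -1 for "abnormal" and 1 for "normal"
print(model.predict(anomalies))         # expected: [-1 -1]
print(model.predict(normal_flows[:3]))  # expected mostly: [1 1 1]
```

In a live deployment the model would be retrained regularly, since "normal" traffic drifts as the business changes.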
2. Malware Spotting
AI is also incredibly powerful at detecting malware (harmful software). Anti-malware detection tools are numerous, including those produced by start-ups as well as by established security players such as Kaspersky Lab. The great opportunity the malware challenge presents for AI is that we have millions of labeled samples, which can be used to train algorithms. This makes malware spotting currently the most data-supported AI application in security. Yet the drawback of this direction is that new malware emerges on a daily basis and historical datasets are not always helpful. This is particularly true because much malware innovation lies in the area of social engineering (i.e., psychological rather than technical tricks). For this reason, while AI-based malware spotting can filter out known threats, on the predictive side I have not seen many reliable and convincing tools, although some may already be in development. The challenge these new tools have to address is making sure that forward-looking malware-spotting models are built not only on a solid mathematical, statistical, and machine learning basis, but also on a deep understanding of how psychological techniques can be used and fused by adversaries.
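The supervised set-up that makes this "most data-supported" application work can be sketched in a few lines. The two features below (file entropy and a count of suspicious API calls) are illustrative assumptions; real products use far richer static and dynamic features, and, as noted above, accuracy on known families says little about genuinely novel, socially engineered campaigns.

```python
# Sketch: a classifier trained on labeled malware/benign feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic benign samples: lower entropy, few suspicious API calls
benign = np.column_stack([rng.normal(4.5, 0.8, 1000), rng.poisson(1, 1000)])
# Synthetic malicious samples: packed/encrypted payloads, more suspicious calls
malicious = np.column_stack([rng.normal(7.2, 0.6, 1000), rng.poisson(8, 1000)])

X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)  # 0 = benign, 1 = malware

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("holdout accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```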
3. Deception Aids
Recently, much work in the cybersecurity business community has gone into developing and providing tools for deceiving cyber adversaries. Many of these tools are based on machine learning algorithms as well as a number of high-quality training sets. Some notable technology examples were cited in our earlier special article on CyberBitsEtc. The rationale behind AI deception aids is that instead of concentrating on the robustness goal (i.e., not letting any adversaries through the company's cyber defence systems), businesses should focus on the resilience goal (i.e., putting all computer and data-driven systems back on track after a cyber attack). Deception aids assume that cyber attacks are inevitable: they will happen one way or another. Therefore, the best idea for any business is not to try to keep all adversaries away (thicker cybersecurity "walls" will only attract better-trained, more experienced cybercriminals). Instead, in addition to attack-preventive tools, companies should use cyber-deceptive tools. The goal of these tools is to let adversaries in, let them believe they are compromising valuable systems, and force them to leave without getting any useful data and without touching anything that matters. This is a really cool philosophy, as it can potentially not only spot adversaries quickly, but also lead them to the wrong places in the network and help collect forensic evidence for later attack attribution.

For a number of years, businesses have used a range of deceptive mechanisms: smart "honeypots" (computers or computer systems intended to imitate likely targets of cyberattacks, used to detect attacks or redirect them away from legitimate targets); automated traps; and decoys that imitate target systems such as cash machines (ATMs), individual computers on the organisational network, medical devices, internal network switches, and routers. Many existing tools use machine learning algorithms, and this is promising, although there is an important caveat. When designing an AI-based deception system, one has to realise that AI technology is available not only to the "good guys", but also to adversaries. For example, I know of a case where a company used an AI deception aid to place "honeypots" in their computer system to lead adversaries to "wrong", "useless" trap machines on their network. However, the adversaries used a more sophisticated AI algorithm to analytically spot the honeypots and succeeded in compromising the company's systems. Having said that, this direction is very promising and, with better AI training, we will probably be able to use it effectively against cybercriminals in the future.
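For readers unfamiliar with the mechanics, here is a minimal sketch of the honeypot idea: a decoy service that accepts connections, records who connected and what they sent (the forensic evidence mentioned above), and exposes no real data. The port, fake banner, and log format are illustrative assumptions; commercial deception platforms are far more elaborate and often ML-driven.

```python
# Sketch: a decoy "SSH-like" TCP service that logs connection attempts.
import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2222  # decoy port, not a real service

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"decoy listening on {HOST}:{PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake banner to look plausible
            data = conn.recv(1024)
            # Forensic breadcrumb: timestamp, source address, first bytes sent
            with open("honeypot.log", "a") as log:
                log.write(f"{datetime.now(timezone.utc).isoformat()} {addr} {data!r}\n")
```

Any connection to such a decoy is, by definition, suspicious, which is exactly what makes honeypot telemetry useful training data for the ML layer on top.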
4. Security Decision Support Systems
Many decisions about business security are made by human analysts. And this is a GOOD thing. Why? Because over two thirds of cyber threats actually have nothing to do with technology: they exploit human psychology. However, there are a number of repetitive, automatable tasks (such as data mining, cleansing, and real-time threat detection) that human analysts can now outsource to machines. In other words, day-to-day security operations can really benefit from automation, and a lot is currently being done to make this happen (see, for example, SecBi). Yet, again, there are important things to remember when considering automated solutions in this space. The tricky part about security operations in general, and cybersecurity operations in particular, is that they require a constructive dialogue and collaboration between business C-suite decision makers (many of whom are not domain experts in cyber), developers (who provide the data-driven solutions), and cybersecurity specialists (who offer domain expertise). These three groups need not only to talk to each other, but also to work in harmony to create effective cyber-defence mechanisms. It is hard to imagine an automated system replacing human operations any time soon, but looking into promising AI-driven mechanisms to aid decision-making for all three groups is certainly a promising direction for further algorithmic development.
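As one small illustration of the repetitive work that can be handed to machines while humans stay in charge, here is a sketch of alert triage: cleaning and ranking alerts so the analyst sees the highest-risk items first. The alert fields and scoring weights are illustrative assumptions, not a real SOC schema.

```python
# Sketch: rank incoming alerts into a work queue for a human analyst.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int            # 1 (low) .. 5 (critical), as reported by the sensor
    asset_criticality: int   # 1 .. 5, how important the affected system is
    seen_before: bool        # has this exact signature already been reviewed?

def triage_score(a: Alert) -> float:
    """Combine sensor severity with asset value; discount repeats."""
    score = a.severity * a.asset_criticality
    return score * (0.3 if a.seen_before else 1.0)

alerts = [
    Alert("ids", 2, 5, False),
    Alert("endpoint", 5, 4, True),
    Alert("proxy", 4, 5, False),
]

for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{triage_score(a):5.1f}  {a}")
# The output is an ordered queue for the analyst, not an automated decision.
```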
5. Hybrid Predictive Mitigation
This direction is an area I am particularly excited about, as it utilizes behavioural data to help develop mass-customized, personalized, and future-oriented solutions based on behavioural analytics. A lot is being done in this direction by the Alan Turing Institute's behavioural data science group, who look into how machine learning algorithms could be trained on data about human behaviour to help mitigate cybersecurity risks. Much work is also devoted to the behavioural segmentation of business staff and customers around their propensity to be affected by cybersecurity threats, as well as their ability to detect cybersecurity risks, both previously observed and new (unknown) ones. In this domain, much of the work is still academic research. For example, the Cyber Domain Specific Risk Taking Scale makes it possible to behaviourally segment a population to better understand how staff and customers could be trained to spot cyber threats. There is also a large body of work on humans as cyber security sensors, which looks into better understanding the human ability to detect cyber attacks. Both directions have machine learning spin-offs aimed at automating segmentation and skill detection in the cybersecurity domain.
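A minimal sketch of the segmentation idea: cluster staff by survey-derived behavioural scores and tailor training per cluster. The two scores below are stand-ins I have invented for illustration, not the published Cyber Domain Specific Risk Taking Scale items.

```python
# Sketch: behavioural segmentation of staff by risk propensity and detection skill.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Columns: [risk_taking_score, threat_detection_score], one row per employee
cautious   = rng.normal(loc=[2.0, 4.0], scale=0.4, size=(60, 2))
risk_prone = rng.normal(loc=[4.5, 2.0], scale=0.4, size=(60, 2))
X = np.vstack([cautious, risk_prone])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for k in range(2):
    members = X[segments == k]
    print(f"segment {k}: n={len(members)}, "
          f"mean risk={members[:, 0].mean():.1f}, "
          f"mean detection={members[:, 1].mean():.1f}")
# A real deployment would map each segment to a different awareness-training plan.
```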
Takeaways
Automation in cybersecurity is a useful tool that can significantly increase the effectiveness of defence. There are many promising directions, yet most of them operate as human-machine teaming solutions: they help domain experts get better data and give other decision makers a more informed view of cyber issues via AI-driven decision support.
Disclaimer: Please note that a product or brand mention in this summary is not equivalent to product or brand endorsement. Specific names are used to showcase existing solutions and provide concrete examples.