Can Artificial Intelligence & Robots fight the Cybercrime Epidemic?

The potential of AI and machine learning

According to a survey of 700 security professionals by IBM, the top benefits of using cognitive security solutions were improved intelligence (40%), speed (37%) and accuracy (36%).

IBM says Watson performs 60 times faster than a human investigator and can reduce the time spent on complex analysis of an incident from an hour to less than a minute.
The development of quantum computing, which is expected to become more widely available in the next three to five years, could make even Watson look as slow as a human.

Machine learning and AI speed up the lengthy process of sorting through data, while quantum computing promises to examine every data permutation simultaneously.
Canada-based company D-Wave recently sold its newest, most powerful machine to cyber security firm Temporal Defense Systems to work on complex security problems.

The rules-based systems of yesterday are no longer effective against today’s sophisticated attacks. Any system that can improve accurate detection and boost incident response time is going to be in demand.

We have clearly reached a point where the sheer volume of security data can no longer be processed by humans. The successful answer to beating the cat-and-mouse game of cybercrime lies in so-called human-interactive machine learning.
Human-interactive machine learning systems analyse internal security intelligence, and marry it with external threat data to direct human analysts to the needles in the haystack. Humans then provide feedback to the system by tagging the most relevant threats. The system adapts its monitoring and analysis based on human inputs, enhancing the chances of finding real cyber threats and minimising false positives.
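
To make this concrete, here is a minimal Python sketch of such a feedback loop, assuming each alert arrives as a dictionary of features together with a score from an external threat feed. The analyst_review() callback and the field names are illustrative assumptions for the sketch, not any particular vendor's API.

```python
# A minimal sketch of human-interactive machine learning for alert triage.
# Assumptions: each alert is {"features": {...}, "threat_feed_score": float},
# and analyst_review(alert) returns 1 for a real threat, 0 for a false positive.
from scipy.sparse import vstack
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

hasher = FeatureHasher(n_features=2**16)  # stateless, so the feature space stays fixed between rounds
model = SGDClassifier()                   # incremental linear classifier, updated after each feedback round

def triage_round(alerts, analyst_review, top_n=10):
    """Score alerts, show the riskiest to a human analyst, and learn from their tags."""
    X = hasher.transform(alert["features"] for alert in alerts)

    if hasattr(model, "coef_"):           # the model has already seen at least one round of feedback
        scores = model.decision_function(X)
    else:                                 # cold start: rank on the external threat-intelligence score alone
        scores = [alert["threat_feed_score"] for alert in alerts]

    # Direct the analyst to the "needles in the haystack": highest-scoring alerts first.
    ranked = sorted(range(len(alerts)), key=lambda i: scores[i], reverse=True)[:top_n]

    # The human tags only the alerts they actually reviewed; that feedback refines the next round's ranking.
    labels = [analyst_review(alerts[i]) for i in ranked]
    model.partial_fit(vstack([X[i] for i in ranked]), labels, classes=[0, 1])
```

Because the model only ever learns from alerts a human has tagged, each feedback round sharpens the ranking that decides what the analyst sees next, which is how the approach cuts false positives over time.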

Deploying machine learning for the laborious first line of security data assessment frees human analysts to focus on advanced threat investigations. Combining AI with this human-interactive approach offers the optimum solution for keeping ahead in the cybercrime war.

It’s important to recognise that while machine learning may be both fast and cheap, it is not perfect.

Algorithms can be manipulated by hackers. Donal Byrne, CEO of Corvil, says:

“Those software applications interact with each other in very complicated ways. If someone understands how the algorithm works, it can be manipulated in predictable ways. This means that even without changing the software itself, introducing specific input data can allow one to manipulate an algorithm towards a different outcome than expected.”

To combat this manipulation, “circuit breakers” can be used to monitor the algorithms’ output. A circuit breaker is an ‘overseer’ algorithm or piece of software that can pull the plug, stopping all or a specific portion of the activity, whenever it sees divergent conditions beyond a certain threshold.
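
As a rough illustration of the idea, the sketch below implements an overseer that watches an algorithm's numeric output stream and trips whenever a value drifts too many standard deviations away from recent behaviour. The window size, minimum baseline and divergence threshold are invented parameters for the example, not values taken from any real system.

```python
# A toy "circuit breaker": an overseer that halts an algorithm whose output diverges
# from its recent behaviour. All thresholds here are illustrative assumptions.
from collections import deque

class CircuitBreaker:
    def __init__(self, window=100, max_divergence=3.0):
        self.history = deque(maxlen=window)    # recent outputs that define "normal" behaviour
        self.max_divergence = max_divergence   # tolerated distance from normal, in standard deviations
        self.tripped = False

    def check(self, output):
        """Return False (and stay tripped) once the output diverges beyond the threshold."""
        if self.tripped:
            return False
        if len(self.history) >= 10:            # wait for a minimal baseline before judging divergence
            mean = sum(self.history) / len(self.history)
            variance = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = variance ** 0.5 or 1e-9
            if abs(output - mean) / std > self.max_divergence:
                self.tripped = True            # "pull the plug": the caller should halt this component
                return False
        self.history.append(output)
        return True

# Usage with a synthetic output stream: steady behaviour, then a sudden rogue value.
breaker = CircuitBreaker()
stream = [1.0, 1.2, 0.9, 1.1, 0.8] * 20 + [25.0]
for value in stream:
    if not breaker.check(value):
        print("Circuit breaker tripped: halting this portion of the system")
        break
```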

However, this cannot completely solve the problem of rogue algorithms.

When these algorithms are used within large computer systems, no human can monitor the volume and speed of the interactions. We have to use algorithms to monitor the performance of algorithms generated by other algorithms.

It is the beginning of what John Danaher calls an algocracy – an algorithm-driven artificial intelligence revolution.

“By gradually pushing human decision-makers off the loop, we risk creating a ‘black box society’. This is one in which many socially significant decisions are made by ‘black box AI’. That is: inputs are fed into the AI, outputs are then produced, but no one really knows what is going on inside. This would lead to an algocracy, a state of affairs in which much of our lives are governed by algorithms.”

Global spending on cybersecurity products and services is predicted to exceed £1 trillion over the five years from 2017 to 2021.

By 2020, 60% of digital businesses will suffer a major service failure due to the inability of IT security teams to manage digital risk, according to Gartner. If we marry all this new Internet of Things (IoT) data with artificial intelligence (AI) and machine learning, there’s a chance to win the fight against cybercriminals.

Conclusion
When it comes to cyber security, businesses need to act now to tighten up cyber defences. With large-scale security breaches only increasing in number over recent years, organisations both big and small should consider investing in AI systems designed to bolster their defences.

With the Centre for Cyber Safety and Education revealing that the world will face a shortfall of 1.8 million cyber security professionals by 2022, we are reaching a critical point where urgent action is needed.

Not only must organisations invest in preventative AI, but the government must continue to back the development of the next generation of technology professionals. After all, there’s no use in having the technology without skilled humans knowing how to use it.