For as long as cybersecurity has existed, keeping information safe has been a mostly human endeavor.
While the technical aspects have been carried out by finely tuned programs, humans designed the programs, found the bugs and worked to fix them.
But as technology grows ever smarter, it seems inevitable that artificial intelligence, or AI, will one day take over the job of uncovering flaws in the cyberworld entirely.
The question that remains, however, is whether a future, AI-driven cybersecurity world will include us. Will we still need analysts, the people who create responsive technology to combat cybercrime?
Researchers from the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology recently introduced a new system that combines AI and human efforts to detect cyberattacks. The researchers say the result, dubbed AI2, can detect 85 percent of cyberattacks while also reducing the number of “false positives” often picked up in cybersecurity scans to a little under 5 percent.
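In plain terms, a “false positive” is a benign event the system wrongly flags as an attack. The two figures the researchers quote are standard detection metrics, computed roughly as in the short sketch below. This is a generic illustration of the math, not the MIT evaluation code, and the example labels are invented:

```python
# Generic illustration of the two metrics quoted above (detection rate
# and false-positive rate). The label arrays are invented for the
# example; this is not the MIT team's evaluation code.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # ground truth: 1 = real attack
y_pred = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]   # what the detector flagged

true_positives  = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
false_positives = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
attacks = sum(y_true)
benign  = len(y_true) - attacks

detection_rate      = true_positives / attacks    # AI2 reports ~0.85
false_positive_rate = false_positives / benign    # AI2 reports just under 0.05
print(detection_rate, false_positive_rate)
```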
“You can think about the system as a virtual analyst,” Kalyan Veeramachaneni, one of AI2’s developers, said in an MIT statement. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”
The team says that many of the roadblocks to identifying, fixing and preventing cyberattacks stem from two sources: attack forms that constantly change, and human analysis that is costly and slow.
“Relying on analysts to investigate attacks is costly and time-consuming,” a paper explaining the MIT project reads. “We present a solution that combines analysts’ experience and intuition with state-of-the-art machine learning techniques to provide an end-to-end, artificially intelligent solution.”
AI2 is pitched as a way to fight increasingly intelligent hackers while freeing up human analysts, who the researchers say are already bogged down sorting through immense amounts of data in search of attacks. The program picks the top 200 abnormal events in a set of data and delivers them to a human expert, who determines which are real cyberthreats.
AI2 then takes that information and commits it to memory for the next round of data, MIT says, thereby allowing it to “learn” and detect those threats if they come up again.
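In machine learning terms, the loop the researchers describe is a form of human-in-the-loop, or “active,” learning. The sketch below is one plausible reading of that cycle, not the team’s code: the event data, the choice of anomaly scorer and the analyst_labels() helper are all hypothetical stand-ins.

```python
# Hypothetical sketch of the AI2-style cycle described above:
# score events, show the top 200 outliers to an analyst, then train
# on the analyst's verdicts so similar threats are caught next round.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
events = rng.normal(size=(10_000, 20))        # one row of features per logged event

# 1. Unsupervised pass: score every event for abnormality.
scorer = IsolationForest(random_state=0).fit(events)
scores = scorer.score_samples(events)         # lower score = more anomalous

# 2. Hand the 200 most abnormal events to a human expert.
top200 = np.argsort(scores)[:200]

def analyst_labels(indices):
    """Hypothetical stand-in for the analyst's verdicts (1 = real attack)."""
    return rng.integers(0, 2, size=len(indices))

labels = analyst_labels(top200)

# 3. "Commit to memory": fit a supervised model on the feedback so the
#    same kinds of threats are flagged automatically in the next round.
detector = RandomForestClassifier(random_state=0).fit(events[top200], labels)
flagged = detector.predict(rng.normal(size=(5_000, 20)))
```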
But while this new technology builds on human knowledge to create a smarter detecting machine, will the day ever come that a cybersecurity program can act independently of people?
Some companies are already exploring the idea of cybersecurity programs that rely solely on AI. In an article for Information Week’s Dark Reading, an information security website, writer Andrew Thomson discussed four startups working on this technology, including Darktrace, a company based in the United Kingdom:
“Unlike traditional cybersecurity systems in which malicious threats and viruses are manually added to a list and then blocked, Darktrace uses a system based on machine learning and mathematics that can detect threats without any prior knowledge of what it is looking for, cutting out the need for human intervention,” Thomson wrote.
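The distinction Thomson draws can be made concrete in a few lines of code. The sketch below contrasts a hand-maintained signature list with an unsupervised model that learns what “normal” traffic looks like and flags deviations; it is a rough illustration of the approach, not Darktrace’s actual system, and the traffic features are invented for the example.

```python
# Signature-based vs. learning-based detection, in miniature.
# Invented data; a rough illustration, not Darktrace's method.
import numpy as np
from sklearn.ensemble import IsolationForest

# Signature-based: blocks only what a human has already added to the list.
known_bad_hashes = {"44d88612fea8a8f36de82e1278abb02f"}  # MD5 of the EICAR test file
def signature_check(file_hash):
    return file_hash in known_bad_hashes

# Learning-based: model normal behavior, then flag outliers without
# being given any list of threats in advance.
normal_traffic = np.random.default_rng(1).normal(size=(5_000, 8))
model = IsolationForest(contamination=0.01, random_state=1).fit(normal_traffic)
new_traffic = np.random.default_rng(2).normal(size=(100, 8))
verdicts = model.predict(new_traffic)          # -1 marks a possible threat

print(signature_check("unseen-hash"))          # False: new threats slip past the list
print(int((verdicts == -1).sum()), "events flagged with no signature needed")
```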
By teaching these programs to think for themselves, Thomson says, cybersecurity professionals are able to bridge the ever-increasing knowledge gap between hackers and those who work to defend data.
But the increased efficiency of AI comes with some concerns. A primary one, raised in a 2015 article published by the Association for Computing Machinery, is that AI creations themselves could be at risk of being hacked.
“AI algorithms are as vulnerable as any other software to cyberattack,” the article reads. “As we roll out AI systems, we need to consider the new attack surfaces that these expose. … Before we put AI algorithms in control of high-stakes decisions, we must be confident these systems can survive large-scale cyberattacks.”
Others argue that no AI has yet been developed that can accurately gauge the value or depth of a threat. Information Age editorial director Ben Rossi makes the case that humans are needed to create the artificial intelligence that can identify and shut down attackers.
“[People’s] insight and knowledge are vital to establishing how to react to a specific scenario and whether or not a reaction is even needed,” Rossi said. “Common sense and the five senses have yet to be replicated and these play no small role in the management and control of security.”
As security professionals struggle to fix the hacking problem, the issue only worsens. Symantec says 430 million new pieces of malware were discovered in 2015, a 36 percent increase from 2014, and that 500 million personal records were stolen or lost that same year.
Whether or not cybersecurity becomes an entirely machine-run enterprise, innovation should not be inhibited by fear of incorporating artificial intelligence, Rossi said.
“The arrival of machine-led security systems capable of machine learning and swift responses is not one that should be met with concern,” he said. “Instead, it is an opportunity for security professionals to expand their reach and refine their skills, harnessing the technology to create systems that are aware and ready.”
The need for a better solution is more pressing than ever, and despite some concerns about losing the human element, there is a growing sense that AI may be the only solution that can keep up as hackers grow more organized and more intelligent.