Artificial Intelligence is seen by many as the latest answer to a growing threat: the rise of cyber attacks in recent years. Machine learning and other AI techniques can be embedded in the algorithms of virtually any software. Given how much of today’s world runs digitally, AI seems to be the answer to cybercrime damages projected to cost the world $6 trillion annually by 2021. However, while AI can dramatically boost cybersecurity, it can also make the task even more complex. AI can be adopted and modified by hackers, who are always eager to evolve and to use the latest technology on the market to cause harm.

IBM Chairperson and President Ginni Rometty recently said, “Cybercrime, by definition, is the greatest threat to every profession, every industry, and every company in the world.” Many organizations have taken this to heart and have finally started to develop a consistent cybersecurity strategy, though one mainly based on AI applications. As recent figures from Webroot show, approximately 87% of US cybersecurity professionals use AI.

Most of these AI applications rely on machine learning. AI/ML has therefore been built into a whole array of cyber measures marketed as intelligent security solutions: protocols, software, or even raw code added to the IT systems of a company or institution. AI then adds another layer of security capable of learning from threats, security breaches, and other data collected through these mechanisms. In other words, AI works through enormous volumes of data and learns from it.
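At its simplest, "learning from collected data" can mean building a statistical baseline of normal activity and flagging deviations. The sketch below is a minimal illustration of that idea, not any vendor's actual product; the login-rate figures are hypothetical.

```python
from statistics import mean, stdev

def train_baseline(event_rates):
    """Learn a simple baseline (mean and spread) from historical event rates."""
    return mean(event_rates), stdev(event_rates)

def is_anomalous(rate, baseline, threshold=3.0):
    """Flag a new observation that deviates too far from the learned baseline."""
    mu, sigma = baseline
    return abs(rate - mu) > threshold * sigma

# Hypothetical login-attempt counts per minute collected by a security system
history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
baseline = train_baseline(history)

print(is_anomalous(14, baseline))   # normal traffic -> False
print(is_anomalous(250, baseline))  # possible brute-force burst -> True
```

Real intelligent security products use far richer models, but the principle is the same: the more historical data the system sees, the sharper its notion of "normal" becomes.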

However, what is good for cybersecurity experts is also good for hackers. All that data out in the vastness of the internet – more than 2.5 quintillion bytes are thought to be added to it every single day – can be used by the other side of the cybersecurity spectrum. In the same way that AI-powered systems can use this data to prevent security breaches, hackers can use it to crack them.

AI expert Ahmed Banafa explained the process quite accurately in a recent article:

For example, AI can be used to automate the collection of certain information — perhaps relating to a specific organization — which may be sourced from support forums, code repositories, social media platforms and more. Additionally, AI may be able to assist hackers when it comes to cracking passwords by narrowing down the number of probable passwords based on geography, demographics and other such factors.

The ways hackers can use AI to defeat security systems are varied and creative. Take sandboxing, one of the latest cybersecurity tools, as an example. This mechanism isolates running programs, usually to keep system failures or software vulnerabilities from spreading. However, newly discovered malware is “able to recognize when they are inside a sandbox, and wait until they are outside the sandbox before executing the malicious code,” as Mr. Banafa points out. This malware, in other words, is powered by AI algorithms.
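Analysts typically describe sandbox evasion as a set of environment probes: analysis VMs tend to be short-lived, under-provisioned, and to fast-forward sleeps. The sketch below illustrates the shape of such checks for defensive understanding; the specific heuristics and thresholds are illustrative assumptions, not taken from Banafa's article.

```python
import os
import time

def looks_like_sandbox():
    """Illustrative environment probes of the kind evasive malware reportedly uses."""
    checks = []
    # Analysis VMs frequently expose only a single virtual core.
    cores = os.cpu_count()
    checks.append(cores is not None and cores < 2)
    # Some sandboxes fast-forward sleeps to speed up analysis;
    # on real hardware a 0.1 s sleep should take at least ~0.1 s.
    start = time.monotonic()
    time.sleep(0.1)
    checks.append(time.monotonic() - start < 0.05)
    return any(checks)

print(looks_like_sandbox())
```

Defenders counter this by making sandboxes look indistinguishable from real endpoints, which is exactly the arms race the article describes.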

Ransomware is another type of attack that has been prolific in recent years. Since the devastation caused by WannaCry and NotPetya in 2017, many strains, mainly worms and trojans, have become complex and almost impossible to remove. These ransomware programs effectively hold hostage all the data on the targeted computer, and a ransom is demanded to release it. New connected technologies such as the Internet of Things bring new layers of vulnerability, as not just one device must be protected, but many others too.

Surveillance cameras, production lines in factories, smartphones and other home devices, electric grids and power plants… the list of loose ends goes on. Hackers know this, and in October 2016 they perpetrated an attack that disrupted services such as Twitter, Netflix, The New York Times, and PayPal across the US. Attackers used “countless Internet of Things (IoT) devices that power everyday technology like closed-circuit cameras and smart-home devices. [These] were hijacked by the malware, and used against the servers,” noted Mr. Banafa. Such examples are just the tip of the iceberg of a growing intelligent threat. But what makes AI most risky is how companies use machine learning and other such capabilities within their applications. Pressed for time and rushing to implement this trendy, time-saving technology, companies have put these algorithms in the hands of IT experts who are not sufficiently trained in them, turning products into a cybersecurity risk before they ever reach the market.

As MIT Technology Review’s Martin Giles explained in a thorough article about AI, “Many products being rolled out involve ‘supervised learning,’ which requires firms to choose and label data sets that algorithms are trained on—for instance, by tagging code that’s malware and code that is clean.” Artificial intelligence code is only as intelligent as programmers make it.

As such, several risks arise from this implementation. One is that, in rushing to get their products to market, companies use training data that hasn’t been thoroughly scrubbed of anomalous data points, which could lead the algorithm to miss some attacks. Another is that hackers who gain access to a security firm’s systems could corrupt data by switching labels, so that some malware examples are tagged as clean code.
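The label-switching risk can be demonstrated with a deliberately tiny classifier. The sketch below trains a nearest-centroid "malware detector" on a one-dimensional feature (a hypothetical count of suspicious API calls), then retrains it after an attacker flips two malware labels to clean; all data here is made up for illustration.

```python
from statistics import mean

def train_centroids(samples, labels):
    """Nearest-centroid classifier: learn one average feature value per class."""
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    return {y: mean(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    """Assign the class whose centroid is closest to the sample."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Hypothetical 1-D feature: suspicious API calls counted per binary.
features = [1, 2, 3, 9, 10, 11]
clean_labels = ["clean", "clean", "clean", "malware", "malware", "malware"]

# An attacker with access to the training set flips two malware labels.
poisoned_labels = ["clean", "clean", "clean", "clean", "clean", "malware"]

honest = train_centroids(features, clean_labels)     # clean≈2, malware≈10
poisoned = train_centroids(features, poisoned_labels)  # clean≈5, malware≈11

print(predict(honest, 7))    # malware
print(predict(poisoned, 7))  # clean  <- the poisoned model waves it through
```

With just two flipped labels, the clean-class centroid drifts toward malicious territory and a moderately suspicious sample now passes as clean, which is exactly the failure mode Giles warns about.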

Such weaknesses could let hackers target these models and slip their own malware past the entire security stack without ever being flagged. Attacks of this kind are likely to become more common as long as companies prioritise market demands over well-developed and well-deployed cybersecurity measures.

AI may just be the perfect companion for what has traditionally been a mastery of deception. Hackers have always tried to get past antivirus software, firewalls, anti-malware tools and other security barriers, and to do so they have deployed every imaginable way of blending into a system without being recognised. Viruses installing themselves in the System32 folder is an old trick by now, and malware deliberately renaming itself to go undetected was common practice in the early 2000s. Thanks to artificial intelligence, the possibilities multiply enormously.

Aghiath Chbib
Aghiath Chbib is a cybersecurity, fintech, blockchain and digital forensics business leader and director/CEO. He is the founder of Seecra, a financial empowerment platform and app, and an established executive with close to two decades of proven success driving business development and sales across Europe, the Middle East, and North Africa, with expert knowledge of cybersecurity, digital forensics, blockchain, and data protection. Aghiath is an author at IntelligentHQ, Openbusinesscouncil, and Hedgethink.