How Will Artificial Intelligence And Machine Learning Improve Governmental Cyber Security Strategies?

Cybersecurity is a deep-rooted risk to any state's national security. Hackers constantly look for ways to use technology with malicious intent, to say nothing of the considerable list of adversaries a country can accumulate over the years. In a national security context, both the risk and the potential impact rise significantly, because what is at stake is the sensitive data of millions of citizens and companies, records of executive officers and members of the government, state archives and more.

Sadly, not all governments take this risk as seriously as they should, and attempts to build cyber-defense strategies, in most nations, lack budget, expert personnel and even real domain knowledge. Given this absence of sound policies, artificial intelligence may well be a good starting point from which to build the barriers that keep potential threats out.

Governments hold the sensitive data of millions of unaware citizens, yet their cyber-defense strategies leave much to be desired. According to an article by the McKinsey Global Institute, “Many countries have yet to clarify their cyber-defense strategies across all dimensions of cybersecurity or to impose a single governance structure.” That lack of clarity can show up as a disorganized response to incidents and ineffective use of already insufficient resources.

According to McKinsey's article, an effective national cybersecurity strategy should be centralized and adequately designed. They state: “a single organization should have overall responsibility for cybersecurity, bringing operational activity and policy together with clear governance arrangements and a single stream of funding.”

Most advanced countries have started to introduce AI into their departments to develop effective strategies against cyber attacks. For instance, AI applications are being developed for fighting crime, for surveillance and the military, as well as for government administration.

Such a strategy should account for many different kinds of cyber attacks, as not all of them are obvious. An especially disturbing example that emerged in recent months has been the way the election processes in both the UK and the USA were likely targeted through manipulative use of platforms such as Facebook. Some consider that such efforts affected election results, and there are also concerns that similar technologies could be used to commit fraud against other areas of the state.

If a comparatively small, privately held organization such as Cambridge Analytica could interfere in public elections in the USA, state-backed units with far greater funding could do much more damage. Nevertheless, using private networks such as Facebook to influence elections is just the tip of the iceberg from a national security viewpoint.

As we can see, the risk introduced by cybersecurity problems is far from trivial. According to an article published by Inside Big Data in late 2018, “there were 5.99 billion recorded malware attacks in the first half of 2018, which doubled the number in 2017 over the same period.” Governments and society as a whole should find ways to bring these numbers down and to ensure that cybersecurity is strong.

AI solutions such as machine learning are set to become important soon, as cybersecurity attacks on national databases have grown in recent years. These solutions can help countries build a strong backstop against such attacks if properly integrated into a coherent national security strategy.

Examples can be seen across the globe: the United Kingdom's National Cyber Security Centre (NCSC) and Estonia's Cyber Security Council are two leading models. Both have added AI-based applications to their many other countermeasures, and both are widely regarded as among the strongest nation-wide strategies.

AI and machine learning applications are being developed to tackle attacks and add additional layers of defense to these networks. The idea is that by feeding massive amounts of data into a system, machine learning can take place and the system can learn to recognize certain kinds of issues and possible threats. This can lead to alerts being raised that help organizations better guard themselves against attacks. Nevertheless, this is not a “plug and play” setup: vast amounts of data are required for the artificial intelligence to do its job in the first place, and gathering that data takes time. Equally, risks increase over time, as hackers learn new ways to get through security systems.
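To make the alerting pattern described above concrete, here is a minimal, illustrative sketch (not any agency's actual system, and the traffic numbers are invented): learn a statistical baseline from historical observations, then raise an alert when a new observation deviates sharply from it.

```python
import statistics

def build_baseline(samples):
    """Learn a simple baseline (mean and standard deviation)
    from historical observations, e.g. hourly failed-login counts."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Alert when an observation sits more than `threshold`
    standard deviations away from the learned baseline."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical history: hourly failed-login counts on a government portal.
history = [98, 103, 101, 99, 104, 102, 100, 97, 105, 101]
mean, stdev = build_baseline(history)

print(is_anomalous(102, mean, stdev))  # typical hour: no alert (False)
print(is_anomalous(540, mean, stdev))  # sudden spike: alert (True)
```

Real systems learn far richer features than a single count, but the shape is the same: the more representative history the model sees, the fewer false alerts it raises, which is why the data-gathering phase takes time.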

This risk is partly mitigated by the fact that hackers tend to build new attacks on top of techniques they have used before. This means that using machine learning to recognize new attacks based on what was learned from previous ones does have its advantages, at least to some extent. It has the potential to surface threats faster and more effectively. That carries significant benefits in terms of reducing the impact of attacks, since pinpointing the start of a problem allows corrective or protective action to be taken before other parts of a system are affected, or before other organizations are attacked. It could also be important in sparing IT teams from having to deal with trivial issues, freeing them up for more significant challenges, such as predicting potential new types of attack.
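As an illustrative sketch of that “new attacks resemble old ones” observation (a toy example with an invented payload library, not a description of any deployed product), a detector can compare an incoming payload against known attack payloads using character n-gram similarity:

```python
def trigrams(s):
    """Set of character trigrams of a payload string."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a, b):
    """Jaccard similarity between two trigram sets (0.0 to 1.0)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def resembles_known_attack(payload, known_payloads, threshold=0.5):
    """Flag a payload that closely resembles any previously seen attack."""
    grams = trigrams(payload)
    return any(jaccard(grams, trigrams(k)) >= threshold for k in known_payloads)

# Hypothetical library of previously observed attack payloads.
known = ["' OR 1=1 --"]

print(resembles_known_attack("' OR 1=1--", known))   # slight variant: flagged
print(resembles_known_attack("hello world", known))  # benign input: not flagged
```

The advantage is exactly the one described above: a variant an attacker tweaks slightly still scores close to its ancestor, so it can be caught early, while genuinely novel attacks remain the hard case.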

A challenge still waiting to be solved is how to get AI to resolve problems in the same ways that humans can. To date, AI has been programmed to solve particular problems and to learn from the past (machine learning). Getting machines to think like humans, however, is difficult and remains unresolved: while conventional, narrow problems can be solved, the harder question concerns artificial general intelligence, and it is believed that we are many years away from solutions that implement that type of AI. That said, new approaches are starting to emerge that help address these kinds of problems. For instance, deep learning is being used to help machines understand how different kinds of judgments were made in areas that influence society and its functioning, such as decisions relating to criminal justice, or determining whether a person or a business should get financing. At the same time, a technique called “transfer learning” is proving helpful: a machine is first trained to deal with a particular activity, and what it has learned is then applied to another, similar type of activity. These types of AI applications could help address cybersecurity threats.
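To show the transfer-learning idea in miniature (a toy sketch over invented messages, not a production classifier), word weights learned on a larger, related task, such as e-mail phishing, can seed a model for a task with very little data, such as SMS phishing:

```python
import math
from collections import Counter

def word_log_odds(messages, labels, smoothing=1.0, prior=None):
    """Learn per-word log-odds of being malicious; `prior` lets us
    transfer weights learned on a related task as a starting point."""
    bad, good = Counter(), Counter()
    for msg, label in zip(messages, labels):
        (bad if label else good).update(msg.lower().split())
    vocab = set(bad) | set(good) | set(prior or {})
    return {
        w: (prior or {}).get(w, 0.0)
           + math.log((bad[w] + smoothing) / (good[w] + smoothing))
        for w in vocab
    }

def score(message, weights):
    """Higher score means more likely malicious."""
    return sum(weights.get(w, 0.0) for w in message.lower().split())

# Task A: e-mail phishing, with comparatively plentiful (invented) data.
email_weights = word_log_odds(
    ["verify your account password urgent", "urgent click to verify password",
     "meeting agenda attached", "lunch tomorrow"],
    [1, 1, 0, 0])

# Task B: SMS phishing, with very little data; transfer the e-mail weights.
sms_weights = word_log_odds(
    ["click link to verify", "see you at 5"],
    [1, 0],
    prior=email_weights)

print(score("urgent verify your password", sms_weights) > 0)  # True
print(score("see you tomorrow", sms_weights) < 0)             # True
```

Words like “urgent” and “verify”, learned from the e-mail task, carry useful signal into the SMS task before the SMS model has seen them often, which is the essence of transfer learning.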

However, AI's potential is a double-edged sword: what works for businesses and governments also works for those who want to damage them. Nor does the ongoing arms race among organizations help. Demand for the latest, polished AI software is remarkable and, as a result, AI solutions are being developed at a speed that developers can barely keep up with. This means chances may be taken and corners cut, producing unpolished applications that can themselves introduce cybersecurity risks. Because, yes, when we talk about software, there are always cybersecurity risks, and AI is no different.

Artificial intelligence (AI) offers excellent opportunities for the world overall, and for the development of national cybersecurity strategies in particular. It has led to a whole range of solutions for improving productivity and efficiency and, above all, for providing essential tools to streamline processes within governments and their public institutions.

Governments need to make sure that cybersecurity is recognized as a top priority, and they also need to ensure that AI is developed responsibly. If that does not happen, malicious use or privacy violations could cause the public to lose whatever trust it may have had in these technologies. Finding ways to strengthen security protections will be especially crucial in this regard.


Aghiath Chbib
Aghiath Chbib is a Cyber Security, Fintech, Blockchain and Digital Forensics business leader and Director/CEO. He is the founder of Seecra, a new financial empowerment platform and app. An established executive with close to two decades of proven success driving business development and sales across Europe, the Middle East and North Africa, he has expert knowledge of cybersecurity, digital forensics, blockchain and data protection. A detail-oriented, diplomatic, highly ethical thought leader and change agent, he is equipped with the ability to close multi-million-dollar projects allowing for rapid market expansion, and is a business-minded professional adept at cultivating and maintaining strategic relationships with senior government officials, business leaders and stakeholders. A passionate entrepreneur, he has an extensive professional network comprising hundreds of customers with access to major security system integrators and suppliers. Aghiath is also an author at Intelligenthq, Openbusinesscouncil and Hedgethink.
