We are living in a disruptive period of technological innovation, and it is imperative to adjust to the new cybersecurity realities. The physical and digital worlds are meshing together, and people and gadgets are becoming exponentially more connected.
In cybersecurity, the increasing sophistication of threat actors, particularly those with state-sponsored or criminal agendas, is evident in their hunt for vulnerabilities and their delivery of malware, especially through the adoption and automation of machine learning (ML) and deep learning techniques under the umbrella of artificial intelligence (AI).
AI/ML technologies are transformative and are primarily intended for tasks like speech recognition, learning and planning, and problem-solving. Machine learning, a subcomponent of AI, comes in four types:

1) Supervised learning uses labeled datasets to train algorithms for accurate classification and for regression analysis that predicts outcomes.

2) Unsupervised learning uses unlabeled, unclassified datasets to make predictions without human intervention. It is helpful for categorizing or grouping unsorted data to reveal hidden patterns or groupings, which makes it ideal for clustering, anomaly detection, and exploratory data analysis.

3) Semi-supervised learning lets machines learn from all the available data, combining the advantages of supervised and unsupervised learning to improve accuracy and performance.

4) Reinforcement learning takes a trial-and-error approach, with a feedback loop that allows an agent to learn from its experiences. Its main advantage is the ability to learn from experience and improve performance over time.

Source: https://emeritus.org
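The first two categories can be sketched in a few lines of code. The points, labels, and helper names below are invented purely for illustration: a nearest-centroid classifier stands in for supervised learning (it needs labeled examples), and a naive two-cluster k-means stands in for unsupervised learning (it groups the same points with no labels at all).

```python
# Toy illustration of supervised vs. unsupervised learning in pure Python.
# All data and names here are invented for illustration only.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: labeled examples train a nearest-centroid classifier ---
labeled = {
    "benign":    [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "malicious": [(4.0, 4.1), (3.9, 4.3), (4.2, 3.8)],
}
centroids = {label: centroid(pts) for label, pts in labeled.items()}

def classify(point):
    """Assign the label whose centroid lies closest to the point."""
    return min(centroids, key=lambda lab: dist2(point, centroids[lab]))

print(classify((4.1, 4.0)))   # lands near the "malicious" cluster

# --- Unsupervised: the same points, stripped of labels, grouped by k-means ---
unlabeled = [p for pts in labeled.values() for p in pts]
centers = [unlabeled[0], unlabeled[-1]]          # naive initialization
for _ in range(10):
    groups = [[], []]
    for p in unlabeled:
        groups[0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1].append(p)
    centers = [centroid(g) for g in groups]
print([len(g) for g in groups])                  # two clusters emerge: [3, 3]
```

The contrast is the point: the classifier could never run without the "benign"/"malicious" labels, while k-means recovers the same two groupings from raw coordinates alone.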
Although AI and ML can be useful instruments for cyber-defense, they can also have unintended consequences. The same capabilities that let defenders quickly discover patterns, spot threat anomalies, and improve cybersecurity can be turned around and exploited by threat actors.
Cybercriminals are already probing and attacking target networks with AI and ML techniques, and the frequency of data breaches has increased over the past few years due to advances in data-collection and exfiltration technologies.
Hackers and adversarial governments are already using AI and ML as instruments to locate and exploit gaps in threat-detection models. They employ a number of techniques to do so. Their favored methods typically involve automated phishing campaigns that imitate people, and malware that alters itself in order to deceive or even compromise cyber-defense systems and applications.
TRADITIONAL PASSWORD SECURITY IS UNDERMINED BY AI
Passwords have long been a significant area of cybersecurity. Strong passwords have served as a first-tier defense against cyber-attacks and breaches. With the development of AI and ML tools, however, that defense has been thoroughly diminished, especially against more sophisticated cyber actors who use AI/ML tools to circumvent password protections.
Machine learning techniques and artificial intelligence have made it easier for hackers to get around password-authentication mechanisms. These tools enable hackers to scrape the internet for personal data and uncover passwords. AI combined with social engineering can now decipher passwords far more quickly than earlier systems, and such programs can then use the information gathered to enhance phishing attempts, harvest additional information, and expose weaknesses.
One of hackers' preferred methods of attack is brute-forcing, in which a specialized program checks combinations of letters, numbers, and symbols at speeds far beyond human capability. AI-driven brute-force attacks let hackers attempt millions of candidate passwords each minute and exploit weaknesses in password complexity.
Traditional passwords are subject to statistical attack, since the modern "hacker" is typically an automated bot capable of making billions of attempts per second with the most popular passwords against numerous targets. Even when a user is shrewd enough to select a more secure password, bots may still attempt to access password-protected systems by other means, such as ransomware, malware, phishing, insider threats, and distributed denial-of-service attacks.
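The two-stage logic described above, statistically likely guesses first, then exhaustive search, can be sketched in a few lines. The wordlist, the target password "ab1", and the tiny keyspace below are all invented for illustration; real attacks run against leaked hash dumps with GPU-accelerated tooling, not a Python loop.

```python
# Minimal sketch of a dictionary + brute-force attack, assuming the
# attacker holds a stolen, unsalted SHA-256 password digest.
# The wordlist and target password are invented for illustration.
import hashlib
from itertools import product

def sha256(text):
    return hashlib.sha256(text.encode()).hexdigest()

target = sha256("ab1")  # stand-in for a leaked hash

# Phase 1: try the most popular passwords first (statistically cheapest).
wordlist = ["123456", "password", "qwerty", "letmein"]
found = next((w for w in wordlist if sha256(w) == target), None)

# Phase 2: exhaustive search over a deliberately tiny keyspace.
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
if found is None:
    for length in range(1, 4):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if sha256(candidate) == target:
                found = candidate
                break
        if found:
            break

print(found)  # → ab1
```

Even this naive loop recovers a three-character password almost instantly, which is why defenses focus on salting, slow hash functions, and rate limiting rather than hoping the attacker gives up.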
The reality in 2023 is that AI password crackers can breach most passwords in seconds and more difficult ones in minutes. Longer passwords and passphrases make the job harder, but as the computational capabilities of AI and ML continue to evolve, those solutions will see a significant reduction in efficacy.
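The arithmetic behind "longer is harder" is straightforward: the keyspace grows exponentially with length. The guess rate below is an assumed round figure for illustration, not a benchmark of any real cracking rig.

```python
# Back-of-the-envelope keyspace math: why password length dominates.
# The guess rate is an assumed figure, not a measured benchmark.
GUESSES_PER_SECOND = 1e10

def seconds_to_exhaust(charset_size, length, rate=GUESSES_PER_SECOND):
    """Time to try every string of the given length over the charset."""
    return charset_size ** length / rate

for length in (8, 12, 16):
    secs = seconds_to_exhaust(95, length)  # 95 printable ASCII characters
    print(f"{length:2d} chars: {secs / 86400 / 365.25:.3g} years")
```

At the assumed rate, an 8-character keyspace falls in days while 12 characters already takes geological time, which is exactly why attackers prefer dictionary, phishing, and side-channel shortcuts over raw exhaustion, and why faster hardware keeps shifting the safe minimum upward.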
AI technologies are also negating the cybersecurity value of two-factor authentication. For example, the common use of CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) is becoming obsolete. AI bots have become so adept at mimicking human vision and reasoning that CAPTCHAs are no longer a barrier.
Making CAPTCHAs more complex is not the answer. Cengiz Acartürk, a cognition and computer scientist at Jagiellonian University in Kraków, Poland, notes that designing better CAPTCHAs runs into a built-in ceiling. "If it's too difficult, people give up," Acartürk says. Whether CAPTCHA puzzles are worth adding to a website may ultimately depend on whether the next step matters enough to users that a tough puzzle won't turn visitors away while still providing an appropriate level of security. AI bots are better than humans at solving CAPTCHA puzzles (qz.com)
Another way AI undermines passwords is through keylogging. AI can enable keyloggers to track your keystrokes in order to retrieve your passwords. According to a University of Surrey study, AI can be trained to recognize which key is being pressed more than 90% of the time simply by listening to it.
Using an Apple MacBook Pro, the group recorded the sound of each of the laptop's keys being pressed 25 times with distinct finger and pressure combinations. The sounds were captured both over a smartphone call and during a Zoom meeting. A machine learning system was then trained on part of that data to recognize the sound of each key. Evaluated on the remaining data, the algorithm accurately identified which keys were being pressed 95% of the time for the call recording and 93% of the time for the Zoom recording. What secrets can AI pick up on by eavesdropping on your typing? (govtech.com)
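The train-then-evaluate procedure behind that study can be sketched at toy scale. The real attack extracted spectrogram features from audio and fed them to a deep model; in the sketch below, each "recording" is an invented three-number feature vector with synthetic noise, and the classifier is simple nearest-neighbor matching, purely to illustrate the pipeline of labeled training recordings followed by evaluation on unseen ones.

```python
# Toy sketch of the train/test procedure behind keystroke-sound
# classification. The real study used spectrograms and a deep model;
# here each "recording" is an invented feature vector, for illustration.
import random

random.seed(0)
# Each key gets a made-up acoustic "signature" (three fake features).
signatures = {
    "a": [0.2, 0.8, 0.1],
    "b": [0.9, 0.3, 0.5],
    "c": [0.4, 0.1, 0.9],
}

def record(key):
    """Simulate one noisy recording of a key press."""
    return [v + random.gauss(0, 0.05) for v in signatures[key]]

# Training data: 25 labeled recordings per key, echoing the study's setup.
train = [(key, record(key)) for key in signatures for _ in range(25)]

def d2(a, b):
    """Squared distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(sample):
    """Label a recording with the key of its nearest training example."""
    return min(train, key=lambda kv: d2(kv[1], sample))[0]

# Evaluation on fresh recordings the classifier has never seen.
tests = [(key, record(key)) for key in signatures for _ in range(20)]
accuracy = sum(classify(s) == k for k, s in tests) / len(tests)
print(f"accuracy: {accuracy:.0%}")
```

With well-separated signatures and modest noise, even this trivial classifier scores highly, which hints at why real keystroke sounds, far richer in distinguishing features, proved so recoverable.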
As a result of this increasing vulnerability, other methods have been used to bolster password security. Although many of these alternatives have been shown to significantly improve security over passwords, options such as biometrics, tokens, passkeys, and passphrases have not been able to unseat passwords.
For instance, biometrics, such as fingerprints and facial recognition, have been around for more than 20 years. Biometrics provides a very convenient answer to the question of "something that you are." However, due to privacy concerns, government agencies and privacy groups oppose biometrics like facial recognition. And even their names suggest that passkeys, passphrases, and tokens are password-centric; in essence, they are just ordered strings of characters that serve as passwords.
And password managers have not fared much better. In fact, a password manager company called LastPass confirmed that it suffered data breaches. In one instance, the company suffered the theft of proprietary information after a hacker used a compromised developer account to access the company’s development environment. The incident compromised portions of the company’s source code and some proprietary technical information. LastPass Confirms Hack And Theft Of Source Code | Silicon UK Tech News