Google’s email platform is under attack from hackers armed with AI. Gmail certainly isn’t the only target of such attacks, but with over 2.5 billion users it is the biggest. I’m endlessly targeted by phishing attempts myself, and I’m getting tired of it. Here’s what you need to know to protect yourself.
The AI Threat To Billions Explained
No email server is immune to advanced attacks from threat actors looking to exploit everyone’s email inbox. Google itself has warned about a second wave of Gmail attacks that include extortion and invoice-based phishing, for example. At least once a week I receive fake invoices to my personal email accounts claiming I need to approve and pay for services rendered. With Apple also warning iPhone users about spyware attacks, now is not the time to be cyber-complacent.
McAfee recently issued a new warning about the biggest threat facing Gmail users: AI-powered phishing attacks that are frighteningly convincing.
“Scammers are using artificial intelligence to create highly realistic fake videos or audio recordings that pretend to be authentic content from real people,” McAfee warned. “As deepfake technology becomes more accessible and affordable, even people with no prior experience can produce convincing content.”
The Convincing AI-Powered Attacks Targeting Email Users
Sharp U.K. research has also concluded that “AI is being weaponized for cyber attacks,” and pointed to six specific attack methodologies that account for much of this weaponization. “While AI offers great benefits in various fields,” the report stated, “its misuse in cyber attacks represents a significant and growing threat.” Those threats were:
- The use of AI in password cracking—AI is taking over from brute-force password-cracking strategies. “AI algorithms can analyze millions of passwords and detect common trends, allowing hackers to generate highly probable password guesses.” This is far more efficient than bog-standard brute-forcing, letting attackers complete this stage of an attack far faster and at less cost in time and resources.
- Cyberattack automation—anything that can be automated will be automated when it comes to determined hackers and cybercriminals looking for ways into your network and data, from vulnerability scanning to attack execution at scale. By deploying AI-powered bots to scan thousands of websites or networks simultaneously, the Sharp U.K. report said, weaknesses can be found and exploited. And that exploitation process can also be automated with the help of AI. “AI-powered ransomware can autonomously encrypt files, determine the best way to demand ransom, and even adjust the ransom amount based on the perceived wealth of the target,” the researchers said.
- Deepfakes—these are being used in attacks targeting Gmail users. “In one high-profile case,” the report said, “a deepfake audio of a CEO’s voice was used to trick an employee into transferring $243,000 to a fraudster’s account. As deepfake technology continues to evolve, it becomes increasingly difficult for people and organizations to distinguish between real and fake, making this a powerful tool for cyber attackers.”
- Data mining—because AI can enable an attacker to not only collect but also analyze data at scale and at speeds that would have been considered impossible just a couple of years ago, it’s hardly surprising that this is a resource that’s being used and used hard. “By using machine learning algorithms, cybercriminals can sift through public and private databases to uncover sensitive information about their targets,” the report warned.
- Phishing attacks—the methodology most applicable to the Gmail threat: the use of AI to construct and deliver convincing, believable social engineering attacks. “AI tools can analyze social media profiles, past interactions, and email histories,” the report warned, “to craft messages that seem legitimate.”
- The evolution of malware, at scale—AI-powered malware is a thing in its own right, often coming with the ability to adapt behavior in an attempt to evade detection. “AI-enhanced malware can analyze network traffic to identify patterns in cyber security defenses,” the report said, “and alter its strategy to avoid being caught.” Then there’s the small matter of code-changing polymorphism to make it harder for security researchers to recognize and, as we’ll explore in a moment, the use of large language models to create these subtle malware variations at speed and scale.
What Gmail And McAfee Recommend You Do To Avoid AI Attacks
My advice: always err on the side of caution and, frankly, don’t be gullible. McAfee’s advice is to “protect yourself by double-checking any unexpected requests through a trusted, alternate method and relying on security tools designed to detect deepfake manipulation.”
Google’s advice for mitigating attacks against Gmail can be broken down into these main points:
- If you receive a warning, avoid clicking on links, downloading attachments or entering personal information. “Google uses advanced security to warn you about dangerous messages, unsafe content or deceptive websites,” Google said. “Even if you don’t receive a warning, don’t click on links, download files or enter personal info in emails, messages, web pages or pop-ups from untrustworthy or unknown providers.”
- Don’t respond to requests for your private info by email, text message or phone call and always protect your personal and financial info.
- If you think that a security email that looks as though it’s from Google might be fake, go directly to myaccount.google.com/notifications. “On that page,” Google said, “you can check your Google Account’s recent security activity.”
- Beware of urgent-sounding messages that appear to come from people you trust, such as a friend, family member or person from work.
- If you click on a link and are asked to enter the password for your Gmail, Google account or another service: Don’t. “Instead, go directly to the website that you want to use,” Google said, and that includes your Google/Gmail account login.
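Google’s “go directly to the website” tip can be made a little more concrete. As a rough illustration only (this is not a tool Google or McAfee provides, and `host_matches` is a hypothetical helper name), the Python sketch below shows why a link’s visible text isn’t enough: phishing links often bury a trusted brand name inside a subdomain of an attacker-controlled domain, so any check has to look at the full host.

```python
from urllib.parse import urlparse

def host_matches(url: str, expected_domain: str) -> bool:
    """Rough check that a link's host really belongs to the expected domain.

    Returns True only when the host IS the domain, or is a subdomain of it
    (i.e. ends with '.' + domain). A host like
    'accounts.google.com.evil.example' contains 'google.com' as a substring
    but does NOT belong to google.com, and correctly fails this check.
    """
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# Legitimate Google subdomain passes:
print(host_matches("https://myaccount.google.com/notifications", "google.com"))  # True
# Lookalike phishing host fails, despite containing "google.com":
print(host_matches("https://accounts.google.com.evil.example/login", "google.com"))  # False
```

Even a check like this is only a sketch: it does nothing about internationalized lookalike domains (punycode), open redirects, or compromised legitimate sites, which is exactly why the safest habit remains typing the address yourself rather than trusting any emailed link.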
Main source: Forbes.com – Dec 25, 2024