
The ultimate dual-use tool for cybersecurity

Sponsored Feature Artificial intelligence: saviour for cyber defenders, or shiny new toy for online thieves? As with most things in tech, the answer is a bit of both.

AI is the latest and most powerful example of a common technology trope: the dual-use tool. For decades, tools from password crackers to Metasploit have had a light and a dark side. Penetration testers have used them for good, highlighting holes in systems that admins can then patch. But cyber criminals - from script kiddies to nation-state intruders - also use the same tools for their own nefarious ends.

Similarly, AI offers cyber defenders the chance to further automate threat detection, accelerate incident response, and generally make life harder for attackers. But those same black hats are all too happy to scale up attacks in multiple ways with the help of AI.

The rise of AI-enhanced cyber attacks

AI is a Swiss Army knife for the modern cyber crook, especially with the arrival of generative AI (GenAI) powered by technologies such as large language models (LLMs) and generative adversarial networks. CISOs are rightfully worried about this relatively new tech. Proofpoint's 2024 Voice of the CISO report found 54 percent of CISOs globally are concerned about the security risks posed by LLMs, and with good reason. GenAI opens up plenty of new possibilities for cyber criminals to create more accurate, targeted malicious content.

New tools are emerging that can create fraudulent emails indistinguishable from legitimate ones. These tools, such as WormGPT, ignore the ethical guardrails built into mainstream LLMs like those behind ChatGPT and Claude. Instead, they produce convincing emails that can form the basis of a business email compromise (BEC) attack.

"Ultimately, these tools are enabling the attackers to craft better, more convincing phishing emails, translating them into ever more languages, targeting more potential victims across the globe," warns Adenike Cosgrove, VP of cybersecurity strategy at cybersecurity vendor Proofpoint.

These automated phishing mails are getting better and better (if you're a cyber criminal) or worse and worse (if you're a defender tasked with spotting and blocking them). Malicious text produced using LLMs is so effective that in a test by Singapore's Government Technology Agency, more users clicked on links in AI-generated phishing emails than on links in manually written ones. And that was in 2021.

While criminals aren't shifting their malicious online operations entirely to AI, the technology is helping them refine their phishing campaigns, enabling them to focus on both quality and quantity at the same time. Proofpoint's 2024 State of the Phish report found 71 percent of organizations experienced at least one successful phishing attack in 2023.

That figure is down from 84 percent in 2022, but the negative consequences associated with these attacks have soared: a 144 percent increase in reports of financial penalties such as regulatory fines, and a 50 percent rise in reports of reputational damage.

GenAI takes the work out of writing hyper-personalized messages that sound like they're coming from your boss. That's especially useful for BEC scammers who siphon huge amounts of cash from institutional victims by impersonating customers or senior execs. This promises to exacerbate an already growing problem; 2023 saw Proofpoint detect and block an average of 66 million BEC attacks each month.

This goes beyond simple text creation for crafting ultra-convincing phishing emails. GenAI is also the foundation for the kinds of deepfake audio and video that are already powering next-level BECs. Five years ago, scammers used audio deepfake technology to impersonate a senior executive at a UK energy company, resulting in the theft of €220,000. There have been plenty more such attacks since, with even greater financial loss.

Criminals have also used AI to create video impersonations, enabling them to scam targets in video calls. In early 2024, for example, two UK companies were duped out of HK$4.2m in total after scammers used video deepfakes to impersonate their chief financial officers during Zoom calls. These attacks are so potentially damaging that the NSA, FBI and the Department of Homeland Security's CISA jointly warned about them last year.

Fighting fire with (artificial) fire

It's not all doom and gloom. As a dual-use technology, AI can be used for good, empowering defenders with advanced threat detection and response capabilities. The technology excels at doing what only humans could previously do, but at scale. As AI allows cyber criminals to launch attacks in greater volume, security solutions with integrated AI will become a critical means of defence for security teams that cannot grow their headcount fast enough to ride this digital tide.

"For smaller teams that are defending large global organizations, humans alone can no longer scale to sufficiently secure these enterprise level attack surfaces that are ever expanding," says Cosgrove. "This is where AI and machine learning starts to come in, leveraging these new controls that complement robust cybersecurity strategies."

Vendors like Proofpoint are doing just that. The company is integrating AI into its human-centric security solutions to stop inappropriate information making its way out of its clients' networks. Adaptive Email DLP uses AI to detect and block misdirected emails and sensitive data exfiltration in real time. It's like having a really fast intern with attention to detail checking every email before it goes out.
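
Proofpoint doesn't publish the internals of Adaptive Email DLP, but the basic idea of catching a misdirected email can be sketched simply: compare an outgoing message's recipients against the sender's historical correspondents and flag addresses the sender has rarely mailed, especially look-alike domains. The sketch below is a minimal, hypothetical illustration of that pattern; the function names, thresholds, and scoring are assumptions, not the product's actual logic.

```python
# Hypothetical sketch of misdirected-email detection, not Proofpoint's actual
# implementation: flag recipients a sender has rarely emailed, and recipient
# domains that look suspiciously similar to ones the sender mails often.
from collections import Counter
from difflib import SequenceMatcher

def recipient_anomalies(sender_history, outgoing_recipients,
                        min_count=3, similarity_threshold=0.8):
    """sender_history: list of past recipient addresses for this sender."""
    counts = Counter(addr.lower() for addr in sender_history)
    known_domains = {addr.split("@")[1] for addr in counts}
    flagged = []
    for addr in outgoing_recipients:
        addr = addr.lower()
        if counts[addr] >= min_count:
            continue  # a well-established correspondent, nothing to flag
        domain = addr.split("@")[1]
        # A new domain that closely resembles a familiar one is the classic
        # misdirected / look-alike pattern (e.g. examp1e.com vs example.com).
        lookalike = any(
            d != domain and
            SequenceMatcher(None, d, domain).ratio() >= similarity_threshold
            for d in known_domains
        )
        flagged.append({"recipient": addr,
                        "seen_before": counts[addr],
                        "lookalike_domain": lookalike})
    return flagged

# Example: the sender usually mails colleagues at example.com; a one-character
# typo domain gets flagged before the message leaves the organisation.
history = ["alice@example.com"] * 10 + ["bob@example.com"] * 5
print(recipient_anomalies(history, ["alice@examp1e.com", "bob@example.com"]))
```

A production system would of course combine many more signals and learn per-sender behaviour rather than relying on hard-coded thresholds, but the principle is the same: the model learns who normally talks to whom, and anything outside that pattern gets a second look.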

The company also uses AI to stop digital toxins reaching its clients via email. AI algorithms in its Proofpoint Targeted Attack Protection (TAP) service detect and analyse threats before they reach user inboxes. This works with Proofpoint Threat Response Auto-Pull (TRAP), another service that uses AI to analyse emails after delivery and quarantine any that turn out to be malicious.
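
The precise mechanics of those services are proprietary, but the pre-delivery/post-delivery split they describe is a common pattern: score a message before it reaches the inbox, then keep re-checking delivered mail against updated intelligence and pull anything whose verdict changes, for instance a URL that is weaponised only after delivery. The following sketch is a hypothetical illustration of that flow, assuming a simple reputation lookup; none of the names correspond to Proofpoint APIs.

```python
# Hypothetical illustration of a pre-/post-delivery email defence loop, not
# Proofpoint's TAP/TRAP code: messages are checked before delivery, then
# periodically re-scored and quarantined if the verdict turns malicious.
from dataclasses import dataclass

@dataclass
class Message:
    message_id: str
    url: str
    delivered: bool = False
    quarantined: bool = False

def pre_delivery_scan(msg, reputation):
    """Block delivery outright if the URL is already known to be bad."""
    return reputation.get(msg.url, "unknown") != "malicious"

def post_delivery_sweep(mailbox, reputation):
    """Re-check delivered mail against updated intelligence and auto-pull hits."""
    for msg in mailbox:
        if msg.delivered and not msg.quarantined:
            if reputation.get(msg.url) == "malicious":
                msg.quarantined = True   # pulled back out of the user's inbox
    return [m for m in mailbox if m.quarantined]

reputation = {"https://payload.example/invoice": "unknown"}
msg = Message("42", "https://payload.example/invoice")
if pre_delivery_scan(msg, reputation):
    msg.delivered = True                                  # looked clean at delivery time

reputation["https://payload.example/invoice"] = "malicious"   # later intel update
print(post_delivery_sweep([msg], reputation))                  # message gets pulled
```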

AI and ML solutions tend to require powerful detection models and a high-fidelity data pipeline to yield accurate detection rates, operational efficiencies, and automated protection. Cosgrove says that Proofpoint analyses more human interactions than any other cybersecurity company, giving an unparalleled view of the tactics, techniques and procedures threat actors use to attack people and compromise organisations.

"The data that we are training our AI machine learning models is based on telemetry from the 230,000 global enterprises and small businesses that we protect," she says, pointing out that this telemetry comes from the activities of thousands of individuals at those customer sites. "We're training those models with 2.6 billion emails, 49 billion URLs, 1.9 billion attachments every day."

Stopping humans doing what humans do

How do companies get hit in phishing attacks in the first place? Simple: humans remain the weakest link. Even after countless sessions of relentless cybersecurity awareness finger wagging, someone will still click on attachments they shouldn't, and use their dog's name for all of their passwords.

In reality, the culprit isn't just one person. According to Proofpoint's 2024 State of the Phish report, 71 percent of users admitted to taking risky actions, and 96 percent of them knew they were doing so. That's why a whopping 63 percent of CISOs consider users with access to critical data to be their top cybersecurity risk, according to the company's 2024 Voice of the CISO report. To borrow from Sartre, hell is other people who don't follow corporate cybersecurity policy.

Proofpoint's AI goes beyond simple signature scanning to sift patterns from the metadata and content associated with user email. This enables it to build up a picture of human behaviour.

"The reason why we developed a behavioural AI engine and why it's critical to integrate into your email security controls is that it is analysing patterns of communication," Cosgrove says. That's especially critical when there are few other technical signals to go on. "Often what we see in email fraud or business email compromise attacks is that it's simple email with just text. There's no attachment, there's no payload, there's no link or URL to sandbox."

AI tools like Proofpoint's make the kind of nuanced decisions, based on subtle signals, that previously only humans could make, and they do it at scale. As they mimic human strengths in areas such as judgement, they're also becoming our best shot at shoring up the weaknesses that get us into digital trouble: distraction, impatience, and a lack of attention to detail.

The key to staying ahead of cyber attackers will be using tools like these to create another layer of defence against adversaries who will increasingly fold AI into their own arsenals. Other layers include effective cyber hygiene in areas ranging from change management through to endpoint monitoring, effective data backups, and more engaging cybersecurity awareness training to minimise the likelihood of user error in the first place.

Cybersecurity has always been a cat and mouse game between attackers and defenders, and AI is the latest evolution in that struggle. Defenders must develop and deploy tools that keep modern businesses one step ahead in the AI arms race - because if we don't, our adversaries will gain a potentially devastating advantage.

Sponsored by Proofpoint.

Source: theregister.com
