TelcoNews Australia - Telecommunications news for ICT decision-makers

Exclusive: Google on AI-powered attacks & cyber threats in Australia

Mon, 10th Nov 2025

Australia's cyber threat landscape is evolving rapidly, with artificial intelligence (AI) now at the core of both criminal operations and corporate defence, according to Andrew Aston, Manager of Mandiant Intelligence at Google Threat Intelligence Group.

What is GTIG?

Aston's team, following Google's acquisition of Mandiant, now operates within the broader Google Threat Intelligence Group. The remit covers supporting external customers in industry and government as well as internal Google priorities.

Deepfake threat

The advancement of AI-generated media is now a prominent cyber risk. "Other threat actors are starting to take notice of that and are using AI to help, you know, get hyper-realistic voice and deepfakes created. And what we've seen over the past year is that the media that is required to create hyper-realistic voice and imagery is getting less and less, and the imagery and the voices are getting better and better," said Aston.

"The growth in AI... is going to dominate a lot of what is going to happen in 2026 and threat actors will follow those innovations and those developments," said Aston.

AI has drastically reduced the barrier for creating convincing forgeries, requiring less data and making previously unlikely targets vulnerable. Aston said, "Individuals who may only be providing, you know, 5, 10, 20 minutes of audio, they're able to sample that and produce a realistically sounding voice."

Social engineering

Attackers' adoption of new technologies means traditional network protections are increasingly bypassed in favour of direct, sophisticated social engineering attacks. "They're simply able to... engage in social engineering to get someone on the other end of the call to carry out an action to their benefit," said Aston.

This includes scams such as business email compromise, fraudulent bank transfer requests, and more complex infiltration attempts. "There needs to be more done around identifying phishing and vishing activities, because they can achieve that success," he added.

Insider threats

Nation-state backed operations, particularly originating from North Korea, have penetrated Australian organisations through fraudulent job applications supported by deepfake technologies. "This is a nation state, so this is a threat actor who literally pays people to do this: they get up in the morning and they do badness," said Aston, describing how these actors combine resume manipulation with legitimate-seeming video personas to secure roles and enable reconnaissance and subsequent attacks.

Once embedded, these operatives often perform real work to build trust while mapping out systems and identifying valuable assets. "They're facilitating a separate attack by a completely different threat group, who are then able to get into the network, move very quickly to those areas of intellectual property, or, you know, the crown jewels for that organisation, and extract it."

Consistent exposure

While high-profile cases sometimes dominate headlines, the overall profile of threat actors targeting Australia remains stable. "It's a very diverse group. So I wouldn't say there's been a shift or a change. I think the attack landscape is pretty consistent," said Aston.

Australia's position as a mature, English-speaking, technology-driven market makes it an inviting but not uniquely vulnerable target, Aston explained, with opportunistic attacks often exploiting generic vulnerabilities rather than precise targeting. "If they're able to get in via a vulnerability... they're going out and scraping the internet, looking for openings. So a lot of the time, these threat actors don't know who they're getting in[to] until they're in there and they start poking around."

Nation-state espionage

Aston highlighted growing nation-state interest driven by Australia's defence acquisitions and its standing in the Five Eyes intelligence alliance. Recent and upcoming defence procurements have sharpened the focus of regional actors on cyber espionage targeting government, suppliers, and critical infrastructure. "There's a significant impending investment in high tech weaponry that Australia is going into, and so nation state actors are interested in that," he said.

He pointed to China's commercial and strategic interests as a driving force, given Australia's status as a major trading partner and a hub for critical minerals and technology. "We've seen for many years, that China has an interest, as any country does, in where they are doing business and... Chinese espionage has links to Chinese industry."

AI operationalisation

Recent months have seen a shift from automation and productivity gains, such as using AI for drafting phishing emails, to "novel incidents" where malware leverages AI to rewrite itself and evade detection. "Malware... is calling out to AI to change itself to make it harder to detect," said Aston. Bypassing the guardrails of commercial AI tools is a particular focus of threat actors.

"The AI is not able to detect that. So they're getting much, much better at hiding those prompts inside what appears to be legitimate activity," said Aston.

Lowering the bar

These new tools are making sophisticated attacks accessible to a wider pool of criminals. "It's lowering the bar. It's 100% lowering the bar. AI is really contributing to threat actors who may be able to specialise in initial access, who may be able to specialise in one part of the cyber attack kill chain, to really upskill in those areas."

New as-a-service offerings for malicious AI are proliferating, with Aston estimating eight major malicious tools being actively marketed, at prices starting in the hundreds of USD. This has led to more convincing phishing in multiple languages and more widespread social engineering against previously less-targeted populations.

Defensive AI

On the defence side, AI is also becoming essential. Aston sees this as a necessary shift, not a replacement of human analysts but an enabler: "No one hires you and says you're going to go through these logs. They're going to hire you to identify malicious activity or activity of concern and investigate. That's what you're hired to do. So AI is going to allow organisations to have their security staff focus on those jobs that they are paid to do."

Regulatory outlook

Debate continues over global regulation of AI and cybersecurity. "I think rules, as opposed to laws or agreements, are probably more effective, because you can't really stop [threat actors]," said Aston. Internally, companies may be most effective if they focus on their own standards and technical controls, such as watermarking AI images or curbing shadow AI use within their networks.

"Google is at the forefront of making sure that when you are making this content, they're aware that people need to know, or people want to know that it is AI generated," said Aston.
