A deeper dive into our Security Principal’s commentary on AI-fuelled scams and the new rules of cyber resilience

Earlier this week, I had the chance to contribute to an article in the Australian Financial Review about how cybercriminals are using AI to scale and sharpen their attacks. It was a good high-level discussion, but space in a piece like that is limited, and nuance tends to get compressed into quotes.

So, for anyone who caught the article (or didn’t), I wanted to take the opportunity to expand on what I said (and what I meant) when we spoke about the rise of AI-powered cybercrime, the growing attack surface, and the role of humans in defending it.

AI as a Force Multiplier for Cybercriminals

What we’re seeing right now isn’t a new class of attacks; it’s a massive upgrade to existing ones. Generative AI doesn’t invent new exploits. It just makes the old ones cheaper, faster, and harder to detect.

Attackers are now using tools like FraudGPT to automate reconnaissance, generate phishing campaigns, and even create short malware payloads that evade traditional detection. These tools can scrape platforms like LinkedIn or GitHub, match employee names to internal systems, mimic your comms style, and produce a credible pretext, all in one workflow.

AI is now able to automate the nitty-gritty, tedious work of deciding whether your organisation is worth targeting. That work used to act as a barrier. Not anymore.

From Speed to Friction: A Change in Security Culture

One of the biggest shifts we’re seeing inside large enterprises is a cultural one: the move away from speed-at-all-costs.

Processes that were once built around agility, like procurement approvals or finance workflows, are now being re-evaluated through a risk lens. In short: we’re reintroducing friction, and it’s working.

Dual approvals, step-up authentication, behavioural anomaly detection: these aren’t just “security add-ons.” They’re becoming structural parts of how smart organisations do business. There’s a growing recognition that security is a form of operational integrity.
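To make the dual-approval idea concrete, here’s a minimal sketch in Python. The threshold, names, and `Payment` structure are illustrative assumptions, not a reference implementation; the point is simply that a large payment cannot move on one person’s say-so.

```python
from dataclasses import dataclass, field

# Illustrative threshold (assumed): payments at or above this need two approvers.
DUAL_APPROVAL_THRESHOLD = 10_000

@dataclass
class Payment:
    amount: float
    payee: str
    approvals: set = field(default_factory=set)

def approve(payment: Payment, approver: str) -> None:
    """Record an approval; using a set means the same person can't count twice."""
    payment.approvals.add(approver)

def can_release(payment: Payment) -> bool:
    """Release funds only once enough distinct people have signed off."""
    required = 2 if payment.amount >= DUAL_APPROVAL_THRESHOLD else 1
    return len(payment.approvals) >= required

# A large invoice stays blocked until a second person approves it.
invoice = Payment(amount=50_000, payee="Acme Pty Ltd")
approve(invoice, "alice")
assert not can_release(invoice)  # deliberate friction: one approval isn't enough
approve(invoice, "bob")
assert can_release(invoice)
```

That second assertion is the cultural shift in miniature: the system is designed to be slower than any one motivated (or deceived) employee.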

Visibility First: The Technical Baseline

On the technical side, visibility has become the new baseline. With cloud-native infrastructure, third-party SaaS, and remote work environments, most organisations now have a threat surface that’s fragmented by design.

If you can’t see your infrastructure clearly, or know who’s accessing what and from where, you’re basically playing security on hard mode.

That’s why we’re seeing growing investment in AI-driven tooling that can map your environment, identify misconfigurations in real time, and detect unusual behaviour before it becomes an incident. These platforms give defenders the context they need to respond before the damage is done.
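To give a flavour of the kind of check this tooling automates, here’s a minimal sketch using boto3 (the AWS SDK for Python) that flags S3 buckets missing a public-access block. It’s a toy, not any vendor’s product: it assumes AWS credentials are already configured and inspects a single resource type, where real platforms run checks like this continuously across your whole estate.

```python
import boto3
from botocore.exceptions import ClientError

# Toy misconfiguration scan: flag S3 buckets whose public-access block
# is missing or only partially enabled.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)
        settings = config["PublicAccessBlockConfiguration"]
        if not all(settings.values()):
            print(f"[WARN]  {name}: public-access block only partially enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[ALERT] {name}: no public-access block configured at all")
        else:
            raise
```

None of this is sophisticated, and that’s the point: visibility mostly means asking simple questions of your environment constantly, rather than once a year.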

Humans Are Still the Point of Failure—and the Point of Control

All of that said, technology doesn’t solve the human problem.

AI-generated phishing, deepfake impersonation, and pretexting attacks are all still fundamentally about tricking a human being into making the wrong decision. And because the content looks more credible and lands more precisely than ever before, the usual red flags, like clumsy grammar or a generic greeting, don’t always apply.

That’s why education, culture, and policy still matter. You can’t automate trust. But you can train people to pause, ask questions, and escalate when something doesn’t feel right.

Security culture isn’t just a checkbox; it’s your last line of defence.

What Comes Next

The bottom line is this: AI is giving threat actors scale. And while defenders are slowly starting to catch up, the old assumptions no longer hold.

The idea that only large enterprises are worth targeting? Gone.
The idea that attacks will come through a firewall? Outdated.
The idea that urgency means action? Actively dangerous.

At Frame, we’re helping organisations not just deploy tools, but rethink processes, culture, and risk strategy in a world where deception operates at scale. If that sounds like something your team’s been thinking about, let’s talk.