AVAR 2025: Asia’s Digital Velocity vs. the Reality of AI Security


If technology was born in Silicon Valley, it is growing up in Asia.


The countries I flew over getting to Kuala Lumpur represent roughly 9 TIMES the population of the U.S. Everyone here has technology, seemingly almost surgically embedded in every facet of society. You can hardly even get into the country without a smartphone anymore (via the Malaysia Digital Arrival Card, or MDAC). Threat actors have noticed the market opportunity.


I’m sitting in a steamy Starbucks across the street from the rapidly approaching AVAR conference in Kuala Lumpur. While only a few people are inside the coffee shop, outside there’s a bustling street life: a half-dozen street vendors — entrepreneurs running open-air stalls — accepting mobile payments via QR codes with a fluidity that puts many Western retailers to shame. I doubt their owners are security experts.


They’re not focused on security; they have other things to worry about, like feeding their families and surviving the next day. They won’t suddenly adopt MFA or post-quantum cryptography just because it’s interesting. So it’s up to us to make the technology not only secure, but so easily usable that it just transparently “works”.

The AI Hype vs. Reality


This brings us to the central theme of this year’s AVAR: artificial intelligence. There’s a whole host of talks here weighing AI “magic” against reality, and the AI marketing frenzy these days feels more like an irrational mood swing than anything concrete. Yes, AI security IS something concrete — but you wouldn’t know it from the flood of spam about what AI is SUPPOSED to do.

We want AI to handle the scale of security that humans no longer can. We want it to protect that street vendor’s mobile transaction without them needing a degree in cybersecurity. But currently, the industry is flooded with “AI-powered” claims that are dubious at best.


Will AI security actually work?


That all depends on how each security vendor implements AI in its products this year. Their technologies are, after all, the ones that will be charged with protecting your digital life in the years to come. Yet many vendors are still trying to precisely define AI’s scope and implementation.


That’s why some of the best conversations we’ve had recently surround exactly that: how to measure whether AI is (and will remain) secure. Is it adversarial-resistant? Is it effective? Is it secure by design? We’ve been working to break down barriers to have these kinds of open conversations with security vendors, and it’s been working. By understanding how the testing industry will approach this, your company can have meaningful conversations about what will – or won’t – work.


As an organization, we’re not trying to find a testing “gotcha” with vendors (to quote what one vendor recently told their internal team on a call). We’re trying to test from the standpoint of real end-user organizations that are planning to use your technology (or not). Will they be secure if they just read the instructions and turn the thing on, or do they need a whole red team to vet the solution?


Can they explain the budget item to their exec team in simple terms and convince them that it works – that they’re getting something besides hot air for their spend? Those are reasonable questions we hope to help answer by putting real methodology behind our AI security testing this year. We hope it will push the industry to build more secure apps – apps that the street vendors here selling fruit derivatives (with perhaps hit-and-miss sanitation) can use to at least make sure the transaction is safe and secure. You might want to wash the fruit first, though.
