At Blackhat, you couldn't walk ten feet without running into overly optimistic promises that AI would cure basically everything in security. During an AI summit there, dozens of would-be AI security contenders, sometimes armed with little more than a PhD and a burning pile of cash they'd drummed up, claimed they had all but solved the important parts. But is AI security really ready yet?
Certainly, trusting your core company secrets to an AI with only a skin-deep veneer of security is a terrible idea, especially since precious few vendors' booth staff could even articulate the difference between excessive agency and model poisoning in an AI security context, let alone implement a sound methodology to protect against either.
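For readers fuzzy on that distinction: model poisoning corrupts a model upstream, through tampered training data or weights, while excessive agency is a deployment flaw, an agent granted more tool access than its task needs. The sketch below is purely illustrative, with invented names, and shows only the agency side of that divide:

```python
# Hypothetical sketch only: not any vendor's product or methodology.
# "Excessive agency" lives in deployment: the agent's tool, not the model,
# decides how much damage a manipulated request can do.
import shlex
import subprocess

ALLOWED = {"whois", "nslookup"}  # invented allow-list, for illustration

def unscoped_tool(command: str) -> str:
    """Excessive agency: runs whatever string the model produces."""
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

def scoped_tool(command: str) -> str:
    """Reduced agency: allow-listed binaries only, no shell interpolation."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"tool call rejected: {argv[:1]}")
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

Model poisoning, by contrast, happens before deployment, and no amount of tool scoping fixes it; the two problems call for entirely different tests.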
On the subject of testing, few vendors really know how best to secure AI systems, let alone multi-agent systems (MAS). But some do, and with them we had many hours of good conversations about how best to test the things their customers actually care about, and how to avoid wasting effort on the things they don't.
Importantly, these conversations inform the AI security methodologies we are baking into our own testing secret sauce for upcoming tests. They will shape how we assess who's doing it right: building secure code through a secure development life cycle, within a security culture, to solve real-world problems.
Thanks to all the companies we spoke with. If we didn't connect with you and you want to share what you believe belongs in an effective AI test scenario, reach out; we'd love to start (or continue) the conversation. The input we get now will influence the next generation of test harnesses and methodologies, and we'd like to build those on a foundation of customer needs and real-world problems.
Maybe by Blackhat next year the industry will be a lot closer to a meaningful, real-world AI security bake-off.
