We found a single booth at RSA that didn’t feature AI. Just one. There may have been others, but AI felt almost like a requirement for a booth presence.
What stood out wasn’t just the presence of AI, it was the lack of specificity. Many vendors could explain what their AI does, but far fewer could explain:
- What it fails to stop
- How it behaves under pressure
- Where it breaks
That leads us to wonder what happens after peak AI saturation at RSA next year. Where do we go from here? A few trends to watch:
- Movement from AI features to AI accountability. After the rush to market flooded the floor with AI features, organizations will have to understand the guardrails required to adopt AI in serious ways. This will include model security (weights, extraction, poisoning), RAG security (data provenance, retrieval poisoning), agentic risk (excessive agency, tool misuse), and output risk (data leakage, hallucination impact), not just low-level prompt-injection hacks.
- The rise of “prove it works.” As AI ecosystems increasingly become targets, expect a surge in AI-tailored threats and adversarial tradecraft. Attackers will probe an ever-expanding attack surface for exploitable weaknesses, and the pace will only accelerate. Vendors will need to prove their defenses are effective.
- Identity. As AI systems expand, so does the number of identities: not just users, but agents, APIs, and services acting autonomously. Securing and governing these non-human identities, and managing the risk they carry, will grow exponentially. The next identity crisis will likely not be human.
- Post-quantum cryptography creeping forward. As quantum computing, that tricky but powerful compute technology, stabilizes and its cost drops, it will begin to break widely used public-key cryptography, necessitating the creation and deployment of post-quantum algorithms. It will also accelerate the rollout of Perfect Forward Secrecy (PFS) schemes, where cryptographically sound keys used just once per session become the norm, as in TLS 1.3. Meanwhile, “harvest now, decrypt later” attacks, in which adversaries stockpile encrypted traffic today to decrypt once quantum machines mature, are already underway. That puts a “shot clock” on upgrading legacy keys, adding to the workload of organizations. The risk isn’t that quantum breaks encryption tomorrow; it’s that today’s data is already being stockpiled for that moment.
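The forward-secrecy idea behind those one-time session keys can be sketched with a toy Diffie-Hellman exchange. This is an illustrative sketch only, not production cryptography: the modulus, generator, and helper names (`ephemeral_keypair`, `session_key`) are invented for the example, and real deployments use vetted groups or elliptic curves, as TLS 1.3 does.

```python
import hashlib
import secrets

# Toy parameters for illustration only: a Mersenne prime modulus and a
# small generator. Real systems use standardized groups or curves.
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    """Fresh secret/public pair, generated per session and then discarded."""
    secret = secrets.randbelow(P - 2) + 1
    return secret, pow(G, secret, P)

def session_key(own_secret, peer_public):
    """Derive a symmetric session key from the shared Diffie-Hellman value."""
    shared = pow(peer_public, own_secret, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).hexdigest()

# Two sessions between the same peers: each draws fresh secrets, so each
# produces a different key. Compromising one session key reveals nothing
# about earlier traffic, which is the point of forward secrecy.
a1, A1 = ephemeral_keypair()
b1, B1 = ephemeral_keypair()
assert session_key(a1, B1) == session_key(b1, A1)  # session 1 agrees

a2, A2 = ephemeral_keypair()
b2, B2 = ephemeral_keypair()
assert session_key(a2, B2) == session_key(b2, A2)  # session 2 agrees
assert session_key(a1, B1) != session_key(a2, B2)  # keys differ per session
```

The contrast with “harvest now, decrypt later” is that a long-lived static key protects every recorded session at once; ephemeral keys limit each recording to a single session’s throwaway secret.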
RSA 2026 was about endless possibilities. RSA 2027 will be about accountability. The question will no longer be: “What can your AI do?” but: “What does it fail to stop—and can you prove it?”
