Veer Yedlapalli is a seasoned cybersecurity executive with 16+ years leading enterprise security at Fortune 500 companies like Grainger and Cummins. As Director of Security Engineering, Product & AI Security at Grainger, he connects security and engineering teams to drive secure, fast-paced digital innovation across a $17B+ business. Veer has built enterprise Product Security programs from the ground up, slashing critical vulnerabilities through DevSecOps while keeping developers moving fast. He's led global IAM/CIAM transformations securing millions of identities, cut OPEX significantly via smart consolidation, and powered secure ERP/CRM/cloud migrations that sped up time-to-market and lifted platform adoption. He's a hands-on pioneer in AI/ML security, creating internal frameworks and controls to protect autonomous agents, LLMs, and agentic workloads—with a focus on identity-first authorization, least-privilege access, and runtime safeguards for safe, rapid AI rollout. A frequent speaker on AI agent authentication, threat evolution, and supply chain security, Veer holds CCSP, CIAM, and other certifications. He believes: "Security isn't about saying no—it's about architecting the secure path to yes."
As organizations deploy AI agents and models to interact directly with external users, risk exposure increases across security, compliance, and trust. These systems often operate without human oversight, raising challenges around bias, prompt injection, data leakage, and unpredictable behaviour. This session focuses on governing public-facing agentic AI across the full lifecycle, with emphasis on pre-deployment testing, adversarial evaluation, and continuous monitoring post-deployment. Panellists also explore the tension between scaling ROI and ensuring these systems remain secure, reliable, and accountable.
The rapid evolution of agentic AI, from single LLM-powered agents to coordinated crews and massive swarms, promises transformative autonomy in domains like supply chain, healthcare, and finance. Frameworks like CrewAI, LangGraph, and Google's ADK, combined with the Model Context Protocol (MCP) standard, enable dynamic agent-to-agent (A2A) collaboration and tool access. Yet this interconnected ecosystem introduces severe cascading risks: a single compromised agent can poison swarms via unsecured MCP calls, leading to data exfiltration, unauthorized actions, or ethical failures.