As the CEO of Bedrock Security, I had the privilege of attending AWS re:Invent 2024 (just as I have every re:Invent that preceded it). I had the chance to meet many enterprises, partners, and thought leaders who are looking to begin, continue, or accelerate the current secular shift of migration to the cloud, and to participate in the next secular shift of generative AI. As with previous re:Invent conferences, this year's event was a hive of activity. The scale and energy of the conference are truly impressive, though its sheer size can make it difficult to experience everything on offer. That said, the vibrancy of the event and the diverse opportunities for learning and networking are undeniable, and I was able to pick up on a number of common threads from participants.
CXO party at TAO with the Bedrock team: Bruno Kurtic, Pranava Adduri, Conor Kelly
Bruno Kurtic (center) with Vladimir Lukic (left) and Vikas Taneja (right) from Boston Consulting Group
AI was the headline topic, but the real focus was data. Attendees highlighted that AI is essentially sophisticated, often black-box software processing vast amounts of data for training or inference, which raises critical concerns around security, auditability, and control. Once a model is trained, it becomes nearly impossible to know what data underpins it or what could be exposed, and RAG implementations often lack the ability to pass user entitlements through to retrieval. This is a seismic change in data security approach, one that forces everyone to put data at the center of their security architecture. The rush to implement generative AI solutions, fueled by FOMO, is now slowing as architects, security teams, and governance leaders grapple with these challenges. The consensus? You can't control AI itself; you can only control the data feeding it.
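To make the entitlement point concrete, here is a minimal sketch of entitlement-aware retrieval in a RAG pipeline. The document store, group model, and field names are illustrative assumptions, not any specific product's API; the idea is simply that access checks happen before retrieved content ever reaches the model.

```python
# Hypothetical sketch: entitlement-aware retrieval for a RAG pipeline.
# The document shape, group model, and field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_groups: set = field(default_factory=set)  # groups entitled to read this chunk

@dataclass
class User:
    user_id: str
    groups: set

def retrieve(query: str, candidates: list[Document], user: User, top_k: int = 5) -> list[Document]:
    """Filter candidate chunks by the caller's entitlements *before* they reach the LLM."""
    authorized = [d for d in candidates if d.allowed_groups & user.groups]
    # Similarity ranking is elided; in practice you would score candidates first,
    # then apply (or push down) the entitlement filter.
    return authorized[:top_k]

# Usage: a user without the "finance" group never sees the forecast chunk.
docs = [
    Document("Q3 revenue forecast...", allowed_groups={"finance"}),
    Document("Public product FAQ...", allowed_groups={"everyone"}),
]
analyst = User("u123", groups={"everyone"})
context = retrieve("What is our Q3 forecast?", docs, analyst)
# Only the public FAQ is passed into the prompt.
```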
This starts with comprehensive visibility into five key questions:
What’s in your data?
Who can access it?
Who is accessing it?
Is the information up-to-date and comprehensive?
Can data context be effectively used by necessary tools and processes, including AI?
Responsible AI adoption hinges on solving these visibility and security challenges, and enterprises must adapt quickly to avoid being overwhelmed by AI's complexity.
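One way to picture that visibility is as a per-asset inventory record that answers each of the questions above. The sketch below is a hypothetical data structure; the field names and classification labels are assumptions, not a prescribed schema.

```python
# Hypothetical sketch of a per-asset inventory record mapping to the
# visibility questions above. Field names and labels are assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DataAssetRecord:
    location: str                    # where the data lives (bucket, table, share)
    classifications: list[str]       # what's in it (PII, PHI, source code, ...)
    entitled_principals: list[str]   # who *can* access it
    recent_accessors: list[str]      # who *is* accessing it
    last_scanned: datetime           # is the inventory up to date and comprehensive?
    machine_readable: bool           # can downstream tools (including AI) consume this context?

record = DataAssetRecord(
    location="s3://example-bucket/hr-exports/",
    classifications=["PII"],
    entitled_principals=["role/hr-analytics"],
    recent_accessors=["role/hr-analytics", "role/data-science"],  # drift worth investigating
    last_scanned=datetime.now(timezone.utc),
    machine_readable=True,
)
```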
Pranava Adduri, our CTO at Bedrock, also attended the event. Pranava notes that agentic AI, where autonomous agents interact and collaborate across various systems and workflows, was another hot topic. This emerging approach leverages multiple products and data sources to streamline complex processes, but it also significantly heightens data security challenges. By its nature, agentic AI involves data being accessed, transformed, and shared autonomously, which magnifies the risk of sensitive data being improperly combined or leaked to unauthorized users.
As Pranava explained it, the complexity of these workflows makes traditional governance models insufficient. Without clear visibility into which teams are deploying AI models, the data they're using, and its sensitivity, enterprises face significant challenges in ensuring models are both fair and secure. Conversations emphasized that responsible AI starts with data governance. Strong controls around data access and comprehensive tracking of its use are essential to mitigate risks and unlock the full potential of agentic AI.
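As a rough illustration of what such a control can look like, here is a minimal sketch of a guardrail that checks data sensitivity and the requesting agent's role before a tool call is allowed to run. The policy table, sensitivity labels, and tool signature are all assumptions for the sake of the example.

```python
# Hypothetical sketch: a guardrail around an agent's tool calls.
# Sensitivity labels, the policy table, and the tool interface are assumptions.

SENSITIVITY = {  # labels assigned by a data inventory / DSPM scan
    "crm.contacts": "restricted",
    "docs.public_faq": "public",
}

POLICY = {  # which agent roles may read which sensitivity tiers
    "support-agent": {"public"},
    "finance-agent": {"public", "restricted"},
}

def guarded_tool_call(agent_role: str, dataset: str, tool, *args, **kwargs):
    """Deny the call if the agent's role isn't entitled to the dataset's tier."""
    tier = SENSITIVITY.get(dataset, "restricted")  # default-deny unknown data
    if tier not in POLICY.get(agent_role, set()):
        raise PermissionError(f"{agent_role} may not read {dataset} ({tier})")
    return tool(dataset, *args, **kwargs)

# Usage: a support agent asking for CRM contacts is blocked before any data moves.
def read_dataset(dataset: str) -> str:
    return f"...rows from {dataset}..."

try:
    guarded_tool_call("support-agent", "crm.contacts", read_dataset)
except PermissionError as e:
    print("blocked:", e)
```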
Data security was a dominant theme throughout the event. As AI adoption accelerates, the need for robust data management and security solutions has never been greater. As my colleagues Conor Kelly and Daniel Weaver, who also attended the event, point out, this urgency is driving the rise of Data Security Posture Management (DSPM), a concept that's rapidly gaining market recognition.
Pranava notes that VARs are also seeing an opportunity to help their customers rise to the governance challenges imposed by the acceleration toward AI, and that this is increasingly a top-of-mind concern for security, governance, and legal teams alike.
Concurrently, VARs recognize that the headcount needed for these tasks is increasing and that now, more than ever, they have an opportunity to introduce solutions that save their customers time rather than creating more work. Enterprises are looking to migrate away from traditional data security models that rely on rules and manual interventions and instead leverage the latest in AI to learn which data and data usage are most important. When dealing with dynamic data patterns like agentic workflows, this is no longer optional but required.
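To contrast a static rule with a learned view of data usage, here is a minimal sketch that builds a per-principal baseline of access volume and flags deviations. The event shape, statistics, and threshold are illustrative assumptions, not any vendor's actual method.

```python
# Hypothetical sketch: replace a static rule ("alert on any access to bucket X")
# with a learned per-principal baseline of access volume. All details are assumptions.

from collections import defaultdict
from statistics import mean, pstdev

def build_baseline(events):
    """events: list of (principal, bytes_read_per_day) observations."""
    per_principal = defaultdict(list)
    for principal, volume in events:
        per_principal[principal].append(volume)
    return {p: (mean(v), pstdev(v)) for p, v in per_principal.items()}

def is_anomalous(baseline, principal, todays_volume, z_threshold=3.0):
    mu, sigma = baseline.get(principal, (0.0, 0.0))
    if sigma == 0:
        return todays_volume > mu  # no variance observed; any increase is notable
    return (todays_volume - mu) / sigma > z_threshold

history = [("role/data-science", 5_000_000), ("role/data-science", 6_000_000),
           ("role/data-science", 5_500_000)]
baseline = build_baseline(history)
print(is_anomalous(baseline, "role/data-science", 90_000_000))  # True: unusual bulk read
```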
The takeaway is clear: AI's secular shift has made effective, accurate, efficient, and scalable data security and management not just a technical necessity but a critical business driver. Enterprises and channel partners alike are recognizing the value of innovative DSPM solutions to address these challenges head-on.
Those were my takeaways - what were yours?