AI is rapidly shifting from a technology organizations experiment with to one they’re expected to use.
In many businesses, it’s already part of day-to-day operations, built into the tools employees depend on and embedded within background systems.
What sets this moment apart isn’t only the speed at which AI is being adopted, but the extent to which it’s becoming fundamental to how employees work.
There’s plenty of reason for optimism. A recent KPMG study found that among the 85% of organizations already integrating AI into their operations, productivity has increased by an average of 35% following the introduction of AI agents into the workforce.
Teams are unlocking new opportunities to accelerate workflows, automate repetitive tasks, and surface insights that previously took far longer to uncover.
However, as AI becomes more deeply embedded across the enterprise, organizations must take a more intentional approach to its management.
This is especially true when it comes to keeping identities secure, where decisions made today will determine how securely AI can scale in the future.
Securing the AI workforce
So far, most of the conversation has focused on humans using AI. Assistants and copilots that sit alongside employees have dominated headlines, and for good reason. They are changing how people write content, develop code, analyze data, and communicate with others. But that is only part of the story.
A quieter shift is underway where AI is no longer just supporting the workforce, but becoming a distinct part of it. We’re in the early stages of autonomous AI agents taking on tasks independently, accessing applications, pulling data, and making decisions with little or no human involvement.
While it is tempting to see them simply as the next evolution of assistants, they are something fundamentally different. These agents operate as independent actors inside the environment and should be using their own credentials and permissions, which means they behave far more like digital employees than tools.
This shift matters because most organizations still treat these agents like software, even as they take on responsibilities that look a lot like human work. For example, many AI agents take the shortcut of reusing the existing credentials and permissions of the human who invokes them, rather than operating under an identity of their own.
Why identity systems are playing catch up
For decades, identity and access management (IAM) has been designed around a simple assumption: the primary user is human.
Even when organizations extended IAM to cover service accounts and machine identities, those identities were tied to predictable systems performing narrow, repetitive tasks.
Autonomous agents disrupt that model. They are adaptive, work through tasks in flexible and non-uniform ways, operate at machine speed, and may touch far more systems than any single employee ever would.
Despite this, many environments are trying to squeeze them into frameworks that were never built for independent, decision-making digital workers.
A 2025 data and AI security research report found that only 16% of organizations treat AI as its own identity class with dedicated policies.
The result is a growing gap between how these agents behave and how their identities are managed, creating blind spots that attackers are ready to exploit.
There is no HR system for AI
That gap begins the moment an organization tries to onboard an autonomous agent. When a new employee joins, HR software triggers identity creation, roles are assigned, access is provisioned, and ownership is clear. There is a record of who the person is, what they are responsible for, and who manages them.
Autonomous agents arrive with none of that structure. They are created by developers, embedded into workflows, or introduced through new platforms, often without any central visibility or consistent process. There is no HR system for AI, no default manager, and no guarantee that anyone is accountable for what that agent can access or do.
This is where identity governance must evolve. Organizations need to discover these agents, register them, and give them distinct identities tied to clear business ownership.
Every autonomous agent should have a clear owner who understands why it exists, what it is meant to do, and which systems it should touch. Without that foundation, it becomes difficult to answer even basic questions about how many agents exist, who owns them, and whether their access is still justified.
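To make the idea concrete, here is a minimal sketch of what such a registration step might look like. The record fields and the `register_agent` function are illustrative assumptions, not a standard schema; the point is that onboarding refuses any agent that lacks a stated purpose, a named owner, or a defined set of systems it may touch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical registry record; field names are illustrative, not a standard.
@dataclass
class AgentIdentity:
    agent_id: str               # a distinct identity, never a borrowed human account
    purpose: str                # why the agent exists
    owner: str                  # the accountable person or team
    allowed_systems: list[str]  # the systems this agent is meant to touch
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

registry: dict[str, AgentIdentity] = {}

def register_agent(agent: AgentIdentity) -> None:
    """Refuse to onboard an agent without clear ownership and purpose."""
    if not agent.owner or not agent.purpose:
        raise ValueError("every agent needs an accountable owner and a purpose")
    registry[agent.agent_id] = agent

register_agent(AgentIdentity(
    agent_id="invoice-reconciler-01",
    purpose="Match supplier invoices to purchase orders",
    owner="finance-ops@example.com",
    allowed_systems=["erp", "invoice-inbox"],
))
```

With a registry like this in place, the basic questions the article raises, how many agents exist, who owns them, and what they can reach, become simple lookups rather than investigations.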
According to Deloitte, nearly 3 in 4 companies plan to deploy agentic AI in the next two years, yet just 1 in 5 has a mature governance model for these autonomous agents. These challenges are only set to expand.
The challenge of governance at machine speed
Onboarding is only the beginning. Once agents are in the environment, the real difficulty lies in governing what they can do and when. It’s easy to focus on securing models or code, but governance is ultimately about managing identities and privileges in line with business intent.
If an agent can act on behalf of the organization, its identity should be governed with the same rigor as a human employee. In many cases, it should be governed even more tightly, as AI agents operate autonomously, continuously, and across trust boundaries at machine speed and scale. That makes over-privileged access particularly dangerous.
AI has fundamentally altered the identity security paradigm. Privileged actions are increasingly performed across hybrid ecosystems, from on-prem and cloud to databases and SaaS, and organizations have lost the centralized point of control over privileged access they once relied on.
Organizations can no longer depend on standing, always-on access. They must shift toward dynamic and ephemeral models. Short-lived credentials, just-in-time access, tightly scoped permissions, and continuous monitoring help ensure agents can complete specific tasks at the moment of action without holding more power than they need.
This kind of approach supports innovation while reducing the blast radius if something goes wrong.
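The shift from standing access to ephemeral grants can be sketched in a few lines. This is a simplified assumption of how a just-in-time credential service might behave, using an opaque token, a tight scope, and a short time-to-live; a production system would issue signed tokens through a real identity provider.

```python
import secrets
import time

# Illustrative just-in-time grant: a credential scoped to one task,
# expiring in minutes, so the agent never holds standing access.
def issue_jit_credential(agent_id: str, scope: list[str],
                         ttl_seconds: int = 300) -> dict:
    return {
        "agent_id": agent_id,
        "token": secrets.token_urlsafe(32),
        "scope": scope,                            # only what this task needs
        "expires_at": time.time() + ttl_seconds,   # short-lived by default
    }

def is_authorized(cred: dict, action_scope: str) -> bool:
    """Check the grant at the moment of action: unexpired and in scope."""
    return time.time() < cred["expires_at"] and action_scope in cred["scope"]

cred = issue_jit_credential("report-agent-07", scope=["read:sales_db"])
```

Because the credential carries its own expiry and scope, revocation is largely automatic: once the task window closes, the grant is useless, which is exactly the "reduced blast radius" the approach is meant to deliver.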
Managing offboarding risks
Just as important as onboarding and governance is offboarding. When a human leaves the organization, access is revoked and accounts are closed. With autonomous agents, there is often no clear lifecycle event that triggers the same cleanup.
An agent may be retired quietly, replaced by something new, or simply forgotten. If no one is watching, that identity can remain in place with access it no longer needs. An unmanaged agent with lingering privileges becomes an easy target and a hidden entry point into critical systems.
Extending discovery and lifecycle processes to identify idle or orphaned agents, and removing them promptly, is essential to keeping the environment clean and reducing long-term risk.
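A discovery sweep of this kind can be as simple as comparing each agent's last activity against an idle threshold and flagging anything ownerless. The field names and 30-day threshold below are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sweep: flag agent identities that are idle past a threshold,
# or have no accountable owner, as candidates for deprovisioning.
IDLE_THRESHOLD = timedelta(days=30)  # assumed policy, tune per organization

def find_orphaned_agents(agents: list[dict], now: datetime) -> list[str]:
    orphaned = []
    for agent in agents:
        idle_for = now - agent["last_active"]
        if idle_for > IDLE_THRESHOLD or agent.get("owner") is None:
            orphaned.append(agent["agent_id"])
    return orphaned

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
agents = [
    {"agent_id": "etl-agent", "owner": "data-team",
     "last_active": now - timedelta(days=2)},
    {"agent_id": "old-bot", "owner": None,
     "last_active": now - timedelta(days=90)},
]
print(find_orphaned_agents(agents, now))  # ['old-bot']
```

Run regularly, a sweep like this turns the "quietly retired and forgotten" agent from a hidden entry point into a routine cleanup item.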
Human oversight is still key
Even in a world of autonomous systems, humans remain central. Every agent should ultimately be tied back to a person or team responsible for its behavior. Sensitive actions should require human approval. Activity should be clearly visible and auditable so teams can understand not just what happened, but why.
Autonomy does not remove accountability. If anything, it raises the bar for oversight, because the pace and scale of machine-driven activity leave less room for error. Organizations that build clear ownership and human-in-the-loop controls into their identity programs will be far better positioned to earn trust in how they use AI.
IAM for an always-on workforce
The future of work isn’t simply about humans using AI. It’s about a blended workforce in which people and AI-native agents work alongside one another, each contributing to how the organization operates. With 62% of organizations already experimenting with AI agents, that future is rapidly becoming reality.
Those that thrive will move beyond viewing autonomous agents as background software and begin managing them as digital employees. They’ll establish onboarding processes aligned with HR, implement governance frameworks that can keep pace with machine-speed operations, and enforce offboarding practices that ensure no access points are left exposed.
Now is the time to ready identity and access programs for a workforce that doesn’t clock in, and to acknowledge that in the era of autonomous AI, identity and authorization extend far beyond people alone.


