- ID: IF028
- Created: 3rd March 2026
- Updated: 3rd March 2026
- Contributor: The ITM Team
Delegated Execution via Artificial Intelligence Agents
A subject causes organizational harm by delegating the execution of an infringement to an artificial intelligence (AI) agent, including in circumstances where the agent possesses access or authority beyond the subject’s own direct permissions.
This behavior occurs when a subject authorizes, configures, or directs an AI agent to perform operational actions inside the trusted environment that result in unauthorized access, data loss, fraud, operational disruption, or other policy-violating impact. The AI agent executes the harmful activity on the subject’s behalf.
An AI agent, in this context, is a system capable of autonomously or semi-autonomously performing structured tasks, interacting with enterprise systems, invoking APIs, chaining actions across platforms, or maintaining persistent monitoring logic. Unlike simple prompt-based AI use, the agent is empowered to act within organizational systems.
In certain environments, AI agents are deployed with elevated or system-level permissions to enable productivity, indexing, analytics, or workflow automation. A subject may intentionally leverage this broader authority to access data, systems, or functionality that exceeds their own interactive role-based access. When such delegated activity results in material harm, it constitutes an infringement under this Section.
Examples include:
- Directing an AI agent with repository-wide indexing permissions to aggregate sensitive documents outside the subject’s legitimate need.
- Leveraging an AI agent’s service account privileges to enumerate restricted datasets.
- Tasking an AI agent integrated with identity or ticketing systems to extract privileged operational information.
- Using an AI agent’s cross-platform automation authority to stage or transfer data at scale.
The defining characteristic is that the harmful act is executed by the AI agent under authority granted or exploited by the subject. The subject extends the organization’s trust boundary to an autonomous system and operationalizes it to inflict harm.
The subject remains fully accountable for the resulting impact. The AI agent amplifies speed, scale, and efficiency, and in cases of elevated agent permissions, may enable privilege amplification beyond the subject’s direct access.
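The privilege-amplification risk described above can be illustrated as a pre-execution scope comparison. This is a hypothetical sketch, not any product's API: the scope strings, function names, and decision shape are all illustrative assumptions.

```python
# Hypothetical illustration: "privilege amplification" arises when an AI
# agent's service account holds scopes the delegating subject does not.
# A minimal pre-execution guard can surface that gap before the agent acts.

def amplified_scopes(subject_scopes: set[str], agent_scopes: set[str]) -> set[str]:
    """Scopes the agent holds that the delegating subject does not."""
    return agent_scopes - subject_scopes

def review_delegation(subject_scopes, agent_scopes, requested_scopes):
    """Flag a delegated task that relies on authority beyond the subject's own."""
    gap = set(requested_scopes) & amplified_scopes(set(subject_scopes), set(agent_scopes))
    return {"allowed": not gap, "amplified": sorted(gap)}

# Example: the subject can read HR tickets, but the agent's service
# account can also read the finance repository.
decision = review_delegation(
    subject_scopes={"hr.tickets.read"},
    agent_scopes={"hr.tickets.read", "finance.repo.read"},
    requested_scopes={"finance.repo.read"},
)
print(decision)  # {'allowed': False, 'amplified': ['finance.repo.read']}
```

In this sketch, a request is only "allowed" when every scope it needs is one the subject also holds directly, so delegation cannot silently widen effective access.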
Subsections (3)

IF028.003: AI Agent Impersonation Execution

A subject commits an infringement by delegating impersonation activity to an artificial intelligence (AI) agent that autonomously or semi-autonomously executes deceptive communications within or outside the organization.
This behavior occurs when a subject configures or tasks an AI agent to replicate the identity, tone, authority, or communication style of another individual (such as an executive, HR representative, legal counsel, or trusted colleague) and the agent executes impersonation actions that result in material harm.
Unlike manual impersonation, this behavior involves delegated execution. The AI agent operates as the impersonation engine, producing and transmitting deceptive content at scale or with persistence beyond what the subject could realistically maintain manually.
The infringement is established when the AI agent executes deceptive communications that result in fraud, credential compromise, unauthorized disclosure, reputational harm, or operational disruption.
The defining characteristic is the autonomous execution of impersonation through an AI agent acting under the subject’s direction.
The subject remains fully accountable for the deception and resulting harm. The AI agent amplifies realism, adaptability, and scale, significantly increasing the effectiveness and persistence of impersonation-based misconduct.
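One control implied by this subsection is binding displayed identity to the authenticated sender, so an agent cannot transmit messages under another person's name. A minimal sketch under assumed field names (nothing here comes from a real messaging platform's schema):

```python
# Hypothetical sketch: flag delegated impersonation by checking that a
# message's claimed identity matches the authenticated principal that
# actually transmitted it. Field names are illustrative assumptions.

def is_impersonation(message: dict) -> bool:
    """True when the displayed identity is not the authenticated sender."""
    return message["claimed_from"] != message["authenticated_sender"]

msgs = [
    # Agent service account sending as the CEO: mismatch, flag it.
    {"claimed_from": "ceo@example.com", "authenticated_sender": "agent-svc@example.com"},
    # Agent sending under its own identity: no mismatch.
    {"claimed_from": "agent-svc@example.com", "authenticated_sender": "agent-svc@example.com"},
]
print([is_impersonation(m) for m in msgs])  # [True, False]
```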
IF028.001: AI Agent Internal Reconnaissance

A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization: information the subject would not reasonably possess through legitimate role-based access or business need.
This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.
Unlike routine search or manual browsing, AI agent internal reconnaissance can correlate and synthesize information across many systems at once, surfacing insight that isolated, policy-bounded queries would not reveal.
In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.
The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence, particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.

The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries.
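One observable signal for this cross-source synthesis pattern is a single delegated task fanning out across many unrelated data sources. The sketch below assumes a simplified audit-event shape; the field names, threshold, and source labels are illustrative, not drawn from any real monitoring product.

```python
# Hypothetical sketch: flag delegated tasks whose cross-source reach is
# unusually broad, a possible indicator of AI agent internal reconnaissance.

from collections import defaultdict

def fanout_by_task(events):
    """Group agent access events by task and count distinct sources touched."""
    sources = defaultdict(set)
    for e in events:
        sources[e["task_id"]].add(e["source"])
    return {task: len(s) for task, s in sources.items()}

def flag_reconnaissance(events, max_sources=3):
    """Return tasks whose source fan-out exceeds a tunable threshold."""
    return [task for task, n in fanout_by_task(events).items() if n > max_sources]

events = [
    {"task_id": "t1", "source": "wiki"},
    {"task_id": "t1", "source": "tickets"},
    {"task_id": "t1", "source": "hr-files"},
    {"task_id": "t1", "source": "legal-docs"},
    {"task_id": "t2", "source": "wiki"},
]
print(flag_reconnaissance(events))  # ['t1']
```

In practice the threshold would be tuned per role and workflow; the point is that fan-out breadth, not any single access, is the reconnaissance signal.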
IF028.002: AI Agent Privilege Exploitation

A subject commits an infringement by exploiting the elevated, aggregated, or differently scoped permissions of an artificial intelligence (AI) agent to obtain access to restricted data or systems beyond their authorized role.
This behavior occurs when an AI agent operates with service account privileges, enterprise-wide indexing authority, cross-platform integrations, or API-level permissions that exceed the subject’s direct interactive access. The subject intentionally leverages that authority to retrieve, view, or extract protected information.
The infringement is established when the AI agent accesses restricted repositories, datasets, or systems that the subject could not lawfully access using their own credentials. The harm lies in the bypass of role-based access controls through delegated authority.
The defining characteristic is delegated access control bypass. The AI agent exercises permissions that differ from or exceed the subject’s own access scope, and the subject exploits that differential to obtain protected information.
The subject remains fully accountable for the misuse of the agent's authority. The infringement arises from leveraging expanded system trust to circumvent established access controls.
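The delegated access-control bypass described here is a classic confused-deputy pattern, and one mitigation is an on-behalf-of check: before the agent exercises its own broad authority, re-check the delegating subject's entitlements. A minimal sketch with made-up users, resources, and entitlements:

```python
# Hypothetical sketch: prevent delegated access-control bypass by requiring
# that BOTH the agent's service account AND the delegating subject are
# entitled to a resource. All names below are illustrative.

SUBJECT_ENTITLEMENTS = {
    "alice": {"hr/tickets"},
    "bob": {"hr/tickets", "finance/ledger"},
}

def agent_can_access(resource: str) -> bool:
    # The agent's service account is broadly scoped by design.
    return True

def access_on_behalf_of(subject: str, resource: str) -> bool:
    """Allow only when the delegating subject could access the resource directly."""
    subject_ok = resource in SUBJECT_ENTITLEMENTS.get(subject, set())
    return agent_can_access(resource) and subject_ok

print(access_on_behalf_of("alice", "finance/ledger"))  # False
print(access_on_behalf_of("bob", "finance/ledger"))    # True
```

Under this guard the permission differential still exists, but it cannot be exploited: the agent's effective authority collapses to the intersection of its own scope and the subject's.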