Preventions
- ID: PV078
- Created: 22nd October 2025
- Updated: 22nd October 2025
- Contributor: David Larsen
Service Account Classification and Scope Limitation
Establish and enforce strict classification, ownership, and access scope limitations for all service accounts. These non-human accounts often hold elevated privileges and operate without the same oversight as user accounts. When left ungoverned, they create blind spots in forensic reconstruction, increase the risk of lateral movement, and enable subjects to access sensitive systems without attribution.
Service accounts must be treated as operational identities, not technical abstractions. Without rigorous control, they are a frequent vector for privilege misuse, staging, and exfiltration behaviors.
Key Prevention Measures
- Maintain a centralized inventory of all service accounts using identity providers such as Microsoft Entra ID, Okta, or on-premises Active Directory.
- Require each service account to have a documented business owner responsible for its purpose and review.
- Record the account's assigned system or integration point, authentication method, and intended function.
- Tag all service accounts explicitly in directory metadata as non-human.
- Block service accounts from interactive login, remote desktop sessions, and GUI-based authentication.
- Use conditional access policies to restrict service account access to predefined IP ranges and service endpoints only.
- Require credential rotation on all service accounts using platforms such as CyberArk, HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault.
- Implement just-in-time provisioning and session expiration for elevated service accounts using Privileged Access Management (PAM) tools.
- Audit all service account permissions monthly to ensure least-privilege alignment with documented needs.
- Automatically disable service accounts not used within a defined operational window unless a justified exemption is recorded.
- Generate alerts when service accounts are used outside expected time windows, from unauthorized locations, or to access sensitive resources unrelated to their documented function.
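Several of the measures above (ownership records, stale-account disablement, and blocking interactive logon by non-human accounts) can be checked mechanically against an inventory and a logon feed. The sketch below is illustrative only: the record fields, the 90-day window, and the `logon_type` values are assumptions, not the schema of any particular identity provider.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory records; field names are illustrative and would map
# onto whatever your identity provider or CMDB actually exposes.
INVENTORY = [
    {"name": "svc-deploy", "owner": "jane.doe", "non_human": True,
     "last_used": datetime(2025, 10, 20, tzinfo=timezone.utc)},
    {"name": "svc-legacy-etl", "owner": None, "non_human": True,
     "last_used": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]

STALE_WINDOW = timedelta(days=90)  # example operational window

def audit(inventory, logons, now=None):
    """Return policy findings for a service-account inventory.

    `logons` is an iterable of dicts with 'account', 'logon_type', and
    'source_ip' keys, loosely mirroring fields found in directory logs.
    """
    now = now or datetime.now(timezone.utc)
    findings = []
    for acct in inventory:
        # Every service account needs a documented business owner.
        if acct["owner"] is None:
            findings.append((acct["name"], "no documented business owner"))
        # Accounts unused beyond the operational window are disablement candidates.
        if now - acct["last_used"] > STALE_WINDOW:
            findings.append((acct["name"], "unused beyond operational window"))
    # Non-human accounts should never appear in interactive logon events.
    tagged = {a["name"] for a in inventory if a["non_human"]}
    for event in logons:
        if event["account"] in tagged and event["logon_type"] == "interactive":
            findings.append((event["account"],
                             "interactive logon by non-human account"))
    return findings
```

In practice these checks would run on a schedule against exported directory data, with findings routed to the documented account owner for remediation or exemption.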
Investigator Considerations
- Service accounts used interactively are red flags during insider threat investigations, often indicating evasion of attribution or misuse of automation.
- Misclassified or shared service accounts inhibit incident reconstruction and may obscure which subject initiated a given action.
- High-volume data access by service accounts should be correlated with staging or exfiltration windows.
- Accounts with privileged access but no assigned owner should be considered security gaps and reviewed as priority investigative artifacts.
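The correlation of high-volume service account access with staging or exfiltration windows can be sketched as a simple baseline comparison. The event shape, the per-account baseline table, and the 5x threshold below are illustrative assumptions; a production detection would draw baselines from historical telemetry.

```python
from collections import Counter
from datetime import datetime, timezone

def flag_high_volume(events, baseline_per_hour, multiplier=5):
    """Flag service accounts whose hourly access volume exceeds
    `multiplier` times their documented baseline.

    `events`: iterable of (account, timestamp) tuples from access logs.
    `baseline_per_hour`: dict mapping account name to its expected
    hourly access volume. Both structures are illustrative.
    """
    # Bucket events into (account, hour) bins.
    per_hour = Counter(
        (acct, ts.replace(minute=0, second=0, microsecond=0))
        for acct, ts in events
    )
    flagged = []
    for (acct, hour), count in per_hour.items():
        baseline = baseline_per_hour.get(acct, 0)
        if baseline and count > multiplier * baseline:
            flagged.append((acct, hour, count))
    return flagged
```

Flagged intervals can then be overlaid against suspected staging windows during an investigation to prioritize which service account activity to reconstruct first.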
Sections
| ID | Name | Description |
|---|---|---|
| AF024 | Account Misuse | The subject deliberately misuses account constructs to obscure identity, frustrate attribution, or undermine investigative visibility. This includes the use of shared, secondary, abandoned, or illicitly obtained accounts in ways that violate access integrity and complicate forensic analysis.
Unlike traditional infringement behaviors, account misuse in the anti-forensics context is not about the action itself—but about how identity is obfuscated or displaced to conceal that action. These behaviors sever the link between subject and activity, impeding both real-time detection and retrospective investigation.
Investigators encountering unexplainable log artifacts, attribution conflicts, or unexpected session collisions should assess whether account misuse is being used as a deliberate concealment tactic. Particular attention should be paid in environments lacking centralized identity governance or with known privilege sprawl.
Account misuse as an anti-forensics strategy often coexists with more overt infringements—enabling data exfiltration, sabotage, or policy evasion while preserving plausible deniability. As such, its detection is crucial to understanding subject intent, tracing activity with confidence, and restoring the chain of custody in incident response. |
| ME030 | Enterprise-Integrated AI Platforms | A subject operates within an environment where artificial intelligence (AI) platforms or agents are integrated across multiple enterprise systems, providing centralized access to data, services, or functionality within the organization.
These platforms are typically deployed to support productivity, knowledge retrieval, automation, or decision-making. As part of their implementation, they may be connected to internal repositories, collaboration tools, identity systems, ticketing platforms, or other business-critical services. Integration is often achieved through APIs, service accounts, or enterprise-wide indexing capabilities.
As a result, the AI platform may provide:
This form of integration creates a consolidated access layer within the environment that differs from standard user interaction patterns. Rather than accessing systems individually, the subject may interact with multiple data sources or services through the AI platform.
In some cases, the scope of access available through the platform may not align precisely with role-based access expectations, particularly where data is aggregated, summarized, or retrieved across systems. The platform may also operate with service account permissions or API-level access that are not directly accessible to the subject through traditional interfaces or individual user access controls, creating a divergence between user-level access and effective access via the platform.
This Section captures the availability of AI platforms that are integrated into the enterprise environment with broad access to data or systems. While deployed for legitimate operational purposes, such platforms may provide expanded capability that can be leveraged by a subject in the course of insider activity. |
| AF024.002 | Unauthorized Credential Use | The subject employs valid credentials that were obtained outside of sanctioned provisioning channels to conceal their identity or perform actions under a false or misleading identity. This behavior, categorized as unauthorized credential use, is distinct from traditional account compromise—it reflects insider-enabled misuse, not external intrusion.
Credentials may be acquired through casual observation (e.g., shoulder surfing or unlocked workstations), social engineering, prior access (e.g., retained credentials from a former role), or covert means such as password capture tools. In some cases, credentials may be voluntarily shared by a collaborator or acquired opportunistically from unmonitored or abandoned accounts.
This tactic allows the subject to dissociate their actions from their known identity, delay detection, and in some cases, redirect suspicion to another individual. When used within privileged or high-sensitivity environments, unauthorized credential use can enable significant harm while bypassing conventional identity-based controls and alerting mechanisms.
Unlike service account sharing or account obfuscation (which involve legitimate, active credentials assigned to the subject), this behavior revolves around unauthorized access to credentials not formally linked to the subject. Investigators should prioritize this sub-section when audit trails show activity under an identity that does not correspond to role expectations, known behavioral patterns, or device history.
Key forensic indicators include:
Unauthorized credential use is a high-risk concealment technique and often coincides with malicious or high-impact infringements. |
| IF025.001 | Service Account Sharing | A subject deliberately shares credentials for non-personal, persistent service accounts (e.g., admin, automation, deployment) with other individuals, either within or outside their team. These accounts often lack individual attribution, and when shared, they create a pool of untracked, unaccountable access.
Service account sharing typically emerges in high-pressure operational environments where speed or convenience is prioritized over access hygiene. Teams may rationalize the behavior as necessary to meet deployment deadlines, maintain uptime, or circumvent perceived access bottlenecks. In other cases, access may be extended informally to external collaborators, such as contractors or partner engineers, without proper onboarding or oversight.
When service account credentials are distributed, they become functionally equivalent to a shared key—undermining all identity-based controls. Investigators lose the ability to reliably associate actions with individuals, making forensic attribution difficult or impossible. This gap often delays incident response and enables repeated policy violations without detection.
Service accounts also frequently carry elevated privileges, operate without MFA, and are excluded from normal UAM logging, compounding the risk. Their use in this manner represents not just a technical misstep, but a structural breakdown in control integrity and accountability. In environments with compliance obligations or segmented access controls, service account sharing is a critical investigative red flag and should trigger formal review. |
| ME021.001 | User Account Credentials | User account credentials issued to the subject during their employment are not revoked upon role change or departure and remain usable. |
| IF028.002 | AI Agent Privilege Exploitation | A subject commits an infringement by exploiting the elevated, aggregated, or differently scoped permissions of an artificial intelligence (AI) agent to obtain access to restricted data or systems beyond their authorized role.
This behavior occurs when an AI agent operates with service account privileges, enterprise-wide indexing authority, cross-platform integrations, or API-level permissions that exceed the subject’s direct interactive access. The subject intentionally leverages that authority to retrieve, view, or extract protected information.
The infringement is established when the AI agent accesses restricted repositories, datasets, or systems that the subject could not lawfully access using their own credentials. The harm lies in the bypass of role-based access controls through delegated authority.
Examples include:
The defining characteristic is delegated access control bypass. The AI agent exercises permissions that differ from or exceed the subject’s own access scope, and the subject exploits that differential to obtain protected information.
The subject remains fully accountable for the misuse of the agent’s authority. The infringement arises from leveraging expanded system trust to circumvent established access controls. |
| IF028.003 | AI Agent Impersonation Execution | A subject commits an infringement by delegating impersonation activity to an artificial intelligence (AI) agent that autonomously or semi-autonomously executes deceptive communications within or outside the organization.
This behavior occurs when a subject configures or tasks an AI agent to replicate the identity, tone, authority, or communication style of another individual (such as an executive, HR representative, legal counsel, or trusted colleague) and the agent executes impersonation actions that result in material harm.
The AI agent may be directed to:
Unlike manual impersonation, this behavior involves delegated execution. The AI agent operates as the impersonation engine, producing and transmitting deceptive content at scale or with persistence beyond what the subject could realistically maintain manually.
Examples include:
The infringement is established when the AI agent executes deceptive communications that result in fraud, credential compromise, unauthorized disclosure, reputational harm, or operational disruption.
The defining characteristic is the autonomous execution of impersonation through an AI agent acting under the subject’s direction.
The subject remains fully accountable for the deception and resulting harm. The AI agent amplifies realism, adaptability, and scale, significantly increasing the effectiveness and persistence of impersonation-based misconduct. |
| PR035.001 | AI Agent Data Staging | A subject prepares for potential insider activity by directing an artificial intelligence (AI) agent to aggregate, organize, or transform sensitive organizational data into structured or portable formats.
This behavior occurs when an AI agent is tasked with systematically collecting information from internal repositories and consolidating it into outputs that are easier to store, review, transfer, or exploit. The agent performs bulk summarization, data normalization, or cross-repository aggregation that significantly reduces the effort required to later misuse the information.
Unlike reconnaissance activities that focus on discovering intelligence, AI Agent Data Staging focuses on operational preparation of data. The AI agent converts dispersed or complex internal information into consolidated outputs that increase its portability, usability, or accessibility outside its original context.
Examples include:
The defining characteristic of this Sub-section is the delegated consolidation of sensitive information. The subject leverages the AI agent to perform scalable data preparation that increases the volume, portability, or usability of organizational data.
While the staged data may not yet have been transferred outside the organization, the consolidation process materially lowers the effort required to exfiltrate or exploit it. In environments where AI platforms possess broad repository visibility, this capability can significantly accelerate the preparation phase of insider activity. |
| IF028.001 | AI Agent Internal Reconnaissance | A subject causes harm by directing an artificial intelligence (AI) agent to generate unauthorized internal intelligence about the organization—information the subject would not reasonably possess through legitimate role-based access or business need.
This behavior occurs when an AI agent is used to systematically query, correlate, or infer sensitive internal information by leveraging its broad integrations, indexing authority, or analytical capabilities. The resulting intelligence may reveal restricted projects, executive decisions, legal matters, acquisition activity, investigation status, architectural weaknesses, or other sensitive organizational insight.
Unlike routine search or manual browsing, AI agent internal reconnaissance enables:
In certain enterprise deployments, authorized AI platforms possess integration-level access across knowledge bases, ticketing systems, document repositories, messaging platforms, and identity directories. A subject may deliberately leverage this aggregated visibility to extract or infer intelligence beyond their legitimate business scope.
Examples include:
The infringement is established when the agent-generated intelligence materially exceeds legitimate business need and results in unauthorized exposure of sensitive organizational insight. The harm lies in the unauthorized acquisition of internal intelligence—particularly where that intelligence enables subsequent exploitation, trading, coercion, or strategic misuse.
The defining characteristic is not merely access, but computational synthesis. The AI agent transforms distributed internal data into actionable intelligence that the subject could not reasonably derive manually within policy boundaries. |
| ME030.001 | AI Platform Aggregated Data Access | A subject has access to an artificial intelligence (AI) platform that aggregates data from multiple internal systems and presents it through a unified interface, where access controls are insufficiently enforced or misaligned with underlying role-based access restrictions.
These platforms are typically configured to index, query, or retrieve information from enterprise repositories such as file storage systems, collaboration platforms, knowledge bases, and internal documentation systems. Data from these sources may be combined, summarized, or surfaced in response to a single query.
In some implementations, the platform aggregates data across repositories without consistently applying the access controls of the underlying systems. As a result, information may be surfaced through the AI interface that the subject would not ordinarily access through direct interaction with those systems.
The AI platform may provide:
This access model creates a divergence between the subject’s direct access permissions and the information available to them through the AI platform. Data that is distributed, restricted, or contextually separated within underlying systems may be surfaced together through aggregated queries.
The presence of aggregated data access with insufficiently constrained access controls provides the subject with a means to obtain information beyond their intended role-based scope, particularly where enterprise-wide indexing or broad query capabilities are implemented. |
| ME030.002 | AI Platform System Interaction Capability | A subject has access to an artificial intelligence (AI) platform that is integrated with internal systems and capable of interacting with those systems through APIs, service accounts, automation frameworks, or agent interaction protocols (e.g., Model Context Protocol (MCP)), where the platform operates with permissions or capabilities that exceed typical user-level access controls.
These platforms are connected to enterprise systems such as identity services, ticketing platforms, communication tools, file storage systems, and other operational applications. Integration enables the platform to execute actions, retrieve data, or interact with system functionality on behalf of the user.
In some implementations, the platform is granted broad or persistent permissions to support automation and cross-system functionality. These permissions may not align precisely with the subject’s role-based access and may allow the platform to perform actions or retrieve data beyond what the subject could achieve through direct interaction with the underlying systems.
The AI platform may:
This interaction model creates a divergence between the subject’s direct capabilities and the effective capabilities available through the AI platform. Actions that would normally require elevated access, multi-system coordination, or additional authorization may be performed through the platform’s integrated functionality.
The presence of AI platforms with system interaction capability and insufficiently constrained permissions provides the subject with a means to interact with internal systems and services beyond their intended role-based authority. |
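The divergence described in ME030.001 and ME030.002 (information reachable through the AI platform's service identity but not through the subject's own role-based permissions) can be expressed, in the simplest case, as a set difference. This is a minimal sketch assuming permissions can be enumerated as flat resource sets; real comparisons would need to resolve nested groups, scoped API tokens, and indexing-level grants.

```python
def effective_access_divergence(user_perms, platform_perms):
    """Resources reachable via the AI platform's service identity but
    not via the subject's own role-based permissions.

    Both arguments are iterables of resource identifiers. A plain set
    difference is a deliberate simplification: production enforcement
    checks would expand group memberships and per-scope grants first.
    """
    return set(platform_perms) - set(user_perms)
```

A non-empty result indicates that the platform grants the subject effective access beyond their role-based scope and that its integration permissions warrant review.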