
Insider Threat Matrix™

  • ID: DT048
  • Created: 2nd June 2024
  • Updated: 19th July 2024
  • Contributor: The ITM Team

Data Loss Prevention Solution

A Data Loss Prevention (DLP) solution refers to the policies, technologies, and controls that prevent the accidental or deliberate loss, misuse, or theft of data by members of an organization. Typically, DLP technology takes the form of a software agent installed on organization endpoints (such as laptops and servers).

Typical DLP technology alerts on the potential loss of data, or on activity that indicates the potential for data loss, and may also respond automatically to prevent data loss on a device.
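
The alerting behavior described above can be sketched as a toy content-inspection pass, which is the core of what a DLP agent runs continuously on an endpoint. The patterns, file selection, and alert format below are illustrative assumptions, not those of any real DLP product:

```python
import re
from pathlib import Path

# Hypothetical markers a DLP policy might flag; real products ship far
# richer detection (exact data matching, fingerprinting, OCR, and so on).
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like number
]

def scan_file(path: Path) -> list[str]:
    """Return the patterns that match the file's text content."""
    text = path.read_text(errors="ignore")
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]

def scan_directory(root: Path) -> dict[str, list[str]]:
    """Alert-style output: file path -> matched patterns, for files with hits."""
    hits = {}
    for path in root.rglob("*.txt"):
        matched = scan_file(path)
        if matched:
            hits[str(path)] = matched
    return hits
```

A real agent would hook file-system and network events rather than polling a directory, and would feed hits into an automated-response policy (block, quarantine, or alert).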

Sections

ID Name Description
IF010 Exfiltration via Email

A subject uses electronic mail to exfiltrate data.

ME007 Privileged Access

A subject has privileged access to devices, systems or services that hold sensitive information.

ME009 FTP Servers

A subject is able to access external FTP servers.

ME010 SSH Servers

A subject is able to access external SSH servers.

ME023 Sensitivity Label Leakage

Sensitivity label leakage refers to the exposure or misuse of classification metadata, such as Microsoft Purview Information Protection (MIP) sensitivity labels, through which information about the nature, importance, or confidentiality of a file is unintentionally or deliberately disclosed. While the underlying content of the document may remain encrypted or otherwise protected, the presence and visibility of sensitivity labels alone can reveal valuable contextual information to an insider.

This form of leakage typically occurs when files labeled with sensitivity metadata are transferred to insecure locations, shared with unauthorized parties, or surfaced in logs, file properties, or collaboration tool interfaces. Labels may also be leaked through misconfigured APIs, email headers, or third-party integrations that inadvertently expose metadata fields. The leakage of sensitivity labels can help a malicious insider identify and prioritize high-value targets or navigate internal systems with greater precision, without needing immediate access to the protected content.

Examples of Use:

  • An insider accesses file properties on a shared drive to identify documents labeled Highly Confidential with the intention of exfiltrating them later.
  • Sensitivity labels are exposed in outbound email headers or logs, revealing the internal classification of attached files.
  • Files copied to an unmanaged device retain their label metadata, inadvertently disclosing sensitivity levels if examined later.
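
To illustrate why label metadata alone is revealing: Office files are ZIP containers, and Microsoft Purview persists sensitivity labels as custom document properties with names of the form MSIP_Label_<guid>_... . The sketch below lists those property names from a .docx without touching the (possibly protected) document body; the XML handling is deliberately simplified with a regex rather than a full parser:

```python
import re
import zipfile

# Office documents are ZIP containers; Microsoft Purview persists
# sensitivity labels as custom properties named "MSIP_Label_<guid>_...".
MSIP_RE = re.compile(r'name="(MSIP_Label_[^"]+)"')

def read_label_properties(docx_path: str) -> list[str]:
    """Return MSIP label property names found in a document's metadata.

    The document body may be encrypted, but these property names alone
    reveal that (and sometimes how) the file is classified.
    """
    with zipfile.ZipFile(docx_path) as zf:
        if "docProps/custom.xml" not in zf.namelist():
            return []
        xml = zf.read("docProps/custom.xml").decode("utf-8", errors="ignore")
    return MSIP_RE.findall(xml)
```

This is exactly the kind of inspection an insider can perform on a shared drive to locate high-value files, which is why label metadata should be treated as sensitive in its own right.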
MT021 Conflicts of Interest

A subject may be motivated by personal, financial, or professional interests that directly conflict with their duties and obligations to the organization. This inherent conflict of interest can lead the subject to engage in actions that compromise the organization’s values, objectives, or legal standing.

For instance, a subject who serves as a senior procurement officer at a company may have a financial stake in a vendor company that is bidding for a contract. Despite knowing that the vendor's offer is subpar or overpriced, the subject might influence the decision-making process to favor that vendor, as it directly benefits their personal financial interests. This conflict of interest could lead to awarding the contract in a way that harms the organization, such as incurring higher costs, receiving lower-quality goods or services, or violating anti-corruption regulations.

The presence of a conflict of interest can create a situation where the subject makes decisions that intentionally or unintentionally harm the organization, such as promoting anti-competitive actions, distorting market outcomes, or violating regulatory frameworks. While the subject’s actions may be hidden behind professional duties, the conflict itself acts as the driving force behind unethical or illegal behavior. These infringements can have far-reaching consequences, including legal ramifications, financial penalties, and damage to the organization’s reputation.

ME025 Placement

A subject’s placement within an organization shapes their potential to conduct insider activity. Placement refers to the subject’s formal role, business function, or proximity to sensitive operations, intellectual property, or critical decision-making processes. Subjects embedded in trusted positions—such as those in legal, finance, HR, R&D, or IT—often possess inherent insight into internal workflows, organizational vulnerabilities, or confidential information.

Strategic placement can grant the subject routine access to privileged systems, classified data, or internal controls that, if exploited, may go undetected for extended periods. Roles that involve oversight responsibilities or authority over process approvals can also allow for policy manipulation, the suppression of alerts, or the facilitation of fraudulent actions.

Subjects in these positions may not only have a higher capacity to carry out insider actions but may also be more appealing targets for adversarial recruitment or collusion, given their potential to access and influence high-value organizational assets. The combination of trust, authority, and access tied to their placement makes them uniquely positioned to execute or support malicious activity.

ME024 Access

A subject holds access to both physical and digital assets that can enable insider activity. This includes systems such as databases, cloud platforms, and internal applications, as well as physical environments like secure office spaces, data centers, or research facilities. When a subject has access to sensitive data or systems—especially with broad or elevated privileges—they present an increased risk of unauthorized activity.

Subjects in roles with administrative rights, technical responsibilities, or senior authority often have the ability to bypass controls, retrieve restricted information, or operate in areas with limited oversight. Even standard user access, if misused, can facilitate data exfiltration, manipulation, or operational disruption. Weak access controls—such as excessive permissions, lack of segmentation, shared credentials, or infrequent reviews—further compound this risk by enabling subjects to exploit access paths that should otherwise be limited or monitored.

Furthermore, subjects with privileged or strategic access may be more likely to be targeted for recruitment by external parties to exploit their position. This can include coercion, bribery, or social engineering designed to turn a trusted insider into an active participant in malicious activities.

MT017 Espionage

A subject carries out covert actions, such as the collection of confidential or classified information, for the strategic advantage of a nation-state.

MT009 Fear of Reprisals

A subject accesses and exfiltrates or destroys sensitive data or otherwise contravenes internal policies in an attempt to prevent professional reprisals against them or other persons.

MT016 Human Error

The subject has no threatening motive and is not reckless in their actions. The infringement is a result of an honest mistake made by the subject.

MT008 Lack of Awareness

A subject is unaware that they are prohibited from accessing and exfiltrating or destroying sensitive data or otherwise contravening internal policies.

MT003 Leaver

A subject leaving the organisation retains access to sensitive data and intends to exfiltrate it or otherwise contravene internal policies.

MT013 Misapprehension or Delusion

A subject accesses and exfiltrates or destroys sensitive data or otherwise contravenes internal policies as a result of motives not grounded in reality.

MT002 Mover

A subject moves to a different team within the organisation with the intent to gain access to sensitive data, circumvent controls, or otherwise contravene internal policies.

MT004 Political or Philosophical Beliefs

A subject is motivated by their political or philosophical beliefs to access and destroy or exfiltrate sensitive data or otherwise contravene internal policies.

MT015 Recklessness

The subject does not have a threatening motive. However, the subject undertakes actions without due care and attention to the outcome, which causes an infringement.

MT007 Resentment

A subject is motivated by resentment towards the organisation to access and exfiltrate or destroy data or otherwise contravene internal policies. 

MT019 Rogue Nationalism

A subject, driven by excessive pride in their nation, country, or region, undertakes actions that harm an organization. These actions are self-initiated and conducted unilaterally, without instruction or influence from legitimate authorities within their nation, country, region, or any other third party. The subject often perceives their actions as acts of loyalty or as benefiting their homeland.

While the subject may believe they are acting in their nation’s best interest, their actions frequently lack strategic foresight and can result in significant damage to the organization.

MT010 Self Sabotage

A subject accesses and exfiltrates or destroys sensitive data or otherwise contravenes internal policies with the aim to be caught and penalised.

MT006 Third Party Collusion Motivated by Personal Gain

A subject is recruited by a third party to access and exfiltrate or destroy sensitive data or otherwise contravene internal policies in exchange for personal gain.

MT012 Coercion

A subject is persuaded against their will to access and exfiltrate or destroy sensitive data, or conduct some other act that harms or undermines the target organization. 

MT022 Boundary Testing

The subject deliberately pushes or tests organizational policies, rules, or controls to assess tolerance levels, detect oversight gaps, or gain a sense of impunity. While initial actions may appear minor or exploratory, boundary testing serves as a psychological and operational precursor to more serious misconduct.

Characteristics

  • Motivated by curiosity, challenge-seeking, or early-stage dissatisfaction.
  • Actions often start small: minor policy violations, unauthorized accesses, or circumvention of procedures.
  • Rationalizations include beliefs that policies are overly rigid, outdated, or unfair.
  • Boundary testing behavior may escalate if it is unchallenged, normalized, or inadvertently rewarded.
  • Subjects often seek to gauge the likelihood and severity of consequences before considering larger or riskier actions.
  • Testing may be isolated or gradually evolve into opportunism, retaliation, or deliberate harm.

Example Scenario

A subject repeatedly circumvents minor IT security controls (e.g., bypassing content filters, using personal devices against policy) without immediate consequences. Encouraged by the lack of enforcement, the subject later undertakes unauthorized data transfers, rationalizing the behavior based on perceived inefficiencies and low risk of detection.

IF010.001 Exfiltration via Corporate Email

A subject exfiltrates information using their corporate-issued mailbox, either via software or webmail. They will access the conversation at a later date to retrieve information on a different system.

IF010.002 Exfiltration via Personal Email

A subject exfiltrates information using a mailbox they own or have access to, either via software or webmail. They will access the conversation at a later date to retrieve information on a different system.

PR015.003 Email Forwarding Rule

The subject creates an email forwarding rule to transport any incoming emails from one mailbox to another.
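
Forwarding rules of this kind are discoverable in mailbox rule exports. The sketch below assumes rules have been retrieved as JSON shaped like the Microsoft Graph messageRules resource (the actions.forwardTo / actions.redirectTo fields); the org_domain value and sample rule names are hypothetical:

```python
# A hunting sketch over mailbox rules already exported as JSON, e.g. from
# the Microsoft Graph endpoint GET /me/mailFolders/inbox/messageRules.
# The field layout (actions.forwardTo / actions.redirectTo) follows Graph's
# messageRule resource; adapt it for other mail platforms.

def external_forwarding_rules(rules: list[dict], org_domain: str) -> list[str]:
    """Return names of rules that forward or redirect mail outside org_domain."""
    flagged = []
    for rule in rules:
        actions = rule.get("actions") or {}
        recipients = (actions.get("forwardTo") or []) + (actions.get("redirectTo") or [])
        for recipient in recipients:
            address = recipient.get("emailAddress", {}).get("address", "")
            if address and not address.lower().endswith("@" + org_domain.lower()):
                flagged.append(rule.get("displayName", "<unnamed>"))
                break
    return flagged
```

Running such a check periodically across all mailboxes, and alerting on newly created external forwarding rules, is a common DLP and mail-hygiene control.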

IF002.005 Exfiltration via Physical Documents

A subject transports physical documents outside of the control of the organization.

ME005.002 SD Cards

A subject can mount and write to an SD card, either directly from the system, or through a USB connector.

ME006.001 Webmail

A subject can access personal webmail services in a browser.

ME006.002 Cloud Storage

A subject can access personal cloud storage in a browser.

IF002.007 Exfiltration via Target Disk Mode

When a Mac is booted into Target Disk Mode (by powering the computer on whilst holding the ‘T’ key), it acts as an external storage device, accessible from another computer via Thunderbolt, USB, or FireWire connections. A subject with physical access to the computer, and the ability to control boot options, can copy any data present on the target disk, bypassing the need to authenticate to the target computer.

IF018.001 Exfiltration via AI Chatbot Platform History

A subject intentionally submits sensitive information when interacting with a public Artificial Intelligence (AI) chatbot (such as ChatGPT and xAI Grok). They will access the conversation at a later date to retrieve information on a different system.

IF018.002 Reckless Sharing on AI Chatbot Platforms

A subject recklessly interacts with a public Artificial Intelligence (AI) chatbot (such as ChatGPT and xAI Grok), leading to the inadvertent sharing of sensitive information. The submission of sensitive information to public AI platforms risks exposure due to potential inadequate data handling or security practices. Although some platforms are designed not to retain specific personal data, the reckless disclosure could expose the information to unauthorized access and potential misuse, violating data privacy regulations and leading to a loss of competitive advantage through the exposure of proprietary information.

IF004.006 Exfiltration via Python Listening Service

A subject may employ a Python-based listening service to exfiltrate organizational data, typically as part of a self-initiated or premeditated breach. Python’s accessibility and versatility make it a powerful tool for creating custom scripts capable of transmitting sensitive data to external or unauthorized internal systems.

In this infringement method, the subject configures a Python script—often hosted externally or on a covert internal system—to listen for incoming connections. A complementary script, running within the organization’s network (such as on a corporate laptop), transmits sensitive files or data streams to the listening service using common protocols such as HTTP or TCP, or via more covert channels including DNS tunneling, ICMP, or steganographic methods. Publicly available tools such as PyExfil can facilitate these operations, offering modular capabilities for exfiltrating data across multiple vectors.

Examples of Use:

  • A user sets up a lightweight Python HTTP listener on a personal VPS and writes a Python script to send confidential client records over HTTPS.
  • A developer leverages a custom Python socket script to transfer log data to a system outside the organization's network, circumventing monitoring tools.
  • An insider adapts an open-source exfiltration framework like PyExfil to send data out via DNS queries to a registered domain.

Detection Considerations:

  • Monitor for local Python processes opening network sockets or binding to uncommon ports.
  • Generate alerts on outbound connections to unfamiliar IP addresses or those exhibiting anomalous traffic patterns.
  • Utilize endpoint detection and response (EDR) solutions to flag scripting activity involving file access and external communications.
  • Inspect Unified Logs, network flow data, and system audit trails for signs of unauthorized data movement or execution of custom scripts.
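
The first two considerations can be approximated as a triage pass over a process-and-socket inventory, such as an EDR agent or a psutil sweep might collect. The inventory schema, port allowlist, and process names below are assumptions for illustration only:

```python
# Flag interpreter processes that are bound and listening on ports outside
# an allowlist -- the pattern used by the Python listening services
# described above. Input records mimic a collected socket inventory.

WELL_KNOWN_OK = {22, 80, 443}          # illustrative allowlist, tune per environment
INTERPRETERS = ("python", "python3")   # process names treated as scripting hosts

def suspicious_listeners(sockets: list[dict]) -> list[dict]:
    """sockets: [{'pid', 'process', 'port', 'status'}, ...] -> flagged entries."""
    return [
        s for s in sockets
        if s["status"] == "LISTEN"
        and s["process"].lower().startswith(INTERPRETERS)
        and s["port"] not in WELL_KNOWN_OK
    ]
```

In production this logic belongs in the EDR or SIEM layer, correlated with the outbound-connection and file-access signals listed above rather than used in isolation.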
IF022.001 Intellectual Property Theft

A subject misappropriates, discloses, or exploits proprietary information, trade secrets, creative works, or internally developed knowledge obtained through their role within the organization. This form of data loss typically involves the unauthorized transfer or use of intellectual assets—such as source code, engineering designs, research data, algorithms, product roadmaps, marketing strategies, or proprietary business processes—without the organization's consent.

Intellectual property theft can occur during employment or around the time of offboarding, and may involve methods such as unauthorized file transfers, use of personal storage devices, cloud synchronization, or improper sharing with third parties. The consequences can include competitive disadvantage, breach of contractual obligations, and significant legal and reputational harm.

IF022.002 PII Leakage (Personally Identifiable Information)

PII (Personally Identifiable Information) leakage refers to the unauthorized disclosure, exposure, or mishandling of information that can be used to identify an individual, such as names, addresses, phone numbers, national identification numbers, financial data, or biometric records. In the context of insider threat, PII leakage may occur through negligence, misconfiguration, policy violations, or malicious intent.

Insiders may leak PII by sending unencrypted spreadsheets via email, exporting user records from customer databases, misusing access to HR systems, or storing sensitive personal data in unsecured locations (e.g., shared drives or cloud storage without proper access controls). In some cases, PII may be leaked unintentionally through logs, collaboration platforms, or default settings that fail to mask sensitive fields.

The consequences of PII leakage can be severe—impacting individuals through identity theft or financial fraud, and exposing organizations to legal penalties, reputational harm, and regulatory sanctions under frameworks such as GDPR, CCPA, or HIPAA.

Examples of Infringement:

  • An employee downloads and shares a list of customer contact details without authorization.
  • PII is inadvertently exposed in error logs or email footers shared externally.
  • HR data containing employee National Insurance or Social Security numbers is copied to a personal cloud storage account.
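
Detection of the patterns above often starts with simple content matching. The regexes below are illustrative sketches only; production PII detection adds validation (checksums, context keywords, proximity rules) to keep false positives manageable:

```python
import re

# Illustrative patterns for a few PII categories named in this section.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Map each PII category to the strings matched in the text."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}
```

A DLP policy would run checks like this against outbound email bodies, attachments, and uploads, then block or alert on hits according to severity.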
IF022.003 PHI Leakage (Protected Health Information)

PHI Leakage refers to the unauthorized, accidental, or malicious exposure, disclosure, or loss of Protected Health Information (PHI) by a healthcare provider, health plan, healthcare clearinghouse (collectively, "covered entities"), or their business associates. Under the Health Insurance Portability and Accountability Act (HIPAA) in the United States, PHI is defined as any information that pertains to an individual’s physical or mental health, healthcare services, or payment for those services that can be used to identify the individual. This includes medical records, treatment history, diagnosis, test results, and payment details.

HIPAA imposes strict regulations on how PHI must be handled, stored, and transmitted to ensure that individuals' health information remains confidential and secure. The Privacy Rule within HIPAA outlines standards for the protection of PHI, while the Security Rule mandates safeguards for electronic PHI (ePHI), including access controls, encryption, and audit controls. Any unauthorized access, improper sharing, or accidental exposure of PHI constitutes a breach under HIPAA, which can result in significant civil and criminal penalties, depending on the severity and nature of the violation.

In addition to HIPAA, other countries have established similar protections for PHI. For example, the General Data Protection Regulation (GDPR) in the European Union protects personal health data as part of its broader data protection laws. Similarly, Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) governs the collection, use, and disclosure of personal health information by private-sector organizations. Australia also has regulations under the Privacy Act 1988 and the Health Records Act 2001, which enforce stringent rules for the handling of health-related personal data.

This infringement occurs when an insider—whether maliciously or through negligence—exposes PHI in violation of privacy laws, organizational policies, or security protocols. Such breaches can involve unauthorized access to health records, improper sharing of medical information, or accidental exposure of sensitive health data. These breaches may result in severe legal, financial, and reputational consequences for the healthcare organization, including penalties, lawsuits, and loss of trust.

Examples of Infringement:

  • A healthcare worker intentionally accesses a patient's medical records without authorization for personal reasons, such as to obtain information on a celebrity or acquaintance.
  • An employee negligently sends patient health data to the wrong recipient via email, exposing sensitive health information.
  • An insider bypasses security controls to access and exfiltrate medical records for malicious use, such as identity theft or selling PHI on the dark web.
IF023.001 Export Violations

Export violations occur when a subject engages in the unauthorized transfer of controlled goods, software, technology, or technical data to foreign persons or destinations, in breach of applicable export control laws and regulations. These laws are designed to protect national security, economic interests, and international agreements by restricting the dissemination of sensitive materials and know-how.

Such violations often involve the failure to obtain the necessary export licenses, misclassification of export-controlled items, or the improper handling of technical data subject to regulatory oversight. The relevant legal frameworks may include the International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), and similar export control regimes in other jurisdictions.

Insiders may contribute to export violations by sending restricted files abroad, sharing controlled technical specifications with foreign nationals (even within the same organization), or circumventing export controls through the use of unauthorized communication channels or cloud services. These actions are considered violations regardless of the recipient’s sanction status and may occur entirely within legal jurisdictions if export-controlled information is shared with unauthorized individuals.

Export violations are distinct from sanction violations in that they pertain specifically to the nature of the goods, data, or services exported, and the mechanism of transfer, rather than the status of the recipient.

Failure to comply with export control laws can result in civil and criminal penalties, loss of export privileges, and reputational damage to the organization.

IF023.002 Sanction Violations

Sanction violations involve the direct or indirect engagement in transactions with individuals, entities, or jurisdictions that are subject to government-imposed sanctions. These restrictions are typically enforced by regulatory bodies such as the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC), the United Nations, the European Union, and equivalent authorities in other jurisdictions.

Unlike export violations, which focus on the control of goods and technical data, sanction violations concern the status of the receiving party. A breach occurs when a subject facilitates, authorizes, or executes transactions that provide economic or material support to a sanctioned target—this includes sending payments, delivering services, providing access to infrastructure, or sharing non-controlled information with a restricted party.

Insiders may contribute to sanction violations by bypassing compliance checks, falsifying documentation, failing to screen third-party recipients, or deliberately concealing the sanctioned status of a partner or entity. Such conduct can occur knowingly or as a result of negligence, but in either case, it exposes the organization to serious legal and financial consequences.

Regulatory enforcement for sanctions breaches may result in significant penalties, asset freezes, criminal prosecution, and reputational damage. Organizations are required to maintain robust compliance programs to monitor and prevent insider-driven violations of international sanctions regimes.

IF023.003 Anti-Trust or Anti-Competition

Anti-trust or anti-competition violations occur when a subject engages in practices that unfairly restrict or distort market competition, violating laws designed to protect free market competition. These violations can involve a range of prohibited actions, such as price-fixing, market division, bid-rigging, or the abuse of dominant market position. Such behavior typically aims to reduce competition, manipulate pricing, or create unfair advantages for certain businesses or individuals.

Anti-competition violations may involve insiders leveraging their position to engage in anti-competitive practices, often for personal or corporate gain. These violations can result in significant legal and financial penalties, including fines and sanctions, as well as severe reputational damage to the organization involved.

Examples of Anti-Trust or Anti-Competition Violations:

  • A subject shares sensitive pricing or bidding information between competing companies, enabling coordinated pricing or market manipulation.
  • An insider with knowledge of a merger or acquisition shares details with competitors, leading to coordinated actions that suppress competition.
  • An employee uses confidential market data to form agreements with competitors on market control, stifling competition and violating anti-trust laws.

Regulatory Framework:

Anti-trust or anti-competition laws are enforced globally by various regulatory bodies. In the United States, the Federal Trade Commission (FTC) and the Department of Justice (DOJ) regulate anti-competitive behavior under the Sherman Act, the Clayton Act, and the Federal Trade Commission Act. In the European Union, the European Commission enforces anti-trust laws under the Treaty on the Functioning of the European Union (TFEU) and the Competition Act.

ME024.001 Access to Customer Data

A subject with access to customer data holds the ability to view, retrieve, or manipulate personally identifiable information (PII), account details, transactional records, or support communications. This level of access is common in roles such as customer service, technical support, sales, marketing, and IT administration.

Access to customer data can become a means of insider activity when misused for purposes such as identity theft, fraud, data exfiltration, competitive intelligence, or unauthorized profiling. The sensitivity and volume of customer information available may significantly elevate the risk profile of the subject, especially when this access is unmonitored, overly broad, or lacks audit controls.

In some cases, subjects with customer data access may also be targeted by external threat actors for coercion or recruitment, given their ability to obtain regulated or high-value personal information. Organizations must consider how customer data is segmented, logged, and monitored to reduce exposure and detect misuse.

ME024.002 Access to Privileged Groups and Non-User Accounts

A subject with access to privileged groups (e.g., Domain Admins, Enterprise Admins, or Security Groups) or non-user accounts (such as service accounts, application identities, or shared mailboxes) gains elevated control over systems, applications, and sensitive organizational data. Access to these groups or accounts often provides the subject with knowledge of security configurations, user roles, and potentially unmonitored or sensitive activities that occur within the system.

Shared mailboxes, in particular, are valuable targets. These mailboxes are often used for group communication across departments or functions, containing sensitive or confidential information, such as internal discussions on financials, strategic plans, or employee data. A subject with access to shared mailboxes can gather intelligence from ongoing conversations, identify targets for further exploitation, or exfiltrate sensitive data without raising immediate suspicion. These mailboxes may also bypass some security filters, as their contents are typically considered routine and may not be closely monitored.

Access to privileged accounts and shared mailboxes also allows subjects to escalate privileges, alter system configurations, access secure data repositories, or manipulate security settings, making it easier to both conduct malicious activities and cover their tracks. Moreover, service and application accounts often have broader access rights across systems or environments than typical user accounts and are frequently excluded from standard monitoring protocols, offering potential pathways for undetected exfiltration or malicious action.

This elevated access gives subjects insight into critical system operations and internal communications, such as unencrypted data flows or internal vulnerabilities. This knowledge not only heightens their potential for malicious conduct but can also make them a target for external threat actors seeking to exploit this elevated access.

ME024.005 Access to Physical Spaces

Subjects with authorized access to sensitive physical spaces—such as secure offices, executive areas, data centers, SCIFs (Sensitive Compartmented Information Facilities), R&D labs, or restricted zones in critical infrastructure—pose an increased insider threat due to their physical proximity to sensitive assets, systems, and information.

Such spaces often contain high-value materials or information, including printed sensitive documents, whiteboard plans, authentication devices (e.g., smartcards or tokens), and unattended workstations. A subject with physical presence in these locations may observe confidential conversations, access sensitive output, or physically interact with devices outside of typical security monitoring.

This type of access can be leveraged to:

  • Obtain unattended or discarded sensitive information, such as printouts, notes, or credentials left on desks.
  • Observe operational activity or decision-making, gaining insight into projects, personnel, or internal dynamics.
  • Access unlocked devices or improperly secured terminals, allowing direct system interaction or credential harvesting.
  • Bypass digital controls via physical means, such as tailgating into secure spaces or using misappropriated access cards.
  • Covertly install or remove equipment, such as data exfiltration tools, recording devices, or physical implants.
  • Eavesdrop on confidential conversations, either directly or through concealed recording equipment, enabling the collection of sensitive verbal disclosures, strategic discussions, or authentication procedures.

Subjects in roles that involve frequent presence in sensitive locations—such as cleaning staff, security personnel, on-site engineers, or facility contractors—may operate outside the scope of standard digital access control and may not be fully visible to security teams focused on network activity.

Importantly, individuals with this kind of access are also potential targets for recruitment or coercion by external threat actors seeking insider assistance. The ability to physically access secure environments and passively gather high-value information makes them attractive assets in coordinated attempts to obtain or compromise protected information.

The risk is magnified in organizations lacking comprehensive physical access policies, surveillance, or cross-referencing of physical and digital access activity. When unmonitored, physical access can provide a silent pathway to support insider operations without leaving traditional digital footprints.

ME025.002 Leadership and Influence Over Direct Reports

A subject with a people management role holds significant influence over their direct reports, which can be leveraged to conduct insider activities. As a leader, the subject is in a unique position to shape team dynamics, direct tasks, and control the flow of information within their team. This authority presents several risks, as the subject may:

 

  • Influence team members to inadvertently or deliberately carry out tasks that contribute to the subject’s insider objectives. For instance, a manager might ask a subordinate to access or move sensitive data under the guise of a legitimate business need, or direct them to work on projects that unknowingly advance a malicious agenda.
  • Exert pressure on employees to bypass security protocols, disregard organizational policies, or perform actions that could compromise the organization’s integrity. For example, a manager might encourage their team to take shortcuts in security or compliance checks to meet deadlines or targets.
  • Control access to sensitive information, either by virtue of the manager’s role or through the information shared within their team. A people manager may have direct visibility into highly sensitive internal communications, strategic plans, and confidential projects, which can be leveraged for malicious purposes.
  • Isolate team members or limit their exposure to security training, potentially creating vulnerabilities within the team that could be exploited. By controlling the flow of information or limiting access to security awareness resources, a manager can enable an environment conducive to insider threats.
  • Recruit or hire individuals within their team or external candidates who are susceptible to manipulation or willing to participate in insider activities. A subject in a management role could use their hiring influence to bring in new team members who align with or are manipulated into assisting in the subject's illicit plans, increasing the risk of coordinated insider actions.

 

Beyond these immediate risks, a subject in a people management role may recruit members of their team into insider activity, subtly influencing them to support illicit actions or to help conceal them. By fostering loyalty or manipulating interpersonal relationships, the subject can encourage compliance with unethical actions, making the behavior more difficult for others to detect or challenge.

 

Given the central role that managers play in shaping team culture and operational practices, the risks posed by a subject in a management position are compounded by their ability to both directly influence the behavior of others and manipulate processes for personal or malicious gain.

IF022.004Payment Card Data Leakage

A subject with access to payment environments or transactional data may deliberately or inadvertently leak sensitive payment card information. Payment Card Data Leakage refers to the unauthorized exposure, transmission, or exfiltration of data governed by the Payment Card Industry Data Security Standard (PCI DSS). This includes both Cardholder Data (CHD)—such as the Primary Account Number (PAN), cardholder name, expiration date, and service code—and Sensitive Authentication Data (SAD), which encompasses full track data, card verification values (e.g., CVV2, CVC2, CID), and PIN-related information.

 

Subjects with privileged, technical, or unsupervised access to point-of-sale systems, payment gateways, backend databases, or log repositories may mishandle or deliberately exfiltrate CHD or SAD. In some scenarios, insiders may exploit access to system-level data stores, intercept transactional payloads, or scrape logs that improperly store SAD in violation of PCI DSS mandates. This may include exporting payment data in plaintext, capturing full card data from logs, or replicating data to unmonitored environments for later retrieval.

 

Weak controls, such as the absence of data encryption, improper tokenization of PANs, misconfigured retention policies, or lack of field-level access restrictions, can facilitate misuse by insiders. In some cases, access may be shared or escalated informally, bypassing formal entitlement reviews or just-in-time provisioning protocols. These gaps in security can be manipulated by a subject seeking to leak or profit from payment card data.

 

Insiders may also use legitimate business tools—such as reporting platforms or data exports—to intentionally bypass obfuscation mechanisms or deliver raw payment data to unauthorized recipients. Additionally, compromised service accounts or insider-created backdoors can provide long-term persistence for continued exfiltration of sensitive data.

 

Data loss involving CHD or SAD often triggers mandatory breach disclosures, regulatory scrutiny, and severe financial penalties. Such incidents also pose reputational risks, particularly when they undermine consumer trust or payment processing agreements. In high-volume environments, even small-scale leaks can result in widespread exposure of customer data and downstream fraud.
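 

DLP tooling commonly flags candidate PANs by pairing a digit-run pattern with the Luhn checksum, which sharply reduces false positives from arbitrary numeric strings. The following is a minimal illustrative sketch of that heuristic (not a PCI DSS-mandated control, and the function names are our own):

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

# Candidate PANs: 13-19 digits, optionally separated by spaces or hyphens.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def find_candidate_pans(text: str) -> list:
    """Scan free text (e.g. an application log line) for Luhn-valid PAN-like strings."""
    hits = []
    for match in PAN_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A real deployment would also apply issuer prefix (BIN) checks and context rules; this sketch only demonstrates why simple keyword-based DLP signatures are insufficient for detecting CHD in logs and exports.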

MT005.002Corporate Espionage

A third-party private organization deploys an individual into a target organization to covertly steal confidential or classified information, or to gain strategic access, for its own benefit.

MT005.003Financial Desperation

A subject facing financial difficulties attempts to resolve their situation by exploiting their access to or knowledge of the organization. This may involve selling access or information to a third party or conspiring with others to cause harm to the organization for financial gain.

MT005.001Speculative Corporate Espionage

A subject covertly collects confidential or classified information, or gains access, with the intent to sell it to a third-party private organization.

IF001.006Exfiltration via Generative AI Platform

The subject transfers sensitive, proprietary, or classified information into an external generative AI platform through text input, file upload, API integration, or embedded application features. This results in uncontrolled data exposure to third-party environments outside organizational governance, potentially violating confidentiality, regulatory, or contractual obligations.

 

Characteristics

  • Involves manual or automated transfer of sensitive data through:
      • Web-based AI interfaces (e.g., ChatGPT, Claude, Gemini).
      • Upload of files (e.g., PDFs, DOCX, CSVs) for summarization, parsing, or analysis.
      • API calls to generative AI services from scripts or third-party SaaS integrations.
      • Embedded AI features inside productivity suites (e.g., Copilot in Microsoft 365, Gemini in Google Workspace).
  • Subjects may act with or without malicious intent—motivated by efficiency, convenience, curiosity, or deliberate exfiltration.
  • Data transmitted may be stored, cached, logged, or used for model retraining, depending on provider-specific terms of service and API configurations.
  • Exfiltration through generative AI channels often evades traditional DLP (Data Loss Prevention) patterns due to novel data formats, variable input methods, and encrypted traffic.
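 

Because prompts and uploads travel over TLS, a common first-line control is egress monitoring against known generative AI service domains. A minimal, hypothetical Python sketch follows; the domain list and the proxy log schema (`user`, `host`, `bytes_out` columns) are assumptions for illustration, and a production policy would also inspect POST bodies and file uploads:

```python
import csv
import io

# Assumed watchlist of generative AI endpoints; maintain from threat intel feeds.
AI_DOMAINS = {
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "api.openai.com",
    "api.anthropic.com",
}

def flag_ai_egress(proxy_log_csv: str) -> list:
    """Return proxy log rows whose destination matches a known generative AI service.

    Expects CSV text with 'user', 'host', and 'bytes_out' columns
    (a hypothetical log schema used only for this sketch).
    """
    flagged = []
    for row in csv.DictReader(io.StringIO(proxy_log_csv)):
        host = row["host"].lower()
        # Match the domain itself or any subdomain of it.
        if host in AI_DOMAINS or any(host.endswith("." + d) for d in AI_DOMAINS):
            flagged.append(row)
    return flagged
```

Domain matching alone cannot distinguish benign from sensitive use, so alerts like these are typically correlated with upload size, file type, and the subject's data access history before escalation.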

 

Example Scenario

A subject copies sensitive internal financial projections into a public generative AI chatbot to "optimize" executive presentation materials. The AI provider, per its terms of use, retains inputs for service improvement and model fine-tuning. Sensitive data—now stored outside corporate control—becomes vulnerable to exposure through potential data breaches, subpoena, insider misuse at the service provider, or future unintended model outputs.