Shadow AI in Business: The Invisible Threat to AI Act Compliance and Performance

Shadow AI — the unauthorized use of artificial intelligence tools within an organization — poses a critical threat to compliance with both the GDPR and the EU AI Act. Identifying hidden AI usage, establishing clear governance, and promoting responsible innovation are essential steps to turn this invisible risk into a driver of secure, compliant, and controlled innovation.

By Rémy Bozonnet

In the corporate world, the term Shadow AI refers to the unauthorized, unsupervised, or uncontrolled use of artificial intelligence (AI) tools — particularly generative AI — by employees or departments without validation or coordination from IT, security, or compliance teams.

While often driven by a genuine desire for innovation or productivity gains, such practices introduce major risks: data leaks, regulatory non-compliance, bias, loss of traceability, technological fragmentation, and reputational damage.

This article explores:

  • What Shadow AI is and why it is spreading
  • The business, security, and compliance risks it brings
  • Its tension with the EU AI Act
  • The key strategies to effectively fight Shadow AI

What is Shadow AI?

The concept stems from “Shadow IT” — the use of applications without IT approval. Shadow AI goes further: users interact directly with public AI models (chatbots, APIs, text or code generators), inject sensitive or strategic data, or even build their own AI agents — without the organization’s knowledge or oversight.

Why is Shadow AI particularly risky?

  • AI models can retain user-provided data (“model leakage”) and reproduce it in other contexts
  • Access is easy — via free tools, web interfaces, or plugins in office software
  • The data involved is often sensitive: internal documents, client information, or financial data
  • Typically, there is no logging or audit trail, making investigations or incident responses difficult

A rapidly growing phenomenon

Recent studies show that over a quarter of data entered into public AI tools is now sensitive, and three out of four employees have used an unapproved AI system at work. Yet, fewer than one in three companies have formal AI governance policies in place.

Business and Compliance Risks under the AI Act

Security and Data Leak Risks

Employees may unknowingly transmit strategic information — source code, client data, product designs — to external AI tools. This data may be stored on foreign servers, reused for model training, or exposed to third parties. Without oversight, the risks of data leakage, espionage, and loss of intellectual property become severe.

Regulatory Compliance Risks

Uncontrolled processing of personal or regulated data through uncertified AI tools exposes companies to violations of GDPR and the AI Act. Without documentation or traceability, proving compliance to regulators or partners becomes impossible.

Operational and Reputational Risks

Decisions or analyses generated by unvalidated AI tools can be false, biased, or inconsistent, causing business errors and undermining reliability. At the same time, a proliferation of disparate tools drives up costs and fragments workflows. A single public incident tied to Shadow AI can cause lasting reputational harm and erode customer trust.

Governance and Ethical Risks

Undeclared AI usage weakens overall data and model governance: lack of human oversight, undetected biases, and no explainability. This raises critical ethical and accountability concerns that go beyond cybersecurity.

Why Is Shadow AI Growing?

Multiple Causes:

  • Accessibility – AI tools are simple, free, and powerful
  • Productivity pressure – employees seek to save time without waiting for IT validation
  • Lack of clear policy – the absence of a defined framework leaves room for individual initiatives
  • Hybrid work and SaaS – decentralized environments make unsupervised use easier

A Hard-to-Contain Phenomenon:

Shadow AI is more insidious than Shadow IT because it’s not just about tools — it’s about behavior. Copying confidential text into a chatbot or generating code with a public model is nearly invisible to IT departments. Without clear governance, these micro-actions multiply and escape control.

Shadow AI vs. AI Act: Balancing Innovation and Compliance

The EU AI Act introduces strict requirements for classification and governance of AI systems based on their risk level: minimal, limited, high, or prohibited. It mandates transparency, technical documentation, auditability, and human oversight.

By definition, Shadow AI completely bypasses these requirements:

  • Impossible to identify the system’s risk category
  • No documentation or risk assessment available
  • No control over transparency or data traceability
  • No guarantee of human supervision or explainability

This invisibility exposes companies to structural non-compliance, with potential fines of up to €35 million or 7% of global annual turnover, whichever is higher, under the AI Act.

The stricter the European regulatory framework becomes, the more Shadow AI turns into a critical blind spot. Ensuring AI Act compliance requires a proactive strategy: detecting hidden AI usage, classifying systems, documenting risks, and enforcing clear governance.

How to Fight Shadow AI?

Fighting Shadow AI is not about punishment — it’s about culture, organization, and technology working together to balance innovation and control.

Build a Culture of Responsibility

Employees are not the problem — they are the solution. They must be trained and empowered:

  • Raise awareness about confidentiality and bias risks
  • Communicate clearly which AI tools are approved
  • Encourage feedback to identify unmet needs and drive safe innovation

Provide Safe Alternatives

Offering approved AI tools is the best way to reduce unregulated use:

  • Deploy internal or locally hosted AI assistants
  • Create a catalogue of certified tools with compliance summaries
  • Implement a rapid evaluation process for new AI solutions

Strengthen Detection and Governance

Establish technical monitoring mechanisms:

  • Use DLP, CASB, or Zero Trust solutions to detect unauthorized connections to AI services
  • Conduct regular log audits and behavioral analyses, as sketched after this list
  • Centralize governance with IT, legal, and security departments working together
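
To make the log-audit idea concrete, here is a minimal sketch in Python: it scans an exported proxy log for outbound requests to known public AI services. The domain watchlist and the CSV column names are illustrative assumptions; a real deployment would take both from its proxy, CASB, or threat-intelligence vendor.

```python
import csv
from collections import Counter

# Hypothetical watchlist of public AI service domains; in practice this
# would come from the proxy, CASB, or threat-intelligence vendor feed.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "api.mistral.ai",
}

def scan_proxy_log(path: str) -> Counter:
    """Count outbound requests to known AI services in a proxy log export.

    Assumes a CSV export with a 'host' column; adjust to the actual
    schema of your proxy or secure web gateway.
    """
    hits: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[host] += 1  # aggregate per service, not per user
    return hits

if __name__ == "__main__":
    for host, count in scan_proxy_log("proxy_export.csv").most_common():
        print(f"{host}: {count} requests")
```

Counting requests per service rather than per user keeps the audit on the visibility side of the line: the point is to map exposure, not to monitor individuals.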

Encourage Controlled Experimentation

Allow secure innovation environments (“AI sandboxes”) where teams can test models safely. This approach promotes creativity while ensuring compliance and oversight.

The goal is not to block AI — it’s to regain control and turn every use case into a managed, compliant opportunity.

Conclusion

Shadow AI is one of today’s most subtle yet pervasive corporate threats. It often stems from good intentions — to innovate faster and work smarter — but without governance, it evolves into a systemic risk: legal, technical, ethical, and reputational.

With the AI Act and growing regulatory pressure, companies must shift from reactive control to proactive governance and compliance.

By combining responsible culture, clear policies, and secure solutions, organizations can transform Shadow AI from a hidden danger into a driver of trusted innovation.

FAQ – Shadow AI, Compliance & Governance

Is Shadow AI always illegal or non-compliant?

Not necessarily. The issue isn’t the tool itself but the context of its use. A public AI tool can be used lawfully if the data processed is anonymous and non-sensitive and the use complies with company policy.

However, as soon as personal, strategic, or confidential data is processed without supervision or traceability, the organization becomes legally responsible for a compliance breach (GDPR, trade secrets, confidentiality obligations, etc.).

How can a company detect Shadow AI without creating a surveillance culture?

The goal is not to spy on employees but to create visibility without intrusion. Best practices include:

  • Anonymous network analysis to detect traffic toward known AI services
  • Use of Cloud Access Security Broker (CASB) or Data Loss Prevention (DLP) tools to spot unauthorized data transfers
  • Implementation of voluntary reporting channels (an “AI usage registry”) where employees can declare experiments safely and transparently; one possible registry format is sketched below
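
As an illustration of the last point, a voluntary registry can be as simple as an append-only file of declarations. The schema below is a hypothetical example, not a prescribed format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIUsageDeclaration:
    """One voluntary entry in the AI usage registry (illustrative schema)."""
    declared_on: str        # ISO date of the declaration
    department: str         # team-level, not individual, to keep reporting low-stakes
    tool: str               # e.g. "public chatbot", "code assistant"
    purpose: str            # the business need the tool serves
    data_categories: str    # e.g. "anonymous", "internal", "personal"
    approved: bool = False  # set by the governance committee after review

def declare(registry_path: str, entry: AIUsageDeclaration) -> None:
    """Append a declaration to an append-only JSON-lines registry file."""
    with open(registry_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

declare("ai_registry.jsonl", AIUsageDeclaration(
    declared_on=date.today().isoformat(),
    department="Marketing",
    tool="public chatbot",
    purpose="drafting campaign copy",
    data_categories="internal, non-personal",
))
```

Recording the department rather than the individual keeps declarations low-stakes and encourages honest reporting.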

What are the real consequences of uncontrolled AI usage?

Penalties depend on the type of violation:

  • Personal data breaches → GDPR fines of up to €20 million or 4% of global annual turnover
  • AI Act violations → fines of up to €35 million or 7% of global annual turnover
  • Trade secret or confidentiality breaches → civil lawsuits, contract losses, or disqualification from tenders

Beyond penalties, the reputational and trust damage (from clients, investors, or regulators) is often far more costly.

Does the EU AI Act also apply to internal AI systems?

Yes. The regulation is not limited to commercial AI providers. Any organization that develops, deploys, or modifies an AI system for internal use is subject to obligations such as:

  • Technical documentation and risk assessment
  • Human oversight mechanisms
  • Transparency about the data and decision criteria used

For example, an internal AI system used for customer scoring or recruitment already falls within the AI Act’s scope.
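
As a sketch of what that documentation could look like in practice, the record below captures the risk class, human oversight, and decision criteria for a hypothetical internal system. Field names are assumptions for illustration, not AI Act terminology:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers, as described earlier in the article."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class InternalAISystemRecord:
    """Skeleton of the documentation an internal AI system should carry."""
    name: str
    purpose: str
    risk_level: RiskLevel
    training_data_sources: list[str] = field(default_factory=list)
    human_oversight: str = ""    # who can review and override the system's outputs
    decision_criteria: str = ""  # what the system bases its outputs on

# Hypothetical example: recruitment support is a high-risk use case.
record = InternalAISystemRecord(
    name="candidate-screening-assistant",
    purpose="pre-rank incoming job applications",
    risk_level=RiskLevel.HIGH,
    training_data_sources=["historical hiring decisions, 2019-2024"],
    human_oversight="an HR recruiter validates every shortlist",
    decision_criteria="skills match against the published job description",
)
```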

How can companies reconcile innovation with control?

The key is to channel experimentation rather than block it:

  • Create internal AI Labs where teams can test safely within a secure framework
  • Implement fast-track validation processes for promising tools (security, ethics, legal review)
  • Adopt a “Compliance by Design” approach — embedding compliance into every stage of AI development

This is agile governance: enabling innovation while staying compliant.

How should Legal and IT collaborate on Shadow AI?

Fighting Shadow AI requires cross-functional governance:

  • Legal → assess regulatory, contractual, and GDPR/AI Act risks
  • IT & Security → monitor access, data flows, and integrations
  • HR & Communications → train and raise awareness among employees

A dedicated AI governance committee or a cross-department AI usage charter helps unify actions and avoid conflicting initiatives.

Are there metrics to measure Shadow AI risk?

Yes — several companies are developing AI governance KPIs, such as:

  • % of employees trained on responsible AI use
  • Number of AI tools identified vs officially approved
  • Volume of sensitive data detected in outbound traffic to AI services
  • Share of internal AI projects audited for compliance

These metrics help integrate AI risk management into the company’s overall cybersecurity and compliance maturity model.
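
As a sketch, two of these KPIs can be computed directly from an inventory of identified tools and the approved catalogue; all names and figures below are illustrative:

```python
def shadow_ai_kpis(tools_identified: set[str],
                   tools_approved: set[str],
                   employees_total: int,
                   employees_trained: int) -> dict[str, float]:
    """Compute two of the governance KPIs listed above (illustrative)."""
    unapproved = tools_identified - tools_approved
    return {
        "training_coverage_pct": 100 * employees_trained / employees_total,
        "unapproved_tool_share_pct": 100 * len(unapproved) / max(len(tools_identified), 1),
    }

# Illustrative figures only.
print(shadow_ai_kpis(
    tools_identified={"public chatbot", "code assistant", "internal RAG"},
    tools_approved={"internal RAG"},
    employees_total=400,
    employees_trained=180,
))
# training coverage: 45.0 %, unapproved tool share: ~66.7 %
```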

How can Shadow AI become a strategic opportunity?

Rather than a threat, Shadow AI can be seen as a signal of innovation: unauthorized use often reveals unmet business needs.

By identifying these practices, organizations can:

  • Develop internal tools that truly fit business workflows
  • Prioritize the most valuable AI use cases
  • Build a participatory and responsible AI culture that encourages safe experimentation
