The Shadow AI Catch-22: Why saying no to generative AI could be the biggest security risk of all

Organizations prohibiting the use of Generative AI services to prevent data leakage may inadvertently be creating a far more dangerous security landscape. When controlled, enterprise-aligned AI services are not provided, employees turn to unsanctioned tools, ushering in the rise of Shadow AI.

In the current cybersecurity landscape, the most dangerous stance a Chief Information Security Officer can take regarding Artificial Intelligence (AI) is often the one that feels the safest: saying “no”.

Publicly available research shows that roughly one in four organizations initially responded to generative AI by banning its use. However, an MIT study, State of AI in Business 2025, revealed that while only 40% of companies have purchased an official Large Language Model (LLM) subscription, workers from over 90% of companies surveyed reported regular use of personal AI tools for work tasks. In other words, bans have not stopped adoption, only driven it out of sight, exposing a widening gap between policy and operational reality.

While management’s first instinct might be to prohibit the use of Generative AI services to minimize data leakage risk, this response paradoxically creates a far more hostile security environment. By failing to provide controlled, organization-aligned ways to benefit from AI services, organizations unintentionally drive usage underground. This “Shadow AI”, a variant of “Shadow IT”, is the unregulated, invisible use of AI services that bypasses standard governance, compliance, and security controls.

Organizations have been battling different versions of “Shadow IT” for decades, but none has been as disruptive as unsanctioned generative AI. Simply uploading a screenshot from a personal device to an AI chatbot is enough to leak confidential internal information or intellectual property.


Productivity versus security

To understand why Shadow AI is an inherent risk of the “just say no” strategy, we must first understand user motivation. Employees who turn to unsanctioned tools are not malicious; they are simply trying to be more productive. They face pressure to write better, code faster, and deliver insights more quickly, often knowing that peers or competitors already leverage AI for these tasks.

When an organization blocks access to enterprise-grade AI tools without offering a sanctioned alternative, it places employees in a dilemma. The employee knows that a free, publicly available tool can bring great value to the organization. Faced with a choice between following a security policy and meeting a tight deadline, many will choose the latter. Even well-intentioned employees may inadvertently expose sensitive information if they cannot clearly distinguish what is proprietary.

By refusing to sanction a tool, the organization does not stop the activity; it just strips away the ability to monitor it. The security team now trades a managed risk for an unmanaged one.


The real risks of Shadow AI

When organizations fail to offer approved AI tools, that gap is quickly filled by consumer-grade applications. These tools may look polished, but they lack the confidentiality, controls, and governance required for enterprise use.
The risks here are severe, often outweighing the risks of a managed rollout, and can be summarized as follows:

A. Data leakage and third-party model training

The most cited risk of Shadow AI is data exfiltration, where data is intentionally or accidentally transferred to third parties. Consumer AI services may use input to retrain their models, creating a scenario in which sensitive internal information, whether source code, client data, or proprietary documents, could be exposed outside the organization. In 2023, Samsung engineers unintentionally uploaded proprietary source code into a public AI chatbot, illustrating how easily sensitive information can end up in systems not designed for enterprise confidentiality.

B. The black hole

You cannot secure what you cannot see. When AI use is driven underground, it bypasses access policies, access logs, and identity management systems. If an employee uses a personal device or personal account to interact with an AI chatbot, there is no audit trail, no logging, and no enforceable retention or deletion policy. If a breach occurs via that vector, forensic analysis is impossible because the traffic never passed through the corporate perimeter.
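To make the contrast concrete, here is a minimal, hypothetical sketch of the audit trail a sanctioned AI gateway could preserve, and that personal-account usage cannot. The handle_prompt function, the forward_to_llm stub, and the log fields are illustrative assumptions, not a reference to any specific product.

```python
# Hypothetical sketch of audit logging in a sanctioned AI gateway.
# Field names and the forward_to_llm stub are illustrative assumptions.

import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai-gateway-audit")

def forward_to_llm(prompt: str) -> str:
    """Stand-in for the call to the enterprise LLM provider."""
    return f"(model response to {len(prompt)} chars of input)"

def handle_prompt(user_id: str, prompt: str) -> str:
    # Record who asked what and when, without storing the raw prompt,
    # so a later forensic investigation has something to work with.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }))
    return forward_to_llm(prompt)

if __name__ == "__main__":
    handle_prompt("alice@example.com", "Summarize the Q3 incident report.")
```

Even a record this thin, who asked what and when, is more than a public chatbot accessed from a personal account will ever give an investigator.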

C. Regulatory non-compliance

For sectors like finance and healthcare, Shadow AI steadily erodes the pillars of compliance. Even a seemingly harmless request, such as summarizing an internal report or patient note, can constitute a direct violation of frameworks such as GDPR or HIPAA when performed through public AI tools. Research by the Nuffield Trust in December 2025 found that 28% of GPs surveyed are using AI tools in their clinical practice, with 11% using public tools and 4% using a combination of public and enterprise tools. Without a controlled, sanctioned “safe zone” for these tasks, employees may not even realize they are violating compliance standards until it is too late.

In addition, Shadow AI introduces data integrity and decision quality risks. AI tools may hallucinate facts or produce convincing but incorrect summaries. When these outputs are unknowingly incorporated into reports, submissions, or strategic decisions, the organization is left accountable for errors it cannot trace or explain.

Without guidance on AI usage, employees often do not realize they are violating compliance rules or compromising decision integrity until the damage is already done.


Make enablement the new perimeter

The only effective counter to Shadow AI is to bring it into the light. This requires a shift from a “gatekeeper” mindset to a “guardrail” mindset, which can be achieved with the following three steps:

  1. Act: Provide an enterprise license for a major LLM or AI service. This allows enforcement of Single Sign-On (SSO), encryption, and legally binding commitments that the provider will not use your data for training.
  2. Create a feedback loop: The sanctioned service should be easier to access than the Shadow AI alternative. Corporate AI services may lag behind consumer versions, so listen to your employees’ feedback and ensure that good ideas are acted upon.
  3. Amend your data policies: Define clearly which data classification levels can and cannot be used with which AI tools, and ensure the guidance is practical and widely communicated; a minimal sketch of such a mapping follows this list.
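
As an illustration of step 3, the sketch below encodes a data-classification policy as an explicit mapping from tools to the highest classification level each is approved to handle. The classification levels, tool names, and ceilings are hypothetical assumptions for the example, not a standard; the point is that the policy becomes checkable rather than a document nobody reads.

```python
# Hypothetical sketch of a data-classification policy check for AI tools.
# Classification levels, tool names, and the mapping below are illustrative
# assumptions; adapt them to your own data policy.

from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Highest classification level each tool is approved to handle.
TOOL_CEILING = {
    "public-chatbot": Classification.PUBLIC,        # consumer tool, no contract
    "enterprise-llm": Classification.CONFIDENTIAL,  # SSO, no-training clause
    "on-prem-llm": Classification.RESTRICTED,       # data never leaves the org
}

def is_use_allowed(tool: str, data_level: Classification) -> bool:
    """Return True if the given tool is sanctioned for data at this level."""
    ceiling = TOOL_CEILING.get(tool)
    if ceiling is None:
        return False  # unknown tool: treat as Shadow AI, deny by default
    return data_level <= ceiling

if __name__ == "__main__":
    print(is_use_allowed("public-chatbot", Classification.CONFIDENTIAL))  # False
    print(is_use_allowed("enterprise-llm", Classification.INTERNAL))      # True
    print(is_use_allowed("unknown-tool", Classification.PUBLIC))          # False
```

The deny-by-default rule for unknown tools mirrors the argument of this article: anything outside the sanctioned list is, by definition, Shadow AI.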


Conclusion

The Shadow AI risk organizations face today is not a failure of technology; it is a failure of strategy. As we move further into a future shaped by AI, the organizations with the most control and the fewest incidents will not be those imposing the strictest bans, but those that offer simple, safe, and approved ways for employees to use and experiment with AI. When people have easy access to the right tools, security becomes a natural side effect of convenience. By providing controlled, monitored access to AI tools, security teams can transform AI from a shadow risk into a strategic asset. The cost of providing these tools is measurable; the cost of ignoring Shadow AI is incalculable.
