Across public sector organizations, AI adoption isn’t starting with the IT roadmap — it’s starting with your frontline teams. From clerks drafting meeting notices to teachers using AI to build lesson plans, artificial intelligence is already reshaping the way work gets done.
According to the 2026 NASCIO Top Ten Priorities report, Artificial Intelligence has officially overtaken Cybersecurity as the number one priority for State CIOs. But while leadership is focused on enterprise strategy, your staff is already experimenting with AI tools in ways that may be putting your organization at risk.
This blog breaks down the hidden risks of Shadow AI, why it matters for compliance and litigation, and what you can do to secure your agency’s innovation.
Shadow AI Data Leakage Risks for FERPA and HIPAA
The biggest red flag with unlicensed AI tools is what happens to the data you enter. Many public sector employees copy and paste sensitive records such as student behavioral notes, disciplinary histories, and billing information into public-facing AI platforms without realizing that content may be used to train future models.
This is not a theoretical concern. The 2025/2026 Center for Democracy and Technology (CDT) report found that 85% of teachers and 86% of students are using AI tools, yet many platforms include trackers or third-party data sharing by default.
For public agencies, this could violate federal privacy laws:
- FERPA (Family Educational Rights and Privacy Act — student records)
- HIPAA (Health Insurance Portability and Accountability Act — health data)
If your staff is using AI tools without vendor vetting or usage controls, you may be exposing sensitive data to platforms that do not offer any contractual protections.

Shadow AI Risks in Critical Infrastructure and Public Safety Systems
CISA’s "Secure by Design" guidance has made it clear: the use of AI in Operational Technology (OT) environments (like water, power, and physical safety systems) introduces real-world risk.
This is not just about chatbots. Shadow AI is now entering school safety platforms, public works systems, and emergency alerting tools. Without proper controls, this opens the door to:
- Unauthorized automation of critical systems
- Vulnerable third-party APIs
- Failure to meet minimum security baselines or audit standards
Unlike consumer tools, enterprise-grade AI includes vendor agreements, audit trails, and control over where data lives. If public sector teams are embedding AI into operational workflows without IT review, it creates systemic risk.
Shadow AI Audit: Identifying Unauthorized Tools in Your Organization
Use these questions to identify the AI "blind spots" in your organization:
- Do we have an Acceptable Use Policy aligned to CISA’s AI Cybersecurity Playbook?
- Can our firewall detect unauthorized traffic to tools like OpenAI, Otter.ai, or Anthropic?
- Are staff using AI meeting assistants in spaces where attorney-client or union-related privacy should apply?
- Have we provided a secure, agency-approved alternative that meets compliance standards?
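As a starting point for answering the second question above, security teams can scan existing proxy or firewall logs for requests to known AI service domains. The sketch below is a hypothetical example: the domain list, CSV log format, and column names (`timestamp`, `user`, `domain`) are assumptions you would adapt to your own firewall or proxy export.

```python
import csv
from collections import Counter

# Hypothetical starter list of common AI service domains to flag.
# Extend this for the tools relevant to your environment.
AI_DOMAINS = (
    "openai.com", "chatgpt.com", "otter.ai",
    "anthropic.com", "claude.ai", "gemini.google.com",
)

def flag_ai_traffic(log_path):
    """Scan a CSV proxy/firewall log (assumed columns: timestamp, user, domain)
    and count requests per (user, domain) to known AI services."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").lower()
            # Match the domain itself or any subdomain (e.g. api.openai.com).
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), domain)] += 1
    return hits

# Usage (assuming an exported log file):
#   for (user, domain), count in sorted(flag_ai_traffic("proxy_log.csv").items()):
#       print(user, domain, count)
```

A report like this will not catch everything (mobile devices, personal accounts, or encrypted DNS can bypass it), but it gives leadership a concrete, data-backed view of which unsanctioned tools are already in use before any policy conversation begins.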
How to Secure AI Use with Governance and Policy
AI use in public sector environments isn’t slowing down, and trying to block it entirely only drives it further underground. The goal is not to shut down innovation, but to provide a secure, governed lane where it can thrive without compromising compliance or safety.
Recommended steps:
- Conduct an AI Discovery Audit: Identify which tools are in use across your network, even informally
- Develop a Policy Framework: Use the NIST AI RMF and CISA guidance to define responsible use
- Deploy Secure Alternatives: Choose enterprise-grade AI solutions that offer data controls, audit trails, and vendor accountability
Strategic AI governance gives you the ability to lead innovation safely, not react to it after a breach or compliance failure.
Frequently Asked Questions: Shadow AI in the Public Sector
What is "Shadow AI" and how does it differ from Shadow IT?
Shadow IT refers to unauthorized hardware or software (like an unapproved messaging app). Shadow AI is the use of unauthorized Artificial Intelligence tools. It is significantly higher risk because AI models often "learn" from the data users input. If a staff member pastes sensitive resident data into a free chatbot, that data may be used to train future versions of the model, leading to a permanent, irreversible data leak.
Does using AI meeting notetakers violate FOIA or public record laws?
In many jurisdictions, yes. AI-generated transcripts and recordings created by unauthorized third-party tools are often considered public records. Because these transcripts are "verbatim" and stored on external servers, they are subject to discovery and FOIA requests. Unlike traditional minutes, these files may capture "off-the-record" comments or privileged attorney-client discussions, creating significant legal liability for the agency.
How can our agency comply with the NIST AI Risk Management Framework?
The NIST AI RMF is the 2026 gold standard for public sector governance. It focuses on four core functions: Govern, Map, Measure, and Manage. Compliance starts with "Mapping" every AI tool currently in use and "Measuring" the risk to data privacy (FERPA/HIPAA). Maverick Networks helps agencies implement these frameworks by providing secure, enterprise-grade AI alternatives that keep data isolated.
Is there an "AI-Safe" way to handle student or patient data?
To safely use AI with regulated data (FERPA/HIPAA), you must use tools that offer Zero Data Retention (ZDR) or are covered by a signed Business Associate Agreement (BAA). These agreements legally ensure the provider does not use your data for model training and maintains the administrative safeguards required by federal law. Standard consumer chatbots typically do not provide these protections.
How do we stop Shadow AI without stifling innovation?
Blocking every AI domain is often counterproductive and drives users to even less secure "workarounds." The most effective strategy is to provide a Secure Agency Alternative. By deploying an approved, enterprise-grade AI platform that staff want to use, you bring innovation into the light while maintaining the oversight, audit logs, and security guardrails required for public service.