5 AI Risks Every Texas Government Employee Must Understand Before Using AI Tools

From hallucinated legal citations to biased decision-making, AI tools present real dangers in government operations. Understand the risks before they become headlines.

AI Is Already in Your Office - Are You Ready for the Risks?

Artificial intelligence is quickly becoming a standard tool in government offices across Texas. Employees are using AI to draft reports, summarize lengthy documents, respond to constituent inquiries, analyze data, and streamline routine tasks. The productivity benefits are real, and they are significant.

But so are the dangers.

Without proper training, government employees can inadvertently create legal liability, expose confidential data, produce discriminatory outcomes, and undermine public trust - all by using AI tools that appear helpful on the surface. These are not hypothetical concerns. They are documented, real-world problems that have already caused serious consequences in both the public and private sectors.

This is precisely why the Texas Legislature passed Texas Government Code Section 2054.5193, which requires annual AI awareness training for all state and local government employees who use a computer for 25% or more of their duties. The law exists because AI tools are powerful - and power without understanding is a liability.

Here are five AI risks that every Texas government employee must understand before using AI tools in their work.

Risk 1: AI Hallucinations and Fabricated Information

"Hallucination" is the term used when an AI model generates information that sounds authoritative and well-sourced but is partially or entirely fabricated. This is not a bug or a rare glitch. It is a fundamental characteristic of how large language models work. These systems predict the most likely next word in a sequence based on patterns in their training data. They do not "know" facts. They generate plausible-sounding text, and sometimes that text contains completely invented content.

The most well-known example occurred in 2023 when a New York attorney used ChatGPT to prepare a legal brief in the case of Mata v. Avianca. The AI generated six fake court case citations, complete with realistic case names, docket numbers, and judicial quotes. None of the cases existed. The attorney submitted the brief to federal court without verifying the citations and was sanctioned by the judge.

Now consider the government context. Imagine an employee at a Texas state agency using an AI tool to draft a policy memo. The AI cites a specific section of the Texas Administrative Code to support a recommendation - but that section does not say what the AI claims, or it does not exist at all. If that memo influences a policy decision, the consequences could range from embarrassing corrections to flawed regulations that affect thousands of residents.

AI hallucinations can produce fabricated statistics for budget reports, nonexistent regulatory requirements in compliance documents, incorrect dates and deadlines for legal filings, and invented precedents in legal analyses. The danger is that hallucinated content often looks indistinguishable from accurate content.

What to do: Treat every AI-generated output as a draft that requires human verification. Cross-check all facts, citations, statistics, and legal references against official sources before including them in any government document. Never assume AI output is accurate simply because it sounds confident.

Risk 2: Bias and Discriminatory Outcomes

AI models learn from data, and that data reflects the world as it has been - not as it should be. When training data contains historical biases related to race, gender, age, disability, socioeconomic status, or geography, the AI model absorbs and reproduces those biases. In many cases, it amplifies them.

This is not a theoretical problem. Research has repeatedly demonstrated that facial recognition systems perform significantly worse on people with darker skin tones - a finding documented by MIT researcher Joy Buolamwini and others. Predictive policing algorithms have been shown to disproportionately target communities of color by learning from arrest data that already reflects decades of biased policing practices. Hiring screening tools have been caught penalizing resumes that contain words associated with women or specific ethnic groups.

For Texas government employees, the implications are direct and serious. Consider an agency using AI to help screen applications for a benefit program. If the model was trained on historical approval data that reflects past discrimination - even unintentional discrimination - the AI may systematically rank certain demographic groups lower. The result is a government system that perpetuates the very inequities it should be working to eliminate.

The legal exposure is substantial. Government agencies are subject to Title VI of the Civil Rights Act, the Americans with Disabilities Act, the Texas Commission on Human Rights Act, and numerous other anti-discrimination statutes. An AI tool that produces biased outcomes does not shield the agency from liability. The agency remains responsible for every decision it makes, regardless of whether an algorithm was involved.

Bias can appear in AI-assisted hiring recommendations, benefit eligibility screening, resource allocation across districts, risk scoring for inspections or enforcement, and even the language AI uses when drafting constituent communications.

What to do: Evaluate AI recommendations for patterns of bias before acting on them. Ask whether certain groups are consistently treated differently. Ensure that AI-assisted decisions are reviewed by qualified humans who can identify discriminatory patterns the tool cannot detect on its own.
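
One concrete way to start this evaluation is to compare outcome rates across groups. The sketch below (with hypothetical data and column names) applies the "four-fifths rule" commonly used in disparate-impact analysis: it flags any group whose approval rate falls below 80 percent of the highest group's rate. It is a screening heuristic, not a legal determination, and it does not replace review by qualified humans.

```python
from collections import defaultdict

# Hypothetical AI screening outcomes: (demographic_group, ai_recommended_approval).
# A real review would use your agency's actual decision data.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [approved_count, total_count]
for group, approved in decisions:
    totals[group][0] += int(approved)
    totals[group][1] += 1

rates = {g: approved / total for g, (approved, total) in totals.items()}
best = max(rates.values())
for group, rate in rates.items():
    # Flag groups approved at less than 80% of the best-treated group's rate.
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: approval rate {rate:.0%} ({flag})")
```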

Risk 3: Data Privacy and Confidentiality Breaches

One of the most common and most dangerous mistakes government employees make with AI tools is pasting sensitive information into publicly available systems like ChatGPT, Google Gemini, or similar services. Many employees do not realize that the text they enter may be stored by the AI provider, used to train future models, or potentially accessible to other users.

Government agencies handle multiple categories of sensitive data. There is personally identifiable information (PII) such as Social Security numbers, addresses, and dates of birth. There is protected health information (PHI) governed by HIPAA. There is criminal justice information governed by CJIS policies. There are attorney-client privileged communications, internal investigative files, and information that is confidential or excepted from public disclosure under Texas Government Code Chapter 552 (the Public Information Act).

When an employee copies a constituent complaint containing personal details into a public AI tool and asks it to draft a response, that constituent's private information has potentially been exposed to a third-party system with no obligation under Texas law to protect it. When a staff member uploads an internal policy draft to an AI summarizer, the contents of that draft may be incorporated into the AI provider's training data and could surface in outputs shown to other users.

Samsung learned this lesson in 2023 when engineers pasted proprietary source code into ChatGPT, resulting in a significant intellectual property exposure. Government agencies face the same risk, but with the added responsibility of protecting the public's data.

Texas has specific data privacy requirements under the Texas Identity Theft Enforcement and Protection Act, the Texas Data Privacy and Security Act, and various federal regulations that apply to state operations. Unauthorized disclosure of protected data - even accidental disclosure through an AI tool - can trigger mandatory breach notification requirements, investigations, and legal consequences.

What to do: Never enter confidential, restricted, or personally identifiable information into any public AI tool. Before using any AI service, check whether your agency has approved it for use and understand what data classification levels it can handle. When in doubt, do not paste it. Use only agency-approved AI tools for sensitive work, and always follow your agency's data classification policies.
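
As a supplemental guardrail, some teams pre-screen text for obvious PII patterns before it leaves the agency. The sketch below is purely illustrative: the patterns are examples only, they will miss many forms of sensitive data, and they are no substitute for agency-approved data loss prevention controls and data classification policies.

```python
import re

# Illustrative pre-screen for a few obvious PII formats. These patterns catch
# common cases (SSNs, phone numbers, email addresses) and will miss many
# others; treat this as a reminder, never as a complete control.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def pii_findings(text: str) -> list[str]:
    """Return the names of PII pattern types detected in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

draft = "Constituent Jane Doe (SSN 123-45-6789) wrote to complain about..."
found = pii_findings(draft)
if found:
    print(f"Blocked: remove {', '.join(found)} before using any AI tool.")
```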

Risk 4: Security Vulnerabilities and AI-Powered Threats

AI does not create risks only through its outputs. It also introduces new categories of cybersecurity threats that government employees must be prepared to recognize.

AI-generated code with hidden flaws. Developers and IT staff who use AI to generate code may receive outputs that contain security vulnerabilities. AI coding assistants can produce code that works correctly on the surface but contains SQL injection vulnerabilities, improper input validation, hardcoded credentials, or insecure authentication logic. If that code is deployed in a government system without thorough security review, it creates attack surfaces that malicious actors can exploit.
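
To make the risk concrete, here is a minimal illustration of one of the most common flaws, SQL injection, in Python. The first query builds SQL by pasting raw user input into the statement, the kind of shortcut AI assistants sometimes generate; the second uses a parameterized query, which keeps the input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE permits (id INTEGER, applicant TEXT)")
conn.execute("INSERT INTO permits VALUES (1, 'Jane Doe'), (2, 'John Roe')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# UNSAFE: string-built SQL of the kind AI assistants sometimes produce.
# The quote in the input breaks out of the literal and returns every row.
unsafe = conn.execute(
    f"SELECT * FROM permits WHERE applicant = '{user_input}'"
).fetchall()
print(len(unsafe))  # 2 - the injection exposed all records

# SAFE: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT * FROM permits WHERE applicant = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 - no applicant literally has that name
```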

AI-powered phishing attacks. Traditional phishing emails often contained spelling errors, awkward grammar, and generic language that trained employees could spot. AI has changed that calculus. Attackers now use AI to generate highly personalized, grammatically perfect phishing messages that mimic the writing style of specific individuals. A government employee might receive an email that appears to come from their supervisor, uses the right terminology, references a real project, and asks them to click a link or provide credentials. These AI-crafted attacks are significantly harder to detect than their predecessors.

Deepfake threats. AI can generate realistic fake audio and video of real people. This technology has already been used in fraud schemes where attackers created deepfake audio of a CEO to authorize a wire transfer. In a government context, deepfakes could be used to impersonate elected officials, fabricate evidence, spread disinformation about government programs, or manipulate public meetings conducted over video calls.

Social engineering at scale. AI enables attackers to automate and personalize social engineering campaigns across hundreds or thousands of government employees simultaneously. Instead of sending one generic phishing email to an entire agency, an attacker can use AI to craft unique, targeted messages for each employee based on publicly available information.

What to do: Maintain healthy skepticism toward all digital communications, even those that appear legitimate. Follow your agency's cybersecurity protocols rigorously. Report suspicious emails, calls, and messages to your IT security team. If you use AI to generate code, ensure it undergoes the same security review as manually written code. Stay current on your agency's cybersecurity training alongside AI awareness training.

Risk 5: Over-Reliance and Loss of Human Judgment

Perhaps the most insidious risk of AI adoption is what researchers call "automation complacency" - the gradual tendency to trust automated outputs without questioning them. When an AI tool consistently produces reasonable-looking results, users naturally begin to lower their guard. They stop checking. They stop thinking critically. They start treating AI suggestions as decisions rather than as inputs to a decision.

This is particularly dangerous in government, where decisions directly affect people's lives, livelihoods, and rights. A caseworker who relies on an AI recommendation to deny a benefit application without independent review has effectively allowed an algorithm to make a government decision. A permit reviewer who accepts AI-generated analysis without verifying the underlying data has delegated a regulatory function to a machine. In both cases, the human-in-the-loop principle - the idea that a qualified person must meaningfully review and approve AI-influenced decisions - has been violated in practice, even if it exists in policy.

The human-in-the-loop principle is central to responsible AI use in government, and it is explicitly addressed in the Texas Department of Information Resources (DIR) certification standards. It means more than simply having a human click "approve" on an AI recommendation. It means the human must understand the recommendation, evaluate it critically, consider context that the AI cannot access, and be prepared to override the AI when their professional judgment says the AI is wrong.
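
One way to make the principle enforceable rather than aspirational is to build it into the workflow itself. The sketch below is hypothetical: a decision record that stores the AI recommendation only as an input and cannot be finalized without a named reviewer and a written rationale.

```python
from dataclasses import dataclass

# Hypothetical workflow record: the AI output is stored only as an input,
# and the record cannot be finalized without a named reviewer and a
# written rationale documenting the human judgment applied.
@dataclass
class BenefitDecision:
    application_id: str
    ai_recommendation: str          # an input to the decision, never the decision
    final_decision: str | None = None
    reviewer: str | None = None
    rationale: str | None = None

    def finalize(self, decision: str, reviewer: str, rationale: str) -> None:
        if not reviewer or not rationale.strip():
            raise ValueError("A named reviewer and written rationale are required.")
        self.final_decision = decision   # may agree with or override the AI
        self.reviewer = reviewer
        self.rationale = rationale

record = BenefitDecision("APP-1042", ai_recommendation="deny")
record.finalize("approve", reviewer="J. Smith",
                rationale="AI flagged a missing form; applicant filed it on time.")
```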

Over-reliance also erodes institutional expertise over time. If employees stop performing tasks because AI handles them, the organization gradually loses the human knowledge needed to evaluate whether the AI is performing correctly. This creates a dangerous dependency where the agency cannot function without the AI tool and lacks the expertise to recognize when the tool is failing.

What to do: Use AI as a tool to inform your judgment, not replace it. Maintain your professional skills and subject matter expertise. Question AI outputs, especially when they affect people's rights or access to government services. Remember that you - not the AI - are accountable for every decision you make or approve in your role as a public servant.

The Bottom Line: AI Is a Tool, Not a Decision-Maker

5 AI Risks Every Government Employee Must Know

  1. Hallucinations - AI can fabricate facts, citations, and statistics that look completely real.
  2. Bias and discrimination - AI trained on biased data produces biased outcomes that can violate civil rights laws.
  3. Data privacy breaches - Entering sensitive data into public AI tools can permanently expose confidential information.
  4. Security vulnerabilities - AI enables more sophisticated cyberattacks and can introduce flaws into government systems.
  5. Over-reliance - Deferring to AI without critical review undermines human judgment and government accountability.

These five risks are the reason Texas enacted mandatory AI awareness training for government employees. The law is not about preventing agencies from using AI. It is about ensuring that employees understand both the benefits and the dangers so they can use these tools responsibly.

AI can make government more efficient, more responsive, and more effective - but only when the people using it understand its limitations. Proper training is the difference between AI as a productivity tool and AI as a liability.

The DIR-certified AI Awareness Training from Evolve AI Institute covers all five of these risks in depth, along with practical strategies for responsible AI use in government settings. The course takes about one hour to complete and satisfies the annual training requirement under Texas Government Code Section 2054.5193.

Your agency's compliance deadline is August 31, 2026. Do not wait until the risks in this article become headlines with your agency's name attached.

Get Your Team Compliant Today

Our DIR-certified AI awareness training takes about one hour to complete and is fully self-paced. Certificates are issued instantly upon passing.

Individual and agency-wide enrollment available. Volume discounts for 50+ employees.
