
Published: 10/13/25

Artificial intelligence (AI) has remained a focal point for policy debates, legal disputes, and legislative action over the past year, both in North Carolina and across the United States. The pace of AI development continues to accelerate, forcing lawmakers, courts, and government agencies to consider carefully how they will regulate or use these tools. This post highlights some of the most significant AI developments from the past twelve months at the federal, state, and local levels.

President Trump Signs Executive Orders and Issues an AI Action Plan.

One of President Trump’s early actions in office was revoking President Biden’s executive order on AI (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) and signing a new executive order on AI, “Removing Barriers to American Leadership in Artificial Intelligence” (EO 14179). This initial executive order on AI stated, “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security,” and directed federal agencies and officials to develop and submit an AI action plan to the President to achieve that policy goal.

On July 23, 2025, the White House released “America’s AI Action Plan” and President Trump signed three executive orders addressing AI development, procurement, and infrastructure. The AI Action Plan states that to build and maintain American AI infrastructure, “we will continue to reject radical climate dogma and bureaucratic red tape…[s]imply put, we need to ‘Build, Baby, Build!’” Along those same lines, a core focus of the Action Plan is the elimination of “burdensome AI regulations,” including directing federal agencies that have AI-related discretionary funding programs to ensure “that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”

The President’s July 23 executive orders on AI include “Accelerating Federal Permitting of Data Center Infrastructure,” “Promoting the Export of the American AI Technology Stack,” and “Preventing Woke AI in the Federal Government.” The first two executive orders in that list focus on accelerating the development of AI data centers in the United States and the global export of American AI technologies, while the third order requires federal agency heads to only procure large language models (LLMs) that (1) are “truthful in responding to user prompts seeking factual information or analysis” and (2) are “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.” 

Federal Agencies Accelerate Use of AI.

In July, a report from the U.S. Government Accountability Office showed that AI use within federal agencies expanded dramatically from 2023 to 2024. The number of reported AI use cases from 11 selected federal agencies rose from 571 in 2023 to 1,110 in 2024, while generative AI use cases grew nearly nine-fold across these same agencies, from 32 in 2023 to 282 in 2024. And the trend continues in 2025. For example, earlier this year, the U.S. Food and Drug Administration (FDA) announced the launch of Elsa, an LLM-powered generative AI tool designed to assist FDA employees with reading, writing, and summarizing documents. In June, the U.S. State Department announced it will use a generative AI chatbot, StateChat (developed by Palantir and Microsoft), to select foreign service officers who will participate on panels that determine promotions and moves for State Department employees. And in September, the U.S. General Services Administration (GSA) announced an agreement with Elon Musk’s xAI, which will enable all federal agencies to access Grok AI models for only $0.42 per organization. For an example of how dozens of different AI use cases might exist within a single federal agency, you can explore the Department of Homeland Security’s AI Use Case Inventory.

Governor Stein Signs an Executive Order on AI.

On Sept. 2, 2025, Governor Stein signed Executive Order No. 24, “Advancing Trustworthy Artificial Intelligence That Benefits All North Carolinians.” The Executive Order establishes the North Carolina AI Leadership Council, which is tasked with advising the Governor and state agencies on AI strategy, policy, and training. The Executive Order also establishes the North Carolina AI Accelerator within the North Carolina Department of Information Technology to serve as “the State’s centralized hub for AI governance, research, partnership, development, implementation, and training.” Finally, the Executive Order requires each Cabinet agency to establish an Agency AI Oversight Team that will lead AI-related efforts for the agency, including submitting proposed AI use cases to the AI Accelerator for review and risk assessment.

State Agencies Issue AI Guidance.

The N.C. Department of Information Technology has developed the North Carolina State Government Responsible Use of Artificial Intelligence Framework to guide state agencies in their development, procurement, and use of AI systems and tools. The Framework applies to “all systems that use, or have the potential to use, AI and have the potential to impact North Carolinians’ exercise of rights, opportunities, or access to critical resources or services administered by or accessed through the state.” The Framework only applies to “state agencies,” as defined in G.S. 143B-1320(a)(17), meaning it does not apply to the legislative or judicial branches of government or the University of North Carolina.

Meanwhile, the N.C. Department of Public Instruction continues to provide updated AI guidance for North Carolina’s public schools through a document titled “Generative AI Implementation Recommendations and Considerations for PK-13 Public Schools.” This living document is routinely updated to incorporate new technological and legal developments, including addressing deepfakes, AI-specific cybersecurity threats, and federal and state compliance requirements for schools.

State Treasurer Pilots ChatGPT Enterprise for State Employees.

In March 2025, the N.C. Department of State Treasurer launched a 12-week pilot program in partnership with OpenAI, during which 36 employees implemented ChatGPT Enterprise into their regular workflows. In the final report on the pilot, participating employees reported an estimated time-savings of 30–60 minutes per day and had largely positive experiences using ChatGPT. However, the report also notes challenges with ChatGPT, including occasionally producing incorrect facts or citations, generating responses that needed tone refinement to suit particular audiences, underperforming on specialized tasks like coding and math-heavy work, and producing tables, forms, or templates with inconsistent formatting. The report highlights that employees most frequently used ChatGPT for drafting professional communications, reports, and memos; translating technical documentation into plain language; brainstorming content for policy documents and training materials; summarizing legal text, multi-page reports, and public submissions; and asking clarifying questions when researching complex or unfamiliar topics.

Deepfake Legislation at the State and Federal Level.

For the past several years, lawmakers in Congress and state legislatures across the country have struggled to reach consensus on how to address some of the potential harms caused by generative AI. One issue that has driven some bipartisan policymaking at both the federal and state level is the need to address AI-generated child sex abuse material (CSAM) and nonconsensual deepfake pornography.

Last year the General Assembly enacted Session Law 2024-37, which revised the criminal offenses related to sexual exploitation of a minor, effective December 1, 2024. The definition of “material” that applies across these statutes now includes “digital or computer-generated visual depictions or representations created, adapted, or modified by technological means, such as algorithms or artificial intelligence.” See G.S. 14-190.13(2). S.L. 2024-37 also created a new criminal offense, found in G.S. 14-190.17C: “obscene visual representation of sexual exploitation of a minor.” This new offense criminalizes distribution and possession of material that (1) depicts a minor engaging in sexual activity (as defined in G.S. 14-190.13(5)), and (2) is obscene (as defined in G.S. 14-190.13(3a)). Importantly, it is not a required element of the offense that the minor depicted actually exists, meaning this crime applies to material featuring a minor that is entirely AI-generated.

S.L. 2024-37 also addressed the nonconsensual distribution of explicit AI images of adults by modifying the disclosure of private images statute (G.S. 14-190.5A), such that the statute’s definition of “image” now includes “a realistic visual depiction created, adapted, or modified by technological means, including algorithms or artificial intelligence, such that a reasonable person would believe the image depicts an identifiable individual.”

Congress also addressed the issue of deepfake pornography and AI-generated CSAM this year. In April, Congress passed the “TAKE IT DOWN Act,” which was signed into law on May 19, 2025. The Act creates seven different criminal offenses, including use of “an interactive computer service” to “knowingly publish” an “intimate visual depiction” or a “digital forgery” of an identifiable individual. The Congressional Research Service has published a summary of the new law, including an analysis of the potential First Amendment challenges the law may face.

AI Hallucinations Persist (and Perhaps Are Getting Worse).

“Hallucinations”—inaccurate, false, or misleading statements created by generative AI models—persist. As reported by Forbes and the New York Times earlier this year, some of the recent “reasoning” large language models actually hallucinate more than previous models, with OpenAI’s o3 and o4-mini models hallucinating between 33% and 79% of the time on OpenAI’s own accuracy tests. OpenAI’s latest model, GPT-5, shows improvement on this front, but only when web browsing is enabled. According to OpenAI’s accuracy tests, GPT-5 hallucinates 47% of the time when not connected to web browsing, but only 9.6% of the time when the model has web browsing access.

Mistakes made by generative AI can create problems for both government agencies and their vendors. Last week, the AP reported that the consulting firm Deloitte is partially refunding the $290,000 it was paid by the Australian government for a report that appeared to contain multiple AI-generated errors. One researcher found at least 20 errors in Deloitte’s report, including misquoting a federal judge and making up nonexistent books and reports. Deloitte’s revised version of the report disclosed that Azure OpenAI GPT-4o was used in its creation.

The “hallucination” problem is particularly concerning when lawyers and court officials use generative AI for legal research or writing without verifying the accuracy of the finished product. This month, Bloomberg Law reported that courts have issued at least 66 opinions thus far in which an attorney or party has been reprimanded or sanctioned over the misuse of generative AI. Many of these cases have involved attorneys filing documents with the court that contain fake, nonexistent case citations, sometimes leading to Rule 11 sanctions. Moreover, two federal judges have come under scrutiny this year after publishing (and subsequently withdrawing) opinions that appeared to contain generative AI hallucinations, including factual inaccuracies, improper parties, and misstated case outcomes.

These accuracy concerns also extend to witnesses who may use generative AI in preparing their testimony. In one particularly ironic example from a Minnesota case regarding the regulation of AI deepfakes, Kohls v. Ellison, the court found that a Stanford AI misinformation specialist’s expert witness declaration cited fake, nonexistent articles. The author of the declaration admitted that GPT-4o likely hallucinated the citations. To quote Judge Provinzino’s ruling on the declaration, “One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles. Indeed, the Court would expect greater diligence from attorneys, let alone an expert in AI misinformation at one of the country’s most renowned academic institutions.”

Ethics Opinion Issued for North Carolina Lawyers.

Speaking of lawyers using AI, the North Carolina State Bar released 2024 Formal Ethics Opinion 1 last November, discussing the professional responsibilities of lawyers when using artificial intelligence in a law practice. The opinion analyzes how using AI implicates attorneys’ duties of competency, confidentiality, and client communication under the North Carolina Rules of Professional Conduct. Among other things, the opinion cautions lawyers to “avoid inputting client-specific information into publicly available AI resources” due to some of the data security and privacy issues with generative AI platforms. Which leads us to….

Ongoing Data Security and Privacy Issues with Generative AI.

As highlighted by the ethics opinion described above, the default setting of many publicly available generative AI tools (e.g., ChatGPT) is to train the underlying large language model on the inputs inserted or uploaded to the tool by individual users. I’ve warned in a prior blog post that government officials and employees should not insert confidential information into publicly available generative AI tools (and this is now reflected in NCDIT’s guidance for state agencies as well).

Beyond that fundamental risk, other unique data security concerns continue to emerge, even for generative AI users who have paid accounts or enterprise-level tools. Journalists reported this August that private details from thousands of ChatGPT conversations were “visible to millions” after appearing in Google search results, due to an option that allowed individual ChatGPT users to make a chat discoverable when generating a link to share it. OpenAI removed this feature after backlash, describing it as a “short-lived experiment.”

Another potential data security risk emerges when AI tools have access to private data and the ability to communicate that data externally. For example, a few weeks ago Anthropic launched a new feature for its Claude AI assistant that allows users to generate Excel spreadsheets, PowerPoint presentations, Word documents, and PDF files within the context of a chat with Claude. Anthropic’s own support guidance warns users that enabling this file-creation feature means that “Claude can be tricked into sending information from its context … to malicious third parties.” Because the file-creation feature gives Claude internet access, Anthropic warns that “it is possible for a bad actor to inconspicuously add instructions via external files or websites that trick Claude” into downloading and running untrusted code for malicious purposes or leaking sensitive data. Agentic AI web browsers also remain particularly vulnerable to prompt injection attacks.

Potential Wiretap Law Violations?

In a blog post on generative AI policies last year, I warned that local governments should be careful when using some AI meeting transcription and summarization tools in light of the potential to violate North Carolina’s wiretapping law (G.S. 15A‑287). This August, a putative class-action lawsuit filed in federal court in California alleges that Otter.ai—a popular automated notetaking tool—“deceptively and surreptitiously” records private conversations in virtual meetings in violation of state and federal wiretap laws. According to the complaint filed in the lawsuit, “if the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host.”

Local AI Data Center Debates.

Generative AI needs a massive amount of computing power, which requires the construction of data centers that use significant amounts of electricity and water. According to a recent Congressional Research Service report on data centers, “some projections show that data center energy consumption could double or triple by 2028, accounting for up to 12% of U.S. electricity use.” While these data centers can bring significant economic investment to communities, their electricity consumption may also be raising energy costs for local residents. According to a recent Bloomberg data analysis on data centers and power bills in the United States, “electricity now costs as much as 267% more for a single month than it did five years ago in areas located near significant data center activity.”

Like other communities across the United States, local governments in North Carolina are wrestling with tough decisions and policy debates regarding the construction of data centers. For example, a proposal to build a 50-acre data center in Edgecombe County recently led to a packed town council meeting in Tarboro where residents voiced their concerns about the data center’s environmental impacts and proximity to people’s homes (and the council ultimately voted to deny the special use permit for the center’s development). Likewise, a proposal to build a data center on a 190-acre property near Apex has led to public pushback from Wake County residents concerned about increased energy costs, noise pollution, and light pollution. And in Person County, residents are raising questions and concerns about whether Microsoft intends to build a data center on a 1,350-acre property it purchased there last year. Earlier this year, a bill was introduced in the General Assembly (H1002) that would prohibit public utilities from passing along to ratepayers costs associated with increased fuel requirements and grid upgrades if those costs were attributable to electricity demands from commercial data centers, but the bill did not advance beyond the House.

Multiple Lawsuits Alleging Harm to Minors.

According to a recent study from Common Sense Media, 72% of teenagers say they have used an AI chatbot “companion” at least once, while 52% of teens are “regular users” of AI companions. The potential harms from those interactions with generative AI are beginning to come to light. Over the past 12 months, multiple parents across the country have filed lawsuits alleging that generative AI chatbots encouraged their teenage children toward suicide. This includes a lawsuit against OpenAI filed by the parents of 16-year-old Adam Raine, with evidence that ChatGPT discouraged him from seeking help from his parents after he expressed suicidal thoughts, gave him instructions on suicide methods, and even offered to write his suicide note for him. Another lawsuit was filed in Florida by the mother of Sewell Setzer III, a teenager who died by suicide at age 14 after extensive conversations with a Character.AI chatbot. Setzer’s mother testified in a recent Senate hearing that the chatbot engaged in months of sexual roleplay with her son and falsely claimed to be a licensed psychotherapist. And in September, the Social Media Victims Law Center filed lawsuits on behalf of three different minors, each of whom allegedly experienced sexual abuse or died by suicide as a result of interactions with Character.AI.

It appears that some of these risks were known to the companies that created these tools. As Reuters reported in August, a leaked internal Meta document discussing standards for the company’s chatbots on Facebook, WhatsApp, and Instagram stated that it was permissible for the chatbots to engage in flirtatious conversations with children. Meta’s policy document stated, for example, “It is acceptable to engage a child in conversations that are romantic or sensual” (and provided examples of what would be acceptable romantic or sensual conversations with children). This came after an article from the Wall Street Journal reporting that Meta’s chatbots would engage in sexually explicit roleplay conversations with teenagers.

What’s Next?

As in other states across the country, I suspect we will see more attempts to regulate various aspects of AI development or usage in North Carolina. In 2025 alone, multiple bills were introduced in the General Assembly that addressed various AI-related issues, including deepfakes (H375), data privacy (H462, S514), algorithmic “rent fixing” (H970), use of AI algorithms in healthcare insurance decision-making (S287, S315, S316), electricity demands of data centers (H638, H1002), standards for AI instruction in schools (S640), cryptographic authentication standards for digital content (S738), AI robocalls (H936), studying AI and the workforce (S746), AI chatbots (S514), safety and security requirements for AI developers (S735), AI research hubs (H1003), and online child safety (S722). None of these bills were ultimately enacted, but it seems likely we will see more efforts at legislative action around AI issues over the next few years as the technology continues to evolve.

This blog post is published and posted online by the School of Government for educational purposes. For more information, visit the School’s website at www.sog.unc.edu.
