Developing Guidelines for the Use of Generative Artificial Intelligence in Local Government
Published: 03/14/24
Author Name: Kristi Nickodem
Over the past few months, I’ve talked to a number of local government employees who are beginning to experiment with generative artificial intelligence (AI) tools like ChatGPT (among many others) to do their work. Some local governments in North Carolina are also beginning to develop guidelines to safeguard against some of the risks associated with employees using these tools. This post recommends some best practices and key considerations for local governments involved in developing guidance or policies around employee usage of generative AI.
For a basic overview of the potential risks posed by generative AI tools, please read our previous blog post on generative AI and local government. The remainder of this post will refer to guidelines or guidance documents that could be developed by local governments on generative AI usage, though the same considerations would apply if a local government decided to adopt a formal policy on the topic.
1. Define the scope. Local governments must begin by deciding what type(s) of AI they want to address in their guidelines. “Artificial intelligence” is an incredibly broad term that covers a wide array of different technologies. It could theoretically cover everything from the GPS system in a county vehicle to an algorithm-based candidate screening tool used by a municipality. Generative AI—technology that creates new content resembling human-created content (including text, images, voices, and videos) in response to user prompts—poses some new and unique risks. Local governments should consider creating some guidelines specifically concerning the use of generative AI tools, even if those guidelines are ultimately incorporated into a broader policy or guidance document regarding technology use by local government employees.
2. Protect confidential and sensitive information. Some local government employees frequently deal with information that is confidential under state or federal law, or sensitive information that is shielded from disclosure under North Carolina’s public records law. Guidelines on generative AI usage should warn employees not to enter confidential or sensitive information into any publicly accessible generative AI tools and provide examples of such information. Put another way, local government employees should avoid entering any information into a generative AI tool that they would not disclose to a member of the public in response to a public records request. Ideally, guidelines should provide specific examples of confidential information that might be handled by local government employees, such as county and municipal personnel information (G.S. 153A-98; G.S. 160A-168); protected health information held by HIPAA-covered entities; substance use disorder patient information (42 C.F.R. Part 2); social services information (G.S. 108A-80); and communicable disease information (G.S. 130A-143), among others.
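Written guidance can also be paired with technical safeguards. As a purely illustrative sketch (the patterns, categories, and function name below are my own hypothetical examples, not drawn from any jurisdiction’s policy), an IT department could screen text for obvious markers of confidential information before it reaches a publicly accessible tool:

```python
import re

# Hypothetical screening patterns an IT department might check before text
# is sent to a publicly accessible generative AI tool. These few patterns
# are illustrative only; real screening would need much broader coverage
# and would still miss context-dependent confidential information.
FLAGGED_PATTERNS = {
    "a Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "a possible medical record number": re.compile(r"\bMRN[:#]?\s*\d{5,}\b", re.IGNORECASE),
    "a personnel file reference": re.compile(r"\bpersonnel\s+file\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return warnings for content in a prompt that may be confidential."""
    return [
        f"Prompt may contain {label}; review before submitting."
        for label, pattern in FLAGGED_PATTERNS.items()
        if pattern.search(prompt)
    ]

for warning in screen_prompt("Summarize the personnel file for 123-45-6789."):
    print(warning)
```

A screen like this will only catch obvious patterns; it supplements employee judgment rather than replacing it.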
3. Require fact-checking. Guidelines on generative AI usage should require local government employees to fact-check outputs from text generation tools (e.g., ChatGPT, Gemini, Copilot). Text generation tools are notoriously prone to “hallucinations,” meaning they will sometimes confidently assert facts that are not true or even make up fake citations or sources to support those false statements. Any text outputs created by generative AI tools should be reviewed by a human for accuracy before being used in internal or external communications. As the State of Ohio succinctly warns in its AI policy, “AI outputs shall not be assumed to be truthful, credible, or accurate.”
4. Require transparency around recording and transcription. Some generative AI tools, such as Otter.ai’s OtterPilot, allow an individual to send an AI chatbot to “attend” an online meeting. The chatbot can record audio from the meeting and generate a transcript or summary of the meeting—all without the individual who “sent” the chatbot being present. This type of technology poses at least two distinct risks. First, an employee using a chatbot or other AI tool to record a meeting that the employee is not attending could potentially run afoul of North Carolina’s wiretapping law (G.S. 15A‑287) by “intercepting” the communications in the meeting, unless an actual participant in the meeting agrees to the recording. There are meaningful distinctions in what constitutes unlawful interception of “oral” communications vs. “electronic” communications (per the definitions of those terms in G.S. 15A-286), but that’s a topic for a different day. Second, an AI tool that records or transcribes a meeting may have created a public record that is subject to public disclosure under North Carolina’s public records law (G.S. Chapter 132)—even if the recording was of an internal staff meeting that would not otherwise be subject to North Carolina’s open meetings law. Local governments may want to consider using language similar to a provision in San Francisco’s generative AI guidelines, which warns employees not to “conceal use of Generative AI during interaction with colleagues or the public, such as tools that may be listening and transcribing the conversation or tools that provide simultaneous translation.”
5. Address public records issues. Under North Carolina’s public records law, any record made or received “in connection with the transaction of public business” is a public record subject to disclosure upon request, unless an exception applies (G.S. 132-1). Accordingly, just like emails and other types of digital records, records created using generative AI to conduct public duties (including both prompts and outputs) may potentially be subject to public records access and retention requirements. If local governments allow employees to use publicly available generative AI tools for some work-related duties, they may want to consider requiring employees to create separate accounts on these tools for work-related purposes, to ensure that personal and public records are not commingled in a way that will make public records retention or retrieval challenging down the road. Prior to providing guidelines to employees, local governments may want to consult with records management analysts at the State Archives of North Carolina regarding retention requirements and strategies.
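For jurisdictions that do permit such use, one practical option is to capture each work-related prompt and output in an append-only log so the records remain retrievable if a request or retention question arises later. The sketch below is one hypothetical way to do that; the file format and field names are my own assumptions, not a schema endorsed by the State Archives:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical append-only log of generative AI interactions kept for
# public records purposes. The file location and field names are my own
# illustrative choices, not a prescribed retention schema.
LOG_PATH = Path("genai_records_log.jsonl")

def log_interaction(employee_id: str, tool: str, prompt: str, output: str) -> None:
    """Append one prompt/output pair, with a timestamp, to the retention log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("E1001", "ChatGPT", "Draft talking points on the new transit route.",
                "DRAFT: The new route expands service to ...")
```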
6. Clarify which use cases are prohibited. There may be certain uses of generative AI that a local government wants to prohibit altogether. For example:
- It may be unwise to use generative AI to draft sensitive or high-profile external-facing communications. Consider the cautionary tale of Vanderbilt University, which faced backlash after using ChatGPT to draft a statement consoling its campus community in the wake of a mass shooting.
- Likewise, a local government might consider restricting the use of generative AI tools to make employment decisions, including using these tools in applicant screening, performance evaluations, and/or disciplinary actions. EEOC guidance has explained how AI screening tools could potentially run afoul of the Americans with Disabilities Act, while a recent Bloomberg experiment demonstrated how ChatGPT may reflect racial bias in screening candidate resumes.
- A local government may also want to prohibit employees from using generative AI tools in a way that impersonates a real person. This could include prohibiting the production of “deepfake” videos, audio recordings, or photos of real public officials, public employees, or members of the public. Some local governments may also want to go further by prohibiting employees from generating a fake image, audio recording, or video purporting to be a real person at all, even if not a specific one (this appears to be San Francisco’s approach in its generative AI guidelines).
7. Suggest permitted use cases. Just as local governments should outline prohibited use cases, they should also consider recommending examples of allowable use cases for generative AI. For example, New Jersey’s generative AI policy for state employees describes six broad potential use cases for generative AI, while also providing “dos and don’ts” for each example. Alternatively, a local government might decide that acceptable use cases will be determined on a department-by-department basis, instead of in a more broadly applicable guidance document.
8. Review model guidelines, but don’t assume they fit your jurisdiction or employees. A number of jurisdictions across the country have developed guidelines or policies concerning generative AI over the past year. Cities with publicly posted guidelines include Boston, Tempe, San Jose, San Francisco, Seattle, and Long Beach, among others. States with publicly posted policies include Ohio, Kansas, New Jersey, Utah, Vermont, and Washington. For those involved in drafting guidelines or policies, it may be helpful to compare and contrast how these jurisdictions have addressed certain issues in different ways. Keep in mind, however, that these documents do not incorporate or reflect North Carolina law. They also may contain provisions or approaches that simply are not workable or desirable for employees in your particular city or county.
9. Be thoughtful about what disclosure and citation requirements are needed. Many of the generative AI policies and guidance documents I’ve referenced above require some sort of disclosure from employees every time they use a generative AI tool to perform their work. Some policies require employees to log their usage of generative AI tools, while others also require an employee to place a statement on their work product identifying the generative AI tool used to create it and how the tool was used. These types of requirements are undoubtedly well-intentioned, but consider how they may create implementation problems in practice if not carefully crafted. Will employees have to put such statements on internal communications, external communications, or both? What if an employee uses generative AI for initial idea generation or a first draft, but does a substantial amount of work to subsequently build on that idea or refine that first draft? If an employee goes through dozens of iterations of various prompts into a generative AI tool to create a final product, should every iteration be reported to a supervisor? Vague requirements around disclosure or citation will inevitably lead to many questions around implementation. Consider that local governments generally do not require employees to report or cite every time they use other types of technology to do their daily work. If local governments decide to treat generative AI tools differently, they should set clear parameters for when disclosure is required and to whom. Another approach might be to have department heads or supervisors approve specific types of use cases, rather than requiring a task-by-task or item-by-item report from employees.
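If a local government does adopt a disclosure requirement, writing out the exact fields of a disclosure entry is one way to force those parameters to be decided up front. The structure below is hypothetical; the fields are my own illustration of the questions a policy would need to answer, not a format drawn from any of the policies discussed above:

```python
from dataclasses import dataclass
from enum import Enum

# A hypothetical disclosure entry. The fields are my own illustration of
# the parameters a policy would need to pin down; they are not drawn from
# any jurisdiction's actual requirements.
class Audience(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

@dataclass
class DisclosureEntry:
    tool: str                    # e.g., "ChatGPT"
    use: str                     # e.g., "first draft of a newsletter article"
    audience: Audience           # where the output ultimately appeared
    substantially_revised: bool  # whether a human substantially reworked the output

entry = DisclosureEntry(
    tool="ChatGPT",
    use="first draft of a newsletter article",
    audience=Audience.EXTERNAL,
    substantially_revised=True,
)
print(entry)
```

Whatever fields are chosen, the point is that employees can tell at a glance when an entry is required and what it must contain.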
10. Use plain language and emphasize key takeaways. As a lawyer, I’ll readily admit that lawyers often overcomplicate language in policies and guidance documents. Complex language can sometimes be helpful or even necessary for legal purposes, but it often defeats the purpose of ensuring employees actually understand how to comply with a policy. Guidelines for local government employees on generative AI should be as straightforward and easy to understand as possible. Some jurisdictions across the country (see examples above) have wisely paired their guidelines or policies with a “Top 3” or “Top 8” list of things employees need to remember when using generative AI.
11. Involve multiple stakeholders. Issues around generative AI may affect a broad range of officials and employees in county and municipal government. The process of drafting guidelines or a policy should ideally involve input from people with expertise across different disciplines, including legal counsel and IT staff. A county or municipality may want to begin by asking department heads and other supervisors how generative AI tools are being used by their employees (if at all). A communications employee using generative AI to create pictures for a county website will likely use different tools and pose different potential risks than a human resources employee using generative AI to write performance evaluations for county employees. Likewise, a municipal IT employee using generative AI to write code may be using different tools and posing different potential risks than a municipal attorney using generative AI for legal research. Desired use cases and associated risks will vary widely from department to department and employee to employee. Similarly, using publicly available tools may pose different risks than using customized solutions from a vendor (depending on the type of vetting involved in IT procurement) or those built in-house. Understanding these differences from a legal, technical, and practical perspective is essential to drafting any guidance or policy around generative AI.
12. Train employees. Any local government guidance document or policy on generative AI should ideally be accompanied by training for employees. Some local government employees will not understand how the basic elements of generative AI tools work, which is an important part of understanding their limitations and risks. For example, an employee may be unaware that typing a question into a generative AI tool like ChatGPT functions very differently than typing the same question into a search engine like Google. Similarly, some employees may be unaware that their prompts to some generative AI tools may be used as training data by the companies that own those tools. Training might also aim to increase awareness of the recent proliferation of AI-enhanced scams that could target local government employees, such as phone scams using AI-generated voice clones, sophisticated email phishing attacks, and the use of deepfake videos to commit fraud.
Is your local government working on developing guidelines around generative AI usage? If so, I’d love to hear about it.