The recent explosion of generative AI tools (e.g., ChatGPT, Bard, DALL-E 2) is quickly impacting every sector of society, and local government is no exception. These tools offer benefits for the government workforce—including the potential to enhance workplace efficiency and creativity—but they also carry legal and ethical risks. This post discusses some risks and rewards of using generative AI in the local government context and recommends some best practices for the use of these tools.
What is generative AI?
Generative AI is a form of artificial intelligence that can generate various forms of new content, such as audio, text, programming code, video, and images. Generative AI uses machine learning models, such as Generative Adversarial Networks (GANs), to learn from existing datasets and then create new data modeled after the patterns, structures, and statistics of those datasets. Early precursors to generative AI have existed since the 1960s, but the emergence of new commercially available tools and technologies has rapidly accelerated the popularity and usage of generative AI over the past five years. Tools incorporating artificial intelligence have already been used by local governments in a variety of ways over the past decade, including chatbots with Natural Language Processing (NLP), predictive analytics tools, smart infrastructure, video surveillance with facial recognition, and virtual assistants. Some of these AI tools are now starting to integrate generative AI into their existing platforms to improve their outputs.
How does it work?
While the processes used by generative AI tools are based on highly complex technical algorithms, the following non-technical example may help to explain how generative AI works. Imagine an artist who sits in a room full of the world’s most renowned paintings. The artist studies each painting deeply for hours a day, developing an intimate understanding of linework, brush strokes, color theory, and other key elements found in those renowned pieces of art. Then the artist goes into a studio and applies all of that knowledge to a blank canvas, creating a painting that resembles the pieces the artist studied, but that is not an exact replica. The new painting is original and unique, but leverages and mimics the same underlying structure, techniques, and concepts gathered from the existing works of art. The artist has taken data inputs from an existing dataset (the renowned works of art) and used those inputs to generate a new, unique output on the blank canvas. Generative AI uses the same basic concept of deep inspection and learning from existing data sources to create new content modeled after the original data.
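For technically inclined readers, the following deliberately tiny Python sketch makes the learn-then-generate idea concrete. It is a toy character-level model, not a GAN or a large language model, and the sample text and function names are invented for illustration, but it shows the same basic loop: study existing data, record its patterns, then produce new output shaped by those patterns.

```python
import random
from collections import defaultdict

def learn_patterns(text, order=2):
    """Toy 'training' step: record which character tends to follow
    each short run of characters in the existing text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=60, order=2):
    """Toy 'generation' step: produce new text that mimics the
    statistics of the training text without copying it verbatim."""
    output = random.choice(list(model.keys()))
    for _ in range(length):
        next_chars = model.get(output[-order:])
        if not next_chars:
            break
        output += random.choice(next_chars)
    return output

sample = "the council approved the budget and the board approved the plan"
print(generate(learn_patterns(sample)))
```

Real generative AI tools operate at a vastly larger scale with far more sophisticated models, but the underlying idea of learning from existing data to produce new, similar data is the same.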
What is the benefit of using generative AI tools?
These tools can serve as a quick way to generate ideas and spark creativity when getting started on a task that requires mental effort. For example, generative AI tools can draft a plan of action for a project, build a meeting agenda, write simple memos or emails, draft a speech, summarize complex information, write and fix code, and create slide decks for presentations.
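For readers curious about what this looks like in practice, here is a minimal sketch that asks OpenAI's API to draft a meeting agenda. It assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set; the model name is a placeholder, and the exact client interface may vary with the library version you have installed.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; substitute a model available to your account
    messages=[{
        "role": "user",
        "content": (
            "Draft a 30-minute agenda for a town council work session "
            "on next year's stormwater budget."
        ),
    }],
)

# Treat the output as a first draft for a human to review and edit.
print(response.choices[0].message.content)
```

As with any generative AI output, the result is a starting point: a human should review, verify, and revise it before it is used.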
What are some potential risks of using generative AI tools?
- Text generator AI tools may provide inaccurate or false information. AI tools that generate text, like ChatGPT, sometimes give false or inaccurate answers to questions, including making up facts. ChatGPT’s own FAQs acknowledge that the tool “will occasionally make up facts or ‘hallucinate’ outputs.” When asked if a particular output is true or accurate, these tools will sometimes even double down on the false information by making up false sources for the information or confidently asserting that it is accurate. This is a lesson that two New York attorneys learned the hard way when they were sanctioned for submitting a brief to the court that contained fake citations from nonexistent cases (all of which were made up by ChatGPT).
- Text generator AI tools may provide defamatory outputs. The potential for generative AI tools to make up facts extends to making up false facts about real individuals, including false, libelous statements. OpenAI (the creator of ChatGPT) is currently facing a defamation lawsuit in Georgia state court from radio host Mark Walters. The lawsuit alleges that ChatGPT falsely stated to a third party that Walters was the subject of a legal complaint that accused him of defrauding and embezzling funds. Likewise, the Washington Post reported on a case in which ChatGPT made up a fake Washington Post story that accused a real law professor of sexually harassing a student.
- Generative AI tools may produce outputs that are biased or promote harmful stereotypes. In early 2023, OpenAI’s CEO acknowledged that ChatGPT “has shortcomings around bias.” Bloomberg recently reported on the potential for racial stereotypes to be perpetuated by AI image generation tools like Stable Diffusion. Bias has long been a problem in artificial intelligence across a variety of different domains. For example:
- In 2018, Reuters reported that Amazon abandoned an experimental AI recruiting tool after finding that it showed substantial bias against women (because it had been trained primarily on resumes submitted by men).
- In 2019, the academic journal Science published peer-reviewed research showing that a commercial algorithm used by health care systems to guide health decisions falsely concluded that Black patients were healthier than equally sick white patients, and thus routinely predicted that Black patients needed a lower level of care.
- In 2022, research conducted by Carnegie Mellon University showed that Allegheny County, Pennsylvania’s child welfare screening algorithm had a pattern of flagging a disproportionate number of Black children for investigation when compared with white children. In January 2023, the AP reported that the United States Justice Department had been investigating the AI screening tool following concerns that it unfairly targeted parents with disabilities.
- It’s unclear who owns the output of generative AI tools. When an image generation tool like Stable Diffusion or DALL-E 2 creates a new image, who owns the copyright to that image? Multiple lawsuits have been filed against companies that own AI image generation tools, claiming that these companies improperly used copyrighted images to “train” their AI tools without the owners’ permission. Likewise, comedian Sarah Silverman and other authors recently sued Meta and OpenAI for copyright infringement, claiming their books were used without permission to train generative AI tools like ChatGPT. The legal risks around using the outputs of these tools remain unsettled.
- Confidential and sensitive data entered into generative AI tools may be disclosed to third parties. When you type a prompt into a consumer generative AI tool like ChatGPT, you are disclosing the information in that prompt to the company that owns the generative AI tool. For example, OpenAI states that it will use consumer prompts in ChatGPT and DALL-E to improve its services and will “share [user] content with a select group of trusted service providers.” OpenAI also explicitly warns users not to share sensitive information in conversations with ChatGPT, because it is “not able to delete specific prompts from your history.”
What are some best practices for using generative AI in the local government workplace?
- Create and implement a generative AI policy. Due to the popularity of some generative AI tools, it’s likely that a number of employees in many local government agencies are already using these tools to help perform their work. Local governments should be proactive in crafting policies around the use of these tools, including addressing any limitations on how employees may use them. For those looking for a starting point in developing a policy, it may be helpful to look to this generative AI policy recently developed by the City of Boston.
- Use generative AI to help you think, not think for you. Generative AI tools can be an excellent way to “jumpstart” ideas for a research project, written document, creative presentation, or other work product, but these tools are not a substitute for human thought, discretion, judgment, and innovation.
- Check AI outputs for factual inaccuracies. As described above, generative AI tools sometimes provide inaccurate statements of fact and, if prompted, will even make up sources for those inaccurate statements. Some AI tools (such as ChatGPT) are also built on training data with a fixed cutoff date, meaning they do not continually scrape the internet or other sources for current, updated information. Employees and officials should not rely on a generative AI tool’s output as the final or authoritative source of knowledge for a particular fact or assertion.
- Check AI outputs for inadvertent bias. As described earlier, generative AI tools are built by using data sets that reflect human biases. While an output from a generative AI tool may appear neutral and objective, that output could be generated based on data sets that reflect bias, discrimination, or harmful stereotypes.
- Don’t type or copy confidential or sensitive information into third-party generative AI tools. When you type or copy confidential information into a third-party generative AI platform (ChatGPT, Google Bard, etc.), you are disclosing that information to a third party, which may violate state or federal confidentiality laws. Depending on the terms and conditions of the platform, the company that owns the generative AI tool may own and be able to use any data you input into the tool. Employees who frequently work with information that is confidential under federal or state law—such as personnel information, social services information, substance use disorder treatment information, or health information—should be especially vigilant about what information they type or copy into generative AI tools. For example, third-party generative AI tools should not be used to analyze or summarize reports or documents that contain confidential information. (A simple redaction sketch appears after this list.)
- Don’t delegate final decision-making authority to generative AI tools. Government officials and employees make important decisions about the fundamental rights of citizens, including their rights to life, liberty, and property. These decisions require substantial human involvement and should not be outsourced to generative AI tools.
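One partial technical safeguard for the confidentiality concerns above is to scrub obvious identifiers from text before it ever leaves your machine. The Python sketch below uses a few simple regular expressions to mask email addresses, phone numbers, and Social Security numbers. The patterns are illustrative assumptions, will miss many forms of confidential information, and are a supplement to, not a substitute for, the rule against pasting confidential data into third-party tools.

```python
import re

# Illustrative patterns only; confidential data takes many more forms
# (names, addresses, case numbers, medical details, and so on).
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scrub(text):
    """Replace obviously identifying strings with placeholders before
    any text is typed or pasted into a third-party generative AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Reach J. Smith at jsmith@example.com or 919-555-1234 (SSN 123-45-6789)."
print(scrub(note))
# Reach J. Smith at [EMAIL REDACTED] or [PHONE REDACTED] (SSN [SSN REDACTED]).
```

Even with scrubbing, the safest practice remains not to submit documents containing confidential information to these tools at all.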
What’s next?
Generative AI tools are quickly evolving and improving, and their capabilities will continue to expand in new and powerful ways. At the same time, the legal landscape around these tools is rapidly evolving and filled with potential pitfalls. Local government officials should track developments around these tools carefully and adapt their policies and practices accordingly.
Is your local government using generative AI tools? Have you developed a policy around the use of generative AI? If so, we would love to hear about it.