Employer Policy Considerations Following the Rise of ChatGPT

CATEGORY: Blog Posts
CLIENT TYPE: Public Employers
PUBLICATION: California Public Agency Labor & Employment Blog
DATE: Jul 18, 2023

Since its November 2022 launch, ChatGPT, an artificial intelligence (AI) chatbot, has garnered significant international attention. By January 2023, ChatGPT had an estimated 100 million monthly active users. Given its extensive adoption, it is likely your agency’s employees have used or are currently using ChatGPT personally, or even in the workplace. Employers should implement policies and guidelines for any usage of ChatGPT, or other AI chatbots, in the workplace.

1. Policy Cautioning About the Limitations of ChatGPT

Employers should first understand how ChatGPT works. Despite our science-fiction fantasies, ChatGPT is not a source compiling the entirety of humanity’s knowledge. Instead, ChatGPT and similar AI chatbots use large language models to generate responses to a user’s prompt. When ChatGPT responds, it does not look up facts or run an internet search – it predicts strings of words based on patterns in the data and information fed into the program (commonly referred to as the information the AI chatbot has been “trained on”). Because ChatGPT does not rely on facts but merely predicts strings of words, users may receive an intelligent-sounding, polished, but completely factually inaccurate response.

Employers should develop guidelines and policies prohibiting any user from relying on the accuracy of any response provided by ChatGPT. As a cautionary tale, a New York attorney with thirty years of experience used ChatGPT to write his legal brief. The problem? ChatGPT fabricated the cases the attorney cited, resulting in a scathing order from the judge and widespread embarrassment for the attorney and his firm.

In addition, ChatGPT’s training data currently extends only through 2021, which means it can also provide outdated responses. Employers should prohibit employees from relying on ChatGPT for research or as a source of legal or expert advice, and should direct employees to always double (and triple) check any information generated by ChatGPT.

2. Acceptable Use Policy

Employers should also outline clear guidelines and policies regarding employees’ authorization to use ChatGPT in the workplace. In doing so, employers should consider and refer to any computer-use or other relevant technology policies already instituted.

As one option, employers could bar employees from using ChatGPT in any circumstance in the workplace. Alternatively, employers could restrict employees from using ChatGPT for only certain tasks. Employers could also limit employees’ use of ChatGPT by prohibiting them from registering with workplace login credentials (for example, their work email addresses), or by prohibiting use of ChatGPT on workplace devices. Whatever the approach, employers should implement policies that are clear, thorough, and applied consistently.

3. Educate and Train

Employers should educate and train their employees about the limitations of ChatGPT discussed above. ChatGPT responses certainly sound legitimate and may easily fool untrained employees; warning employees about these limitations helps prevent inaccuracies and mistakes. Employers should also stay current on the latest iterations of AI chatbots like ChatGPT, and on other technological advances, to ensure all employees are properly trained on any resources used in the workplace.

4. Watch for New Legislation

Further, employers should monitor legislation that may affect the use of AI in the workplace. For instance, there are efforts on the federal level by the National Telecommunications and Information Administration, as well as other agencies, to “create a cohesive and comprehensive federal government approach to AI-related risks and opportunities.” This session, California state legislators have also proposed (but have not passed) several bills related to the regulation of AI. For instance, proposed SB 313 would require any state agency using generative AI to communicate with members of the public to provide additional notice. New legislation on the use of AI is likely to emerge and may impact employers.

5. Safeguarding Data and Confidential Information

Further, employers should create policies and guidelines instructing employees about how to protect data and personal, confidential, or private information when using ChatGPT. A general rule of thumb is to train employees to treat any information provided to ChatGPT as if it will be posted on a public website.

Consider how this may arise in the workplace: some recommend using ChatGPT to help summarize meeting notes or analyze large amounts of data. However, if an employee attempts to summarize, for instance, meeting notes from an interactive process meeting to accommodate an employee with a disability, or data that includes confidential employee information such as Social Security numbers, then uploading that information to ChatGPT could violate relevant privacy and data protection laws and regulations. Employers should institute policies and guidelines prohibiting employees from uploading confidential or private information or data into ChatGPT.

As they have for decades, employers will continue to reckon with new challenges posed by technological advances. AI chatbots like ChatGPT can be a great resource for employees and employers alike. Employers should establish clear, thorough, and consistently applied policies for their employees. These policies should be adaptable and continuously evaluated. Trusted legal counsel can help employers navigate implementing new policies and guidelines so employers and employees can embrace the benefits of technological advances.
