The Adoption of Artificial Intelligence Tools and Technology and the Duty to Bargain
The Fourth Industrial Revolution will be an age of automation and analytics powered by Artificial Intelligence (“AI”). It promises futuristic realities with deep-level analytics, next-level automation, and omnipresent algorithmic workplace monitoring.
As AI tools and technology become more affordable and widely available, public agencies, like other employers, will likely adopt AI in order to operate in a more efficient and cost-effective manner. As public agency use of AI increases, the technology will inevitably affect employee working conditions and the way employees perform their jobs and, potentially, in the longer term, the nature of the services public employees provide and the size of public agency workforces.
These considerations should both interest and concern management. As public agencies begin to consider how to leverage AI, management must consider how to do so to the benefit of the public while accounting for and complying with their statutory collective bargaining obligations.
This article addresses situations in which public agencies may use AI and how the adoption and implementation of such tools and technology will affect agencies’ duty to bargain over changes to wages, hours, and other terms and conditions of employment.
AI WILL CHANGE THE WAY WE WORK – GET USED TO IT!
The role of automation and the use of analytics in the workplace is not new. However, the 2022 release of OpenAI’s ChatGPT ushered in a new era of AI, with a more powerful product and new possibilities and use cases. While the AI tools available today are even more powerful than the original ChatGPT, the adoption of such technology and its application by public agencies will likely be slowed, but not necessarily stopped, by applicable statutory obligations that require negotiation over certain effects or impacts of management decisions, if not the decisions themselves.
EMPLOYEE CONCERNS ABOUT AI
Despite the barriers to the immediate adoption of AI by public agencies, many public employees are reasonably concerned about the introduction of AI into the workplace.
Public employees, like employees in the private sector, fear that AI will result in mass layoffs and workforce reductions or job replacement, re-categorization, or re-assignment. Public employees are also concerned about less fundamental, but nevertheless significant, changes to working conditions precipitated by AI, such as mandatory training on and use of AI and AI-powered surveillance.
As witnessed in recent high-profile labor disputes with longshoremen on the East Coast, and in earlier strikes in Hollywood by screen actors and writers, the use of automation and AI, and workers’ demands for protections against such technology, were key issues in bargaining.
While fears about widespread layoffs and workforce reductions in the public sector are likely overstated at present and not a certainty in the immediate future, one thing that is certain is that change is coming to workplaces, including public agency workplaces. A recent study predicted that, while only nine percent (9%) of jobs presently face a high risk of reduction or replacement due to automation or AI, approximately sixty percent (60%) of jobs involve duties, functions or tasks that could be automated or performed using AI tools or technology.
LEGISLATIVE AND EXECUTIVE RESPONSE TO PUBLIC AGENCY USE OF AI
In California, the Legislative and Executive branches are beginning to grapple with the use of AI in workplaces, including public agency workplaces.
The recently concluded legislative term included a number of AI-related bills, including several that, if enacted, would have affected public agencies and their use of AI. One bill, Senate Bill 1220, proposed to protect public employee jobs by prohibiting public agencies from using AI tools and technologies to automate functions and tasks performed by employees in call centers.
Governor Newsom’s veto message regarding this bill was instructive. In the message, he stated the following:
Technology can and should enhance the experience of the workforce – by making work more efficient and pushing us to attain new heights of achievement and innovation. At the same time, we must consider appropriate guardrails and control the risks posed by this technology.
Governor Newsom then explained that he signed Executive Order N-12-23 to develop responsible AI deployment in the state and that the state would be issuing forthcoming criteria to evaluate the impact of AI on public employees.
As a result of the Governor’s veto, public agencies may continue to deploy AI tools and technology to perform the duties, functions and tasks of public call center workers. However, this is the beginning of the story, not the end, and similar, or even more expansive, legislation designed to protect public employee jobs and regulate the use of AI by public agencies is likely to be introduced in the future.
RESPONSE BY MANAGEMENT AND THE DUTY TO MEET AND CONFER
While public agencies have the right and obligation to direct their workforces, this right is not without limitation.
Public agencies have an obligation to meet and confer in good faith with employee organizations regarding changes to wages, hours, and other terms and conditions of employment that affect the employees that they represent.
When it comes to decisions to adopt new AI tools and technology, public agencies must carefully consider whether the decision affects a matter within the scope of representation. Agencies must refrain from making any change to the terms or conditions of represented employees’ employment without notifying the employee organization of the change and negotiating the change. (County of Santa Clara (2022) PERB Decision No. 2820-M.) Agencies that unilaterally make such changes risk the employee organization filing an unfair practice charge and the Public Employment Relations Board (“PERB”) undoing the change, potentially at great expense to the agency.
As AI tools and technology become more widespread and widely available, public agencies that adopt such tools and technology should be mindful of their duty to meet and confer with employee organizations before implementing any decisions that affect or might affect the terms and conditions of employees’ employment.
CONSIDERATIONS BEFORE DECIDING TO IMPLEMENT AI
Once a public agency makes a decision to implement an AI tool that affects a matter within the scope of representation, the agency must provide the employee organization with notice and an opportunity to meet and confer regarding the changes to matters within the scope of representation. (Gov. Code §§ 3501, 3505.)
Adopting and implementing AI tools will likely require employers to meet and confer on a wide range of subjects, including, but not limited to, where the agency intends to impose new training requirements related to AI (See City of Sacramento (2020) PERB Decision No. 2745-M, pp. 17-20) or where the agency intends to use AI to monitor the workplace and worker conduct and productivity. (See Rio Hondo Community College District (2013) PERB Decision No. 2313, pp. 14-16.)
Where the underlying decision is a non-negotiable management right or prerogative, a public agency may seek clarification from the employee organization as to what exactly it proposes to bargain, in order to determine whether the identified subject is negotiable.
In Compton Community College District (1989) PERB Decision No. 720, pp. 14-15, PERB stressed that an employer may implement a nonnegotiable management decision prior to completing effects bargaining in the following circumstances:
- The implementation date is based on an immutable deadline “or an important managerial interest, such that a delay in implementation beyond the date chosen would effectively undermine the employer’s right to make the nonnegotiable decision”;
- The employer provides sufficient notice of the decision and advance notice of the implementation date “to allow for meaningful negotiations prior to implementation”; and
- The employer negotiates in good faith prior to implementation and continues to negotiate in good faith after implementation as to those subjects not necessarily resolved by virtue of the implementation.
Public agencies should be prepared to provide notice of decisions involving the adoption and implementation of requirements related to AI or technology that relies on AI, and be prepared to engage in meaningful negotiations on such decisions, or on the effects or impacts of those decisions.
FUTURE CHANGES TO WORK AND WORKFORCES
Looking forward, public agencies should consider the more profound changes that AI may bring to public agencies, including changes to the services performed by public employees and to the size of public agency workforces.
The adoption and use of AI tools and technology may, at some point, result in layoffs, workforce reductions and restructurings in public employee job classifications that perform functions and tasks that lend themselves to automation or performance by AI, such as call center workers.
Such significant changes will undoubtedly cause employee organizations to request to negotiate over public agency decisions to adopt tools or technologies with such disruptive capacity or, at a minimum, over the effects or impacts of the decisions to do so.
Moving forward and into this new age, it is more important than ever for public agencies to remember that the old rules still apply and that they must discharge their statutory obligations as they relate to these new and powerful AI tools and technology.
This blog was authored by LCW Partner Jack Hughes and Graduate Law Clerk C. Michael Humbles.