

Generative AI in the workplace

27 July 2023

The latest version of OpenAI’s ChatGPT text generator has prompted widespread speculation about the impact of AI tools on the world of work.

In this article, we examine some scenarios in which employees might use generative AI tools. These and other possible uses could have significant legal ramifications for an organisation.

We would advise businesses to consider possible risks carefully and set out guidelines or policies for staff on the use of ChatGPT and similar text generation tools at work. These may take the form of a total prohibition or, alternatively, a set of conditions for the acceptable use of such tools, along with information about risks.

Employers’ liability

An employer may be vicariously liable for an employee’s use of ChatGPT and similar AI tools if that use is deemed to have taken place in the course of employment. For example, if an employee generated text using ChatGPT that resulted in a professional negligence or defamation claim, the employer could potentially be liable.

Employers should carefully consider the level of risk involved in mandating the use of ChatGPT (or a similar tool) for any given task. Depending on the circumstances and the level of risk involved, a court could find that an employer was under a duty to indemnify its employees against claims resulting from the use of such tools.

Employers may therefore wish to be particularly vigilant about the possibility of employees using tools like ChatGPT to produce client- or public-facing materials, including marketing materials and business pitches.

Copyright and intellectual property considerations

The copyright status of works produced by AI systems such as ChatGPT is unclear. This will be an area of concern for any business producing material protected by copyright, patents or other intellectual property rights. The Copyright, Designs and Patents Act 1988 provides that, in the case of a literary or artistic work which is computer-generated, the author is taken to be the person ‘by whom the arrangements necessary for the creation of the work are undertaken’. How this provision applies to AI-generated works is untested and may need to be resolved in litigation.

An employer should also be wary of employees inputting its intellectual property or confidential information into AI text generation tools. This might happen inadvertently: an employee might, for example, paste sales data into a tool to produce a sales report. Confidential information input into an AI tool may be stored by its proprietor, used to train its model or provided to unknown third parties.

Accuracy

AI text generators have frequently been reported to ‘hallucinate’ material. There have, for example, been at least two reported instances of ChatGPT fabricating lists of legal cases, complete with fictional summaries and plausible citations to reputable legal databases. ChatGPT has also been reported to claim, wrongly, authorship of texts.

The apparent inaccuracy of the current generation of AI tools means they may be of questionable value for corporate use. For example, using an AI tool to create an internal presentation or report may introduce, for some organisations, an unacceptable element of risk. Editing and fact-checking AI output is possible in principle; however, these are specialised, highly labour-intensive skills, and they may be difficult to apply in practice given the uncertainty about the provenance of AI-generated material.

Integrity and accountability

A broader concern relates to the change in an organisation’s culture that may accompany the introduction of AI. If an AI tool is used to produce a presentation or report, the nominal author, having delegated much of the task to the AI, may develop less understanding of the material and feel less ‘ownership’ of the finished product. These are nebulous qualities but are often regarded as critical to the success of an organisation. Some organisations may question whether consensus will be sufficiently ‘hammered out’ when part of the usual decision-making process (drafting a report, for example) is bypassed and the accountability of the individuals involved is seemingly reduced. The opposing argument is that an AI tool might produce a valuable insight that would otherwise have gone unnoticed.

Similarly, it may be difficult to imagine producing some business documents and communications, such as a product update tailored to a particular client, without the benefit of extensive personal experience, especially where a long-standing business relationship is involved.

Employers will necessarily be wary of the unauthorised use of AI tools for purposes to which they are not suited, especially since AI-generated output is often difficult to identify. An organisation in which the use of AI tools becomes widespread without a full assessment of the possible impact is more likely to encounter difficulties with performance management and with legal and reputational risk. We would advise bringing the issue into the open at the earliest opportunity, so that concerns can be addressed and policies can secure employee buy-in.

Data protection under the UK GDPR

Special caution is required when processing personal data. Under the UK GDPR, personal data is any information relating to an identifiable person. If an employee inputs personal data into an AI tool, that use will meet the definition of processing under the UK GDPR. This raises a number of legal concerns which can be complex for an employer to navigate. For an example of a (non-text-generative) AI tool used in an HR recruitment context which the Information Commissioner’s Office (ICO) judged to be broadly compliant, see the second link at the end of this article.

The most significant risk when processing personal data is a data breach. ChatGPT, for example, stores user input by default, uses it to train its model and may disclose it to third parties in future output; each of these would appear to constitute a data breach unless explicit consent or another lawful basis for processing applies. It is now possible to opt out of ChatGPT’s retention and repurposing of user input, and we would advise any employer allowing the use of ChatGPT to enforce this opt-out as a matter of policy. Even then, the disclosure to the AI tool may itself constitute a data breach: ‘breach’ is broadly defined in the UK GDPR to include unauthorised ‘access’.

A second concern is that AI output relating to personal data could engage UK GDPR Article 22, which gives individuals the right not to be subject to a decision based solely on automated processing where that decision has legal or similarly significant effects. The most straightforward way for employers to avoid this prohibition is to incorporate human decision-making into any AI-assisted process, in such a way that the human input comes after the AI’s involvement and, as the ICO states in its guidance, ‘relate[s] to the actual outcome’. However, it may not be easy to demonstrate compliance without requiring employees to scrutinise heavily even the most plausible-seeming AI output. In some contexts, therefore, the use of AI tools may not be labour-saving.

How we can help

For further information or to take legal advice, please contact the team at Synchrony Law.

This article is for general information only and does not constitute legal or professional advice. Please note that the law may have changed since this article was published.

External publications

ICO Guidance on AI and data protection

ICO MeVitae Artificial Intelligence (AI) Data Protection Audit Report