Deep dive: Generative AI and employment law risk

13 July 2023

The latest version of OpenAI’s ChatGPT text generator has prompted widespread speculation about the impact of AI tools on the world of work. In this article, we examine some of the legal considerations pertaining to AI text generation, taking ChatGPT as an example. Many of these considerations would apply equally to any similar tool.

For the reasons we set out below, the use of generative AI tools by employees could have significant legal ramifications for an organisation. We would advise businesses to consider these impacts carefully and set out guidelines or policies for staff on the use of ChatGPT and similar text generation tools at work. These may take the form of a total prohibition or, alternatively, a set of conditions or principles for the acceptable use of such tools, along with information about risks.

Employers’ liability

In order to use ChatGPT or any of OpenAI’s tools, it is necessary to create a user account and agree to the standard terms of use. These terms were last updated on 14 March 2023 and currently contain clauses:

  • requiring the user to indemnify OpenAI against any claims, losses or expenses relating to use of its platforms

  • disclaiming all warranties, including any warranties of merchantability, fitness for purpose, satisfactory quality and non-infringement of third-party rights

  • limiting OpenAI’s liability for all damages to US $100 per claim

An employer may be vicariously liable for an employee’s use of ChatGPT and similar AI tools if that use is deemed to have taken place in the course of employment. For example, if an employee generated text using ChatGPT that gave rise to a professional negligence or defamation claim, the employer could potentially be liable for it.

Since ChatGPT is marketed for consumer use and, in any case, OpenAI explicitly disclaims any warranty or representation as to its fitness for use in a commercial (or any other) setting, employers should carefully consider the level of risk involved in mandating its use for any given task. Depending on the circumstances, a court could find that an employer was under a duty to indemnify its employees against claims resulting from the use of such tools.

Employers may therefore wish to be particularly vigilant about the possibility of employees using tools like ChatGPT to produce client- or public-facing materials, including marketing materials and business pitches.

Copyright and intellectual property considerations

The copyright status of works produced by AI systems such as ChatGPT is unclear. This will be an area of concern for any business producing material protected by copyright, patents or other intellectual property rights. The Copyright, Designs and Patents Act 1988 provides at s.9(3) that, in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person ‘by whom the arrangements necessary for the creation of the work are undertaken’. How this provision applies to AI-generated works is untested and may need to be resolved through litigation.

In 2021, the Government consulted on reducing the scope of copyright protection for AI-generated works, or removing it altogether, and concluded that there was no convincing rationale for changing the law. Nevertheless, Government statements on 26 May 2023 indicated an intention to produce new regulations on the use of AI, so the law may well be revisited in the near future.

Generally, the presumption in English law is that where a work is produced in the course of employment, the copyright belongs to the employer, subject to any agreement to the contrary. This is enshrined in the 1988 Act, and many employment contracts also contain clauses stipulating that intellectual property rights in works produced by the employee will vest in the employer. However, it is unclear whether such provisions would protect an employer’s copyright in text produced by an employee via an AI text generator. OpenAI’s standard terms of use, for example, purport to vest copyright in the user, except where identical text is provided to multiple users. Leaving aside the question, discussed below, of whether OpenAI would in fact own the necessary copyright in the first place, this provision appears to make the copyright subject to potentially insoluble factual uncertainty.

It has been argued that AI text generators sometimes simply reproduce existing human-authored text, which may itself be subject to copyright or other legal protections. For example, the plaintiffs in a current California lawsuit (Doe v GitHub, Inc. and others) argue that AI code-generation tools produced by GitHub and OpenAI reproduce well-known human-authored sequences of code in violation of open-source licence terms.

Finally, an employer should be wary of employees inputting its own intellectual property or confidential information into AI text generation tools. As we discuss below, this may result in the material being stored by the proprietor of the AI tool, used to train its algorithm or provided to unknown third parties.

Accuracy

Even if the provenance of the code at issue in Doe v GitHub is established, the opacity of AI systems means that, in many cases, it may be difficult to judge the provenance of AI-generated output. This is complicated by the fact that AI text generators have frequently been reported to ‘hallucinate’ material with a convincing degree of verisimilitude. In one instance reported by Littler Mendelson P.C. in their May 2023 report An Overview of the Employment Law Issues Posed by Generative AI in the Workplace, ChatGPT provided a list of six legal cases with a similar fact pattern to a US client’s case; all six included detailed factual summaries and citations to a reputable legal database. All six cases were fabrications.

Similarly, on 27 May 2023, the New York Times reported on a lawyer who was ordered to appear before a New York court to explain the citation of non-existent cases in a court filing. The lawyer had conducted his research using ChatGPT, which had fabricated six cases, again complete with summaries and citations.

ChatGPT’s inaccuracy extends to questions of authorship and copyright. On 16 May 2023, Forbes submitted sentences from two academic papers (both published before ChatGPT existed) and a sample of Forbes material to ChatGPT, asking in each case: did ChatGPT write this? Each time, ChatGPT claimed to be the author.

The apparent inaccuracy of the current generation of AI text generation tools means they may be of questionable value for many corporate uses. For example, using an AI text generation tool to create an internal presentation or report would seem to introduce an element of risk which some organisations would not countenance as part of their decision-making processes. Editing and fact-checking AI output is possible in principle; however, these are traditionally regarded as specialised skills, are generally highly labour-intensive, and may be difficult to carry out in practice given the uncertainty about the provenance of AI-generated material.

Integrity and accountability

A broader concern relates to the change in an organisation’s culture which may take place with the introduction of AI. In the example given above, where an AI tool is used to produce a presentation or report, it seems possible that the nominal author, having delegated much of the task to the AI, will develop a weaker understanding of the material and a reduced sense of ‘ownership’ of the finished product. These are nebulous qualities, but they are often regarded as critical to the success of an organisation. Some organisations may question whether consensus will be sufficiently ‘hammered out’ when part of the usual decision-making process – drafting a report, for example – is bypassed and the accountability of the individuals involved is seemingly reduced. The opposing argument, of course, is that an AI tool might produce a valuable insight that would otherwise have gone unnoticed.

In roles requiring research, writing or other creative and critical skills, ChatGPT presents an ambivalent prospect: a ‘shortcut’ which might, over time, diminish employee ability by taking over traditionally personal tasks. However, even if such skills eventually become less highly valued, the current state of the technology does not obviously permit this. The copy produced by ChatGPT, for example, often reads as basically competent but has also been criticised as bland, impersonal or lacking in finesse. It is unclear to what extent brands for whom distinctive marketing is a touchstone will choose to rely on AI.

Similarly, it may be difficult to imagine producing some business documents and communications, such as a product update tailored to a particular client, without the benefit of extensive personal experience, especially where a long-standing business relationship is involved. Where the context is reducible to quantifiable information, it is easier to imagine AI involvement, but where there is more intangible context, it is difficult to see how this could be condensed into a ‘prompt’ for an AI tool. To give a simple example, an AI-generated sales pitch to a new contact would likely be highly inappropriate for a client of long standing.

Employers will necessarily be wary of the illicit use of AI tools for purposes to which they are not suited, especially since identifying AI-generated output is often difficult. An organisation in which the use of AI tools has become widespread without a full assessment of the possible impact is more likely to encounter difficulties with performance management and legal and reputational risk. We would advise bringing the issue into the open at the earliest opportunity, so that concerns can be addressed and policies can secure employee buy-in.

Data protection under the UK GDPR

Special caution will also be required in any context in which employers are processing personal data. Under the UK GDPR, personal data is any information relating to an identified or identifiable person. If an employee inputs personal data into an AI tool, this will constitute processing of personal data under the UK GDPR. This raises a number of legal concerns which can be complex for an employer to navigate. For an example of a (non-text-generative) AI tool used in an HR recruitment context which was judged by the Information Commissioner’s Office (ICO) to be broadly compliant, see the second link at the end of this article.

The most significant risk when processing personal data is a data breach. ChatGPT, for example, stores any input by default, uses it to train its model and may disclose it to third parties in future output; each of these would appear to constitute a data breach unless explicit consent or another lawful basis for processing applies. It is now possible to opt out of ChatGPT’s retention and reuse of user input; we would advise any employer allowing the use of ChatGPT to enforce this opt-out as a matter of policy. Even so, the initial disclosure to the AI tool may itself constitute a data breach: ‘breach’ is broadly defined in the UK GDPR to include unauthorised ‘access’.
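
By way of illustration only, the sketch below shows one way an employer’s own tooling might screen text for obvious personal data before it reaches an external AI tool. It is a minimal Python example using simple pattern matching; the patterns, function names and categories are our own assumptions for the purposes of illustration, and screening of this kind would not, of itself, amount to UK GDPR compliance.

```python
import re

# Illustrative patterns only: real personal data takes many forms
# (names, job titles, contextual identifiers) that simple regular
# expressions will not catch.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44|0)(?:\s?\d){9,10}\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with a placeholder and report the categories found."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, found

draft = "Summarise: John Smith (john.smith@example.com, 020 7946 0958) raised a grievance."
clean, categories = redact(draft)
if categories:
    print(f"Personal data categories detected: {categories}")
print(clean)  # only the redacted text would be passed to the external tool
```

Note that the name ‘John Smith’ would pass through this filter untouched: pattern matching misses much personal data, which is one reason policy and training, rather than tooling alone, are central to managing this risk.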

Secondly, any AI output pertaining to personal data would potentially fall under UK GDPR Article 22, which confers the right not to be subject to a decision producing legal or similarly significant effects that is based solely on automated processing. The most straightforward way for employers to avoid this prohibition is to incorporate human decision-making into any AI-assisted process, in such a way that it follows the AI’s involvement and, as the ICO states in its guidance, ‘relate[s] to the actual outcome’.
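
The sketch below illustrates, in schematic form only, the shape such a process might take: an AI recommendation is recorded but has no effect until a named human reviewer records their own decision and rationale on the actual outcome. The structure and field names are hypothetical; whether any given process satisfies Article 22 is a question of substance, not of code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject: str                              # the individual the decision concerns
    ai_recommendation: Optional[str] = None   # advisory input only
    human_decision: Optional[str] = None      # the decision that actually takes effect
    human_reviewer: Optional[str] = None      # a named, accountable person
    rationale: Optional[str] = None           # the reviewer's own reasons on the outcome

    def finalise(self) -> str:
        # The AI recommendation never takes effect by itself: a named
        # reviewer must first record their own decision and rationale.
        if not (self.human_decision and self.human_reviewer and self.rationale):
            raise ValueError("no effective decision: meaningful human review not recorded")
        return self.human_decision

d = Decision(subject="candidate-042", ai_recommendation="reject")
# Calling d.finalise() at this point would raise an error.
d.human_decision = "progress to interview"
d.human_reviewer = "A. Hiring Manager"
d.rationale = "Relevant experience outweighs the automated screening score."
print(d.finalise())
```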

In practice, meeting this condition may not be straightforward. The overriding principle of the UK GDPR is that any processing of personal data should be ‘lawful, fair and transparent’, but the inherent lack of transparency in AI processes may challenge all three requirements. An employee will be able to meaningfully alter AI output only to the extent that they (i) understand the rationale for the output or (ii) can otherwise assess its viability using independent criteria. Much recent commentary, however, concerns AI’s resistance to precisely this kind of interrogation.

This is because AI output is not, strictly speaking, reasoning, but a simulation of reasoning. It may, for example, plausibly but erroneously conflate entirely unrelated ideas without providing any record of their source. AI output can therefore borrow the content, style and format of reliable sources without adhering to the principles by which they are produced. Compliance with the UK GDPR may therefore mean requiring employees to scrutinise heavily even the most plausible-seeming AI output. The promise that AI will save intellectual labour and reduce costs is thus, to some extent, at odds with the UK GDPR’s requirement that people be meaningfully involved in decision-making. The incongruity between the opacity of AI ‘reasoning’ and the legal requirement for transparency may make it burdensome for employers to demonstrate meaningful compliance.

Conclusion

Third-party AI text generation tools currently present a number of legal and commercial risks to employers, and to some extent the excitement around their potential is in tension with the complexity of the legal position. The risk appears lowest where AI tools are used for labour-intensive background or research tasks which are of little professional development value to the employee, where completeness matters more than accuracy at a first pass (listing competitor examples, say), and where the employer has no intention of publishing the output or asserting copyright in it. Even in these instances, however, the tendency towards fabrication introduces additional risk, and the level of checking required may leave the commercial value questionable, at this stage, for some employers; others may judge that the risk can be reasonably well managed. In any event, employers should bear in mind that they may be liable for the acts of their employees, and that clear policies on what is acceptable in the course of employment will be the best way to mitigate this risk.

How we can help

For further information or to take legal advice, please contact the team at Synchrony Law.

This article is for general information only and does not constitute legal or professional advice. Please note that the law may have changed since this article was published.

External publications

ICO Guidance on AI and data protection

ICO MeVitae Artificial Intelligence (AI) Data Protection Audit Report

Chris Tutton