Do you need a policy for the use of AI at work?
6 December 2024
Rapid changes are underway in the world of work due to the availability of increasingly sophisticated artificial intelligence (AI) products to employees and contractors. As with any innovation, it can take some time fully to understand the benefits and risks, and for policies and procedures to catch up.
Employers and recruitment agencies are increasingly using AI in recruitment, and employees are making use of AI to carry out their jobs, sometimes on their own initiative. While the advantages can be transformational, using AI is not without risk. Now might be a good time to consider developing a policy on the use of AI by employees, and to ascertain how AI is used by recruitment agencies or other third parties closely involved in your business.
In this article, we highlight key areas for employers to address from an employment law perspective. In addition to these issues, businesses should be aware of other areas of legal risk outside the scope of this article, including privacy and data protection, commercial contract law, and intellectual property.
Discrimination and bias in HR decisions
Just like the human decision-making on which it is based, AI decision-making can display bias and discrimination in sifting job applications, with the result that the best candidates are not put forward. Even if you engage a recruitment agency, your business could be liable to claims for discrimination based on the agency’s use of AI, for example, if a suitable candidate is not shortlisted.
Employers should put safeguards in place, for example by ensuring that any recruitment agency is aware of the employer's equality and diversity recruitment policy. You should seek assurance from the agency about the steps it takes to minimise bias, and confirm that there is sufficient human involvement to reduce the risks. You could also try to negotiate indemnities from the agency against claims arising from its use of AI, although an agency may not be willing to change its standard terms and conditions.
Employers who use AI in decisions relating to other HR functions, such as performance management and performance-related pay awards, should be transparent about the use of AI and the safeguards in place. In particular, an employee should have the right to speak to a manager who can explain any decision. This should help maintain trust between the employer and the workforce and mitigate feelings of alienation.
Use of AI for work tasks
In the absence of any clear instructions or policy from their employer, many employees are already embracing AI in the performance of their jobs, sometimes without their manager's knowledge. AI can make some work tasks quicker and may be more accurate than an employee, but this is not always the case. Employers need to take the risks into account; depending on the task, these may include:
“hallucinations”, in which generative AI creates convincing-sounding falsehoods;
breaches of confidentiality; and
ineffective use of prompts, resulting in poor-quality or inaccurate work.
What employers should be doing
To minimise risks and allow employees to make the most of AI, employers should:
assess the nature of their business and the tasks employees perform, and decide broadly on areas in which the deployment of AI may be acceptable;
decide how AI should or should not be used;
address any barriers to adoption by providing support and training;
plan for the significant training which may be needed to reskill some employees as their jobs are transformed by AI;
facilitate team meetings to share AI knowledge and best practice, to ensure team members are not left behind as the team’s skills and the technology develop; and
provide staff with a clear written policy, and explain the policy to staff.
What should a policy on the use of AI contain?
While the content should be tailored to your business, generally any policy should set out:
lists of AI tools employees can and cannot use;
specified tasks for which the use of AI is or is not acceptable;
a statement that the employee remains ultimately responsible for their work and must check any AI output;
guidelines to avoid breaches of copyright; and
measures to protect confidential information and personal data.
Supporting employees through change
Employees may feel anxious about being left behind or replaced by AI. While AI may lead to job losses in some sectors, this is not inevitable in every business; indeed, businesses that do not adapt may become less competitive.
Where appropriate, employees should be reassured that, by adapting to change and harnessing the benefits of AI, they can help the business remain competitive. A clear policy, properly implemented with appropriate training, will help the workforce understand what is expected of them and how to use AI effectively.
How we can help
We can work with you to prepare a bespoke AI policy to effectively manage the risks of employee AI use. For further information, please contact the team at Synchrony Law.
This article is for general information only and does not constitute legal or professional advice. Please note that the law may have changed since this article was published.