Is your business ready for AI?

Whether you choose to embrace it or not, we have truly entered the age of artificial intelligence (AI).

While the long-term implications for society are unknown, what is clear is that organisations need to act now to consider the legal and ethical implications of using AI in their business.

Assuming you run a typical business and are considering using publicly available tools such as ChatGPT for work purposes, here are 10 things to think about before you dive in:

1. Do you know what AI is and what its limitations are?

Don’t be alarmed by technical terms such as ‘machine learning’ and ‘generative AI’.

Firstly, you need to identify the type of AI tool you plan to use, understand how it works and get an idea of its limitations. The most important point is that AI is not infallible, and its output may not be 100% accurate. AI ‘hallucinations’ can be amusing, but you and your employees need to know why they happen, be willing to check sources and generally take everything the tools offer you with a pinch of salt.

Also, because of the way they were trained, AI systems can exhibit bias. Much of that bias comes from us – from the data we put on the internet. If the internet only shows images of middle-aged white men as CEOs, or women as nurses, then the AI may assume that to be a rule. Attempts to combat this have led to absurdities – AI image generators asked to create images of German WWII soldiers in uniform have produced surprisingly ethnically diverse pictures.

2. Have you considered how you plan to use AI?

There is a difference between using ChatGPT to answer general questions and using AI as a tool to sift through hundreds of CVs and decide who you should hire (if so, beware of bias – see above).

Different use cases pose different ethical and legal considerations.

3. If you plan to use AI for HR and recruitment purposes, you should consider whether there are employment or equality law implications.

Bias, discrimination, automated decision making or a lack of transparency in the AI system can all pose challenges when dealing with employees or potential recruits.

4. Does your business operate in any particular regulatory regime that imposes additional considerations?

For example, lawyers who plan to use AI must consider client confidentiality and whether their clients are prepared to accept the output of the AI system without professional oversight (hint – not many are).

Public sector bodies also need to consider a host of public law issues.

5. Have you considered the implications for data protection, confidentiality and privacy?

These matters require careful thought because using AI usually involves sharing data.

What data do you propose to upload? Where will the AI system process the data you put into it? Will the AI ‘learn’ from your data? If so, you can’t put anything confidential or private into it.

As a starting point, if you plan to put any intellectual property, or confidential, commercially sensitive or personal data into the AI system, then generally speaking your use of the system and your data needs to be walled off from the wider world and ideally held in the UK or EEA.

However, the wider point is that you are already on the back foot if your UK GDPR compliance is not up-to-date.

6. Have you reviewed your existing contracts to check whether the wording has any bearing on what you plan to do?

Few contracts will say “you may not use AI”, but many will say “you can only use the personal data we provide to you in this way” (and not mention AI). Many will require that personal data be kept in the UK or EEA.

In short, if you input personal data into a cloud computing AI system that runs from a data centre in Texas, you may be in breach of contract. You may also be breaking data protection law.

7. Have you reviewed and updated your internal policies and your internal and external privacy notices to reflect what you propose to do?

Guidelines for employees on the use of AI are a good start.

8. Have you considered cyber security?

While this is a wider issue, it comes into sharp focus alongside the use of AI. You should develop a robust management plan for dealing with cyber and personal data breaches. This plan needs to sit outside the company servers in case these are compromised. Game plan a cyber attack that completely shuts you out of your computer systems, and work out what you need hosted elsewhere to get your business back up and running.

9. AI tools come with technology services contracts or licences, so there may be important intellectual property or other data protection issues to be considered.

The contract may set out who owns the intellectual property rights in whichever content the AI tool creates for you based on your prompts, or it may contain important disclaimers and limitations of liability.

10. Finally, if you haven’t done any of this yet – or don’t plan to – I’m afraid you’re not off the hook.

Your employees may already be using AI at work, as ChatGPT has a free – albeit limited – version.

In the meantime, you will need to move quickly to provide basic guidelines on the safe and proper use of AI. The most important point here is to clarify what information employees can and cannot upload into the AI system, and to set out what they can and cannot use AI for (see above).

Whether you’re just getting started on your AI journey or are about to procure an expensive AI system to revolutionise your operations, take legal advice from the outset to ensure your business is protected.
