On 18 June 2024, the BBC reported that Nvidia had overtaken Microsoft as the world’s most valuable company, with an estimated value of $3.34 trillion. Riding the crest of the wave of enthusiasm for artificial intelligence (AI), the story was billed as an ‘AI Frenzy’, and Chris Penrose, global head of business development for telco at Nvidia, suggested that ‘next year the race to $4 trillion market cap in tech will be front and centre between Nvidia, Apple and Microsoft’.

Can anyone say Mr Penrose is wrong? Relegated to third place, with a pitiful estimated market capitalisation of $3.28 trillion, Apple announced on 11 June 2024, through its CEO Tim Cook, that its voice assistant, Siri, will incorporate OpenAI’s ChatGPT to enhance its personalised AI system, ‘Apple Intelligence’. This is a massive turnaround for Apple, which previously would not even allow applications to be installed on its products unless they were sold through the App Store, nor allow any web browser engine other than Safari’s, for fear of ‘security concerns’. Now third-party AI is being incorporated into Apple’s operating systems.

So, what’s the problem? If Nvidia is the most valuable company in the world, and Apple’s ‘security concerns’ are no more, why not use AI for everything? Are fears of Skynet causing nuclear war and sending Terminators to kill us all simply science fiction? Or are there actual, real-world concerns for barristers using AI in their work?

Here are the five biggest risks for those considering using AI within their practice, and some tips for overcoming them. (Hint: the biggest tip is to read the Bar Council IT Panel guidance, Considerations when using ChatGPT and Generative AI Software based on large language models.)

1. Lack of transparency

Lack of transparency in AI systems, particularly in deep learning models that can be complex and difficult to interpret, is a pressing issue. This opaqueness obscures the decision-making processes and underlying logic of the technology. Elon Musk, upon hearing of the Apple and OpenAI partnership, commented: ‘Apple has no clue what’s actually going on once they hand your data over to OpenAI.’ While Mr Musk may be known for making comments intended to grab attention, a ‘lack of explainability’ is a similar concern for the Bar Council IT Panel:

‘Like a number of AI tools, generative deep learning AI LLMs are often considered “heavy black-box” models, because it is difficult to understand the internal decision-making processes or provide clear explanations for the output.’

A lack of transparency can manifest as a problem in two types of user: the person who misunderstands generative AI, and the person who distrusts it.

ChatGPT is not a person! You are not speaking to an individual who is helpfully undertaking your research for you and providing you with the ‘right’ answer. And there are numerous anecdotes of people, including lawyers, asking generative AI for assistance and being provided with information which is simply wrong.

The Bar Council IT Panel cleverly describes ChatGPT, and similar tools, as ‘... a very sophisticated version of the sort of predictive text systems that people are familiar with from their email and chat apps on their smart phones, in which the algorithm predicts what the next word is likely to be.’ If you understand that this is a tool which can, for example, assist in drafting, rather than a research assistant, then a lack of transparency may not be a problem. Any statement which sounds persuasive because of its bold interpretation of the facts can be treated with a healthy level of scepticism and checked for accuracy.
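
To make the predictive-text analogy concrete, here is a toy sketch in Python. It is purely illustrative, bearing no resemblance to the scale or sophistication of a real LLM, but it shows the principle: the program ‘learns’ which word tends to follow which, then generates text one predicted word at a time.

```python
# A toy "predictive text" model: it records which word tends to follow which,
# then generates text by repeatedly predicting a likely next word.
# Illustrative only; real LLMs are vastly larger and more sophisticated.
import random
from collections import defaultdict

training_text = (
    "the claimant served the claim form and the defendant "
    "served the defence and the claimant served the reply"
)

# For each word, record every word observed to follow it.
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

# Generate: start from a word and keep predicting the next one.
word = "the"
output = [word]
for _ in range(8):
    options = followers[word]
    if not options:  # dead end: nothing ever followed this word
        break
    word = random.choice(options)  # picks in proportion to observed frequency
    output.append(word)

print(" ".join(output))
```

The result can read fluently while being grounded in nothing but word frequencies, which is precisely why everything such a tool produces needs checking.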

Scepticism which goes beyond the ‘healthy’, however, may extend into complete mistrust. Generative AI systems do learn. They are initially trained, through the programming of their designers and on large datasets, and they then continue to learn through the inputs they receive, both from the individual user and from the collective. The AI’s learning may create bias (see below) but the AI is not intentionally trying to trick you. There is no conspiracy. And (currently) the AI is not sentient. Not everything generated by an AI is correct; but nor is it all incorrect, or part of a plot to take over the world.

2. Bias and discrimination

AI systems can inadvertently perpetuate or amplify social biases due to biased training data or algorithmic design. ‘Training’ generally entails trawling through the internet (a very large dataset) to gather information such as language patterns and word associations. This will inevitably result in the biases which can be found on the internet (and there are a few) being incorporated into the AI’s training.
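
To see how this happens, consider the deliberately simplified Python sketch below (a toy, not any real training pipeline; the miniature ‘corpus’ is invented for illustration). The word associations the program ‘learns’ are nothing more than the associations present in its data, skew included.

```python
# Illustration: the associations a model "learns" simply mirror its training
# data. If the data pairs certain roles with certain pronouns, so will the model.
from collections import Counter
from itertools import combinations

# A deliberately skewed miniature "corpus" standing in for scraped web text.
corpus = [
    "the surgeon said he would operate",
    "the surgeon said he was ready",
    "the nurse said she would assist",
]

# Count how often each pair of words appears together in a sentence.
co_occurrence = Counter()
for sentence in corpus:
    for pair in combinations(sorted(set(sentence.split())), 2):
        co_occurrence[pair] += 1

# The "learned" association reflects nothing but the skew in the data.
print(co_occurrence[("he", "surgeon")])   # 2
print(co_occurrence[("she", "surgeon")])  # 0
```

Scale that mechanism up to the internet and the problem is obvious: the model’s associations are the internet’s associations.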

OpenAI has indicated that safeguards are in place to prevent ChatGPT from adopting stereotypes which could be sexist or racist; but the lack of transparency within the AI modelling, and the lack of research and testing on how outputs may adopt biased or discriminatory language, mean that it is difficult to assess the effectiveness of these safeguards.

It has been suggested that ‘unbiased algorithms and diverse training data sets’ need to be adopted to ensure that there is no bias or discrimination. This type of thinking seems, again, to misunderstand the nature of the AI and the concepts of bias and discrimination. There is unlikely to be an ‘unbiased algorithm’. And the training data which is searched will incorporate the perspectives of its authors and designers. Users of AI need to be aware that these biases will exist and take them into account when considering the output. In the same way that a reader knows the Daily Telegraph will take a different editorial line from the Daily Mirror, a user of AI needs to be conscious that the output provided may carry an inadvertent perspective.

3. Privacy concerns

It is inherent within generative AI that data inputs will be collected and used for future outputs. Barristers must be particularly vigilant that legally privileged or confidential information is not shared with the AI model. The extent to which this could be absorbed by the AI and reproduced in another user’s output should not be underestimated. Asking a question which includes material covered by legal professional privilege (LPP) is not dissimilar to sending the privileged material to an unknown third party who may or may not publish it on the internet, depending on their whim. It is not a good idea.

Barristers also need to comply with their regulatory duties in relation to the protection of personal data. Anonymisation is certainly recommended.
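
By way of illustration only, the Python sketch below shows the kind of pre-processing that anonymisation implies: stripping obvious identifiers before any text is pasted into an AI tool. The patterns and placeholders here are hypothetical examples, and a simple pattern-matching pass is no substitute for careful human redaction, since facts and context alone can identify a client.

```python
# Minimal sketch: strip obvious identifiers from text before it goes anywhere
# near an AI tool. Real-world anonymisation needs far more care than this;
# names, facts and context can identify a client even with these removed.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),               # email addresses
    (re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"), "[PHONE]"),        # rough UK phone numbers
    (re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"), "[NAME]"),  # titled names
]

def redact(text: str) -> str:
    """Replace each matched identifier with a neutral placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Mr Smith (mr.smith@example.com, 07700 900123) instructs counsel."))
# -> "[NAME] ([EMAIL], [PHONE]) instructs counsel."
```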

4. Ethical dilemmas

In addition to the potential breach of the Core Duties and Code of Conduct which may occur through breaching confidentiality (Core Duty 6 and rC15.5 of the Code of Conduct), barristers should consider the ethical dilemma of over-reliance on AI. Misleading the court because you have failed to check the accuracy of an output is likely to lead to disciplinary action. But even if everything is correct, to what extent should a barrister rely upon AI as their own work?

The IT Panel highlights intellectual property infringement and brand association as legal violations which may inadvertently arise through using AI. But again, even if there is no third-party information contained within the output, is there an ethical issue in passing off an AI’s output as original content?

There is nothing inherently improper about using a reliable AI tool, but it should be used responsibly.

5. Security risks

It was previously highlighted in Counsel that AI could be used by hackers to generate ever more sophisticated spear-phishing attacks (see ‘Cyber resilience: learning from the past, present and future’, Sam Thomas, Counsel, March 2024). Users should also be aware that the promise of the ‘next best AI’ tool could itself be a trick to obtain data. General cyber awareness is always encouraged, as is using reliable and trusted AI.

As the Bar Council IT Panel highlights, irresponsible use of AI can lead to serious and embarrassing consequences, including claims for professional negligence, breach of contract, breach of confidence, defamation, data protection infringements, infringement of IP rights (including passing off claims), and damage to reputation; as well as breaches of professional rules and duties, leading to disciplinary action and sanctions. There are clear risks to using AI, but there are also benefits, which will no doubt increase as the prevalence and reliability of AI increase over time.