Unless you’ve been completely off-grid for the past 12 months or so, you’ve likely encountered the deluge of news, articles, explainers, and enthusiastic LinkedIn posts about the wonders and/or terrors of Generative AI.

If you have been offline and missed it all, then congratulations! It’s been a lot! This bit is for you. The AI savvy/weary may skip ahead:

The term ‘artificial intelligence’ (AI) has been in use since the 1950s and refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human-like understanding, reasoning, learning, and problem-solving.

Generative AI (Gen AI) is a type of AI that can create or generate content, such as text, images, or other data, by learning from large datasets and producing novel outputs based on observed patterns. Popular examples include ChatGPT, Claude, and Gemini.

There are probably as many Gen AI evangelists as there are prophets of doom, but in between the two camps is a DMZ populated by many more wary adopters, curious skeptics and AI casuals. It is increasingly unrealistic to think that students, pupils, barristers, or indeed law librarians, won’t be using Gen AI. Quite the opposite. Leveraging these new tools is fast becoming a marketable skill. However, as useful as Gen AI can be, there is a significant degree of risk attached to employing it in your studies and practice.

Minimising the risks

Here are eight tips to help minimise the risks associated with using Gen AI:

  1. Conduct your due diligence. With the proliferation of Gen AI tools, some are bound to be (to use the correct InfoSec terminology) ‘really dodgy’. Research who or what is behind your chosen application. Is it well-established or a new kid on the block? What are other people saying about it?
  2. Know what training data your chosen model uses. For example, ChatGPT is trained on vast amounts of publicly available internet text, from peer-reviewed articles to Reddit threads(!), while Lexis+ AI draws on Lexis+ content.
  3. Read the small print and adjust privacy settings where possible. Understand the terms of use. Will your data be reused? Could your input end up in someone else’s output? Can you opt out of model-training?
  4. Avoid inputting sensitive data, and comply with data protection laws and best practice. Don’t input third-party content without consent.
  5. Cross-examine your model. Remember, Gen AI isn’t actually intelligent. ‘Prompt engineering’ is the art of carefully crafting your questions. Contextualise and refine your instructions to improve the relevance of the output. Try telling your model that it is a lawyer.
  6. Interrogate the output. Check facts and follow up leads using alternative sources. Gen AI presents us with the ‘black box’ problem, meaning you don’t necessarily know how it has arrived at its answer. It also has a tendency to ‘hallucinate’ or make things up, even inventing fake references and case citations. Always double-check the accuracy and currency of the output and be on the lookout for potential biases.
  7. Disclose and be transparent about your use of AI-generated content. Avoid simply copying and pasting; if you do include AI-generated text, acknowledge it in a footnote.
  8. Maintain high ethical standards; doing so is crucial for professional and academic integrity. Read the relevant guidance published by professional bodies such as the Bar Council, as well as course providers’ statements on the use of Gen AI.

Gen AI is, in the end, just another tool, albeit one to be used with care. Investing time in mastering it, and in learning more about its risks and effective use, will pay off. Explore a curated list of online courses, many of which are free, here.