The European Union Artificial Intelligence Act (the ‘EU AI Act’) is the world’s first comprehensive regulation of AI. It applies extraterritorially: individuals and entities outside the EU are caught if their AI systems are placed on the EU market, put into service in the EU, or produce outputs used within the EU. Non-EU providers and deployers must therefore comply with the Act’s requirements whenever their AI systems affect individuals or entities in the EU, regardless of where those systems are developed or operated.
Barristers advising EU-linked clients or handling cross-border disputes must therefore grasp the Act’s risk-based approach. The regulation establishes graduated rules for developing, deploying and using AI systems, and most of its provisions will apply in full by August 2026.
The Act introduces strict rules on data protection, algorithmic transparency and accountability, significantly affecting barristers advising on EU-linked cases. Understanding it is essential for barristers to fulfil their professional duties and to ensure compliance when advising EU-affiliated clients on high-risk AI tools, transparency and data governance.
Even though the United Kingdom has left the European Union, barristers advising clients with ties to the EU, or working on cross-border disputes involving EU entities or individuals, should be cognisant of the Act’s ramifications for their professional responsibilities.
This article examines the intersection of the EU AI Act with the ethical and professional responsibilities of barristers using AI. Knowing the Act’s rules and consequences will help barristers meet its requirements when advising UK-based clients on matters with EU links.
The EU AI Act covers all AI systems that impact people in the EU, regardless of their origin or deployment location. For barristers, this creates two primary areas of concern.
First, when advising clients with EU operations, barristers must ensure that any high-risk AI tools used in legal processes, such as advanced predictive analytics systems or AI-powered contract review platforms, comply with the Act’s strict requirements. Tools of this kind may fall within the high-risk use cases listed in Annex III of the Act, given their potential to significantly affect legal outcomes, and are then subject to its stringent obligations.
Second, barristers who use generative AI systems (which generate new content based on patterns in existing data) to draft submissions, or predictive tools to forecast case outcomes, must ensure their use aligns with the transparency rules under Article 50 where their work affects EU-linked matters.
Article 50 requires those deploying certain AI systems to disclose when content has been artificially generated or manipulated. This means barristers must inform the courts and their clients whenever such tools have been used in case preparation or advocacy.
The Bar Council’s 2024 guidance on generative AI outlines several practical risks associated with its use.
Barristers must be aware of the potential dangers of large language models (LLMs) and other advanced AI systems. Bias, inadvertent breaches of confidentiality and ‘hallucinations’ – in which the model generates inaccurate or misleading output – are common concerns. Barristers should use these tools with caution and remain vigilant to the possible risks and ramifications. They should also ensure that legally privileged content is never entered into such tools. These recommendations align directly with the ethical duties in Core Duty 7 (competence) and Core Duty 6 (confidentiality) of the Bar Standards Board (BSB) Handbook.
One of the most significant provisions of the EU AI Act for barristers is Article 50, discussed above, which requires those deploying generative AI systems to disclose when content has been created or influenced by artificial intelligence. For barristers, this means explicitly informing courts and clients when AI has assisted in drafting submissions or legal arguments in cases linked to EU jurisdictions. Failure to make such disclosures could breach Core Duty 5, which requires barristers not to behave in a way likely to diminish the trust and confidence which the public places in the profession. Presenting unverified assertions generated by AI as factual evidence could also violate procedural rules such as CPR PD 57AC.
To comply with this obligation, the author suggests that barristers include an ‘AI Use Statement’ with any pleadings or submissions associated with EU jurisdictions. An ‘AI Use Statement’ records the tools employed and explains how their outputs were validated against credible sources for accuracy. Barristers should also state whether any ‘high-risk’ systems used – such as predictive analytics in litigation – are expressly listed in Annex III of the EU AI Act or classified as high-risk by inference from its general criteria. This process supports adherence to professional and ethical standards and compliance with the legal requirements of the EU AI Act.
Under Article 10, the EU AI Act establishes demanding data governance criteria for high-risk AI systems. Barristers using AI tools should ensure that client data entered into these systems is secured to the standards established by the EU AI Act and the UK General Data Protection Regulation. Entering client data into generative AI systems without adequate protection could, for example, lead to unauthorised access to or disclosure of confidential information. The Bar Council’s guidance highlights this issue, reminding barristers of the duty owed to clients under Core Duty 6 to uphold client confidentiality. This closely parallels Article 10 of the EU AI Act, which requires measures to ensure the integrity and security of the data used by high-risk AI systems. In practice, barristers should use trusted vendors who can demonstrate proper data protection, encryption and appropriate certifications, such as ISO 27001.
Article 10(5) permits the processing of special categories of personal data to the extent strictly necessary for detecting and correcting biases in high-risk AI systems. While this provision addresses risks of prejudice or discrimination inherent in training datasets or system designs, it applies only under strict safeguards; it is not a blanket requirement for all high-risk AI systems. For barristers relying on predictive analytics tools or other forms of legal technology, the need to guard against bias intersects directly with Core Duty 2 (acting in clients’ best interests) and Core Duty 3 (honesty and integrity). Barristers must critically assess whether the AI tools in their practice perpetuate biases that could negatively affect case strategy or judicial outcomes. This concern is particularly acute in areas such as immigration law or criminal sentencing, where systemic biases may exist within the historical datasets used to train these systems.
Barristers representing clients in the EU must be able to advise them on their obligations under the EU AI Act, which establishes a framework for the use of AI. Barristers should consider reviewing their clients’ technology to determine its risk categorisation under the Act (prohibited AI, high-risk AI or low-risk applications). By reviewing AI deployment with clients, barristers can advise them adequately on their compliance obligations. Additionally, barristers should validate outputs from generative AI tools against reputable sources before using them in submissions or client advice – for example, by comparing results against trusted legal databases such as LexisNexis or Westlaw – and document these steps clearly within case files.
Barristers should revise engagement agreements to include provisions addressing data security, confidentiality obligations and liability for compliance costs. These revisions should explicitly describe the use of AI systems, ensuring transparency about potential risks, responsibilities and regulatory requirements under the EU AI Act. Such agreements should also provide for indemnification in cases of regulatory breaches or data security incidents arising from AI systems. Barristers advising technology clients should also be aware of the regulatory ‘sandboxes’ established under the Act, which allow new applications to be tested safely before deployment.
Although no equivalent UK framework has yet been created, barristers can align their practice with recent Bar Council guidance by documenting how AI supported case preparation, thereby demonstrating accountability.
The EU AI Act marks a worldwide shift towards regulated AI use across various sectors – including legal services – and barristers must act accordingly. Monitoring developments within UK regulatory frameworks (anticipated post-2025 election) will be essential.
Barristers can affirm their commitment to lifelong learning by engaging with initiatives such as the Bar Council’s guidance on generative AI tools and other professional development programmes, while supporting ethical best practice in integrating AI into legal work.
The EU AI Act represents a transformative shift in the regulation of AI, introducing new legal and ethical dimensions to cross-border legal practice. It offers barristers working on UK/EU legal matters an opportunity to promote ethical and responsible AI use within the profession while ensuring compliance with evolving regulations.
Barristers can enhance their advocacy and safeguard client interests by actively addressing algorithmic bias, data security and transparency when using and evaluating AI tools. The Act challenges legal practitioners who interact with the EU to integrate innovative tools into their work responsibly, ensuring that AI complements – rather than compromises – their professional integrity and commitment to justice.