Lawyers are embracing artificial intelligence (AI) at an unprecedented rate, drawn by its promise of efficiency in tasks such as contract drafting and legal research. This enthusiasm, however, has given rise to a dangerous trend known as ‘shadow AI’: the use of personal or unapproved AI tools for work tasks without oversight. According to Axiom’s 2024 View from the Inside Report, 83% of in-house counsel use AI tools not provided by their organisations, and 47% do so without any governance policies in place. Stanford University’s 2025 AI Index Report records a 56% rise in AI-related incidents globally, with data leaks a primary concern. In the UK, Kiteworks research found that 47% of organisations admit they cannot track sensitive data exchanges involving AI, amplifying the risk of breaches. Alarmingly, one-third of European organisations estimate that 6% to 15% of their sensitive data could be exposed through AI interactions, yet only 22% have implemented technical controls to block unauthorised AI tool access.

All this comes at a time when the regulatory landscape continues to shift. The UK General Data Protection Regulation (UK GDPR), aligned with the EU’s GDPR, imposes stringent obligations on data processing, storage and cross-border transfers, with fines of up to £17.5 million or 4% of global annual turnover, whichever is higher, for violations. The UK’s forthcoming Cyber Security and Resilience Bill, building on the EU’s Network and Information Systems (NIS2) Directive and work carried out by the previous government, signals increased scrutiny of AI governance and heralds further regulation.

Legal and compliance risks

The legal and compliance risks of ungoverned AI use are profound. Data protection violations top the list. The UK GDPR requires organisations to establish a lawful basis for processing personal data, adhere to data minimisation principles and embed data protection by design and by default. Yet when lawyers upload client data to consumer AI tools such as ChatGPT or Claude, they relinquish control over that information, which may be retained by the provider or used to improve its models.

Confidentiality and privilege concerns are equally grave. Legal professional privilege, a bedrock of legal practice, can be waived when communications are shared with third-party AI providers. Trade secrets, merger strategies and intellectual property face similar risks, as AI platforms may inadvertently expose proprietary information through model outputs or data breaches, putting the organisation in breach of confidentiality agreements.

Building a compliant AI framework

To mitigate these risks, organisations must establish a robust AI governance framework. Comprehensive AI usage policies should outline acceptable tools, data handling protocols and consequences for non-compliance, addressing confidentiality, privilege and data security. Regular risk assessments are vital to identify vulnerabilities, while a formal approval process ensures only secure, compliant AI platforms are used.
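
To make the approval process concrete, it can be backed by a machine-readable register of vetted tools that internal workflows consult before any prompt leaves the organisation. The sketch below is purely illustrative: the tool identifiers, classification tiers and the `is_use_permitted` helper are assumptions for the example, not a reference to any particular product or firm’s policy.

```python
# A minimal sketch of an AI tool allowlist check, assuming a simple
# in-house registry. Tool names and tiers are illustrative assumptions.

APPROVED_AI_TOOLS = {
    # tool_id: (display name, highest data classification permitted)
    "enterprise-llm": ("Enterprise LLM (private tenant)", "confidential"),
    "contract-review": ("Contract review assistant", "internal"),
}

# Classification tiers ordered from least to most sensitive.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "privileged"]


def is_use_permitted(tool_id: str, data_classification: str) -> bool:
    """Allow a request only if the tool is approved and cleared for this tier."""
    if tool_id not in APPROVED_AI_TOOLS:
        return False  # unapproved ('shadow') tool: block by default
    _, max_tier = APPROVED_AI_TOOLS[tool_id]
    return (CLASSIFICATION_ORDER.index(data_classification)
            <= CLASSIFICATION_ORDER.index(max_tier))


if __name__ == "__main__":
    print(is_use_permitted("enterprise-llm", "confidential"))  # True
    print(is_use_permitted("chatgpt-consumer", "internal"))    # False: not approved
    print(is_use_permitted("contract-review", "privileged"))   # False: tier too high
```

Blocking by default, rather than maintaining a blocklist of known consumer tools, is the safer design choice: new shadow AI services appear faster than any denylist can be updated.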

Technical controls are critical. Data classification systems should identify sensitive information before it is processed by AI tools. Access controls, such as role-based permissions and monitoring, can prevent unauthorised use of consumer AI platforms. An approved list of enterprise-grade AI tools, designed with legal and compliance requirements in mind, ensures efficiency without sacrificing security. These tools must integrate with existing cybersecurity infrastructure and incorporate data loss prevention measures to protect sensitive information.
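
By way of illustration only, the sketch below shows the kind of pre-submission check a data loss prevention layer might perform before text reaches an approved AI tool. The patterns are deliberately simple assumptions for the example; a production DLP engine would combine keyword dictionaries, document fingerprinting and trained classifiers rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real DLP detection is far richer than this.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK NI number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "case reference": re.compile(r"\b(?:claim|case)\s+no\.?\s*\S+", re.IGNORECASE),
}


def scan_before_upload(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


def gate_ai_request(text: str) -> str:
    """Block the prompt if anything sensitive is detected; otherwise pass it on."""
    findings = scan_before_upload(text)
    if findings:
        raise PermissionError(f"Blocked: possible {', '.join(findings)} in prompt")
    return text  # safe to forward to an approved AI tool


if __name__ == "__main__":
    try:
        gate_ai_request("Summarise the email from jane.doe@example.com please")
    except PermissionError as err:
        print(err)  # Blocked: possible email address in prompt
```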

Training and awareness underpin effective governance. Mandatory training for all lawyers and staff should cover the technical and legal risks of AI, including GDPR obligations and professional regulatory requirements.

Time to act

The urgency is undeniable. Organisations must act now to audit AI usage, implement controls and educate their people. By balancing innovation with risk management, lawyers can protect sensitive data, uphold client trust and navigate a complex regulatory landscape. The legal profession is built on trust and diligence. In the AI era, those principles demand proactive governance to ensure technology serves as a tool for progress, not a source of peril.


References
Axiom, View from the Inside Report, 2024
Stanford University, AI Index Report, 2025
Kiteworks, AI Data Security and Compliance Reality Report
See also: Bar Council, guidance on generative AI for the Bar, 2024