Artificial intelligence (AI), particularly generative AI and machine learning systems, is rapidly transforming commercial operations across industries. As AI is increasingly embedded into core business functions and used to deliver services, in-house legal teams must review and update existing commercial contracts to reflect emerging legal, regulatory, and operational risks.
This article outlines five key contractual clauses that in-house legal teams should prioritise when negotiating or updating AI-related contracts:
- Intellectual property
- Data usage rights
- Audit rights and transparency
- Liability allocation
- Compliance with emerging regulation.
1. Intellectual Property (IP)
There are two key aspects to consider:
IP infringement: Many AI models are trained on large datasets, including content scraped from the internet that may be protected by a range of IP rights, mainly copyright but also trademarks, trade secrets, and database rights. Under current UK and EU law, the legal status of training AI models on such data without authorisation remains unresolved, and case law is only beginning to emerge. To mitigate the legal and commercial risks associated with IP infringement, contracts should include:
- broad indemnities covering (1) use of the AI system and its outputs; (2) any third-party IP infringement claims arising from the use of outputs or training data; and (3) any breaches of data protection rights or confidentiality.
- warranties confirming that: (1) training data was lawfully obtained; (2) the supplier has the relevant rights or licences to use the training data; (3) the AI system has been designed and tested with the aim of producing accurate outputs and mitigating material bias to the extent reasonably possible; and (4) the supplier shall conduct reasonable ongoing testing to assess the accuracy, safety and suitability of the AI system for its intended use.
IP ownership: UK and EU copyright laws currently do not recognise AI-generated works as protectable unless a human can be identified as the author. Therefore, IP ownership terms for AI outputs must be clearly defined by contract, as default legal protection is uncertain. Contracts should expressly cover:
- Ownership of AI-generated outputs (even if not copyrightable)
- Assignment of any resulting IP rights (e.g., software enhancements)
- Licensing rights if ownership is retained by the supplier.
2. Data usage rights
AI systems rely on large datasets for training and performance improvement, and these datasets may include personal or confidential business information. This creates risks of data leakage, unauthorised use and breaches of data protection laws. Contracts should include:
- A clear obligation that data is processed strictly in accordance with the agreed purpose and data protection laws, such as the UK GDPR or EU GDPR.
- A prohibition on using any client-supplied personal or confidential data to train, fine-tune, or improve AI models (whether proprietary or third-party), unless anonymised and with specific prior written consent.
- A requirement for the supplier to maintain adequate technical and organisational measures to prevent data leakage or unauthorised reidentification.
3. Audit rights and transparency
Under the EU AI Act, high-risk AI systems must meet transparency and explainability obligations. Even where systems fall outside this scope, data protection law (Articles 5(1)(a) and 13–15 of the EU/UK GDPR) requires transparency whenever personal data is involved. In-house legal teams should ensure:
- Contracts include rights to audit, inspect, or request documentation demonstrating AI performance, testing, and compliance with regulatory duties.
- The supplier is obliged to provide traceability and documentation of training data sources, system logic, and decision-making processes where outputs affect legal or contractual decisions.
4. Liability allocation
AI systems may produce misleading, inaccurate or biased outputs, or generate entirely fabricated content (so-called “hallucinations”) that appears factually plausible. While “hallucination” is a commonly used industry term, it has no established legal definition, and reliance on such outputs may create legal risk depending on the context. The question of who is liable (the developer, supplier, deployer or user) remains unsettled and will depend on the specific contractual, technical, and regulatory context.
Liability for AI-related harm is likely to be governed by a combination of general contract law, national tort law, and evolving product liability frameworks. Contracts must allocate:
- Specific responsibilities for AI outputs, especially where outputs are used for decision-making (e.g. HR, credit, insurance).
- Exclusions and caps on liability that do not unlawfully exclude liability for death or personal injury caused by negligence, fraud, or wilful misconduct.
- Requirements to maintain professional indemnity or cyber insurance covering AI-related risks and third-party claims.
5. Compliance with emerging regulation
The EU AI Act, which entered into force on 1 August 2024, imposes obligations on AI system providers, deployers, and users, particularly for high-risk use cases. The UK is developing a principles-based, sector-led regulatory framework. As the regulatory landscape evolves, in-house legal teams should ensure:
- Compliance clauses explicitly reference the EU AI Act, UK AI policy principles or sector guidance, and any future amendments to data protection or product liability laws.
- A commitment to maintain ongoing compliance with applicable cross-border regulatory standards, especially where data flows or AI services are international.
- A mechanism to review and update contract terms as new regulation enters into force.
Comment
Contractual silence on AI risk allocation leaves businesses exposed.
While the five areas outlined here are key, each AI use case will raise distinct legal considerations. Where AI outputs affect individuals’ rights or regulatory obligations, contractual terms must anticipate liability, data governance, and evolving legal frameworks.