Commercial Contracts in the Age of AI: Balancing Innovation and Risk

Richard Riley
Recent years have seen exponential growth in the adoption of artificial intelligence tools and systems across almost every industry. From predictive analytics to generative drafting tools, AI is no longer experimental for many businesses – it is becoming embedded in core commercial operations, and businesses that fail to engage with it risk losing competitive advantage.
While AI presents significant opportunities for innovation, particular care is required when drafting the commercial contracts that govern its supply and use. These contracts form the backbone of business relationships and must balance flexibility and innovation with appropriate legal and commercial protections.
Development of AI
Commercial contracts can be structured to encourage the ongoing improvement of AI tools and systems, enable responsible data sharing, and support collaborative development between parties. In practice, however, many organisations continue to rely on outdated or inappropriate templates that were designed for traditional software or services.
Legacy contract templates often fail to address issues that are specific to AI solutions, such as the use of data for model training, the transparency of algorithmic decision-making, and rapidly evolving regulatory requirements. Tailoring contracts to the specific nature of the AI solution, its intended use cases and its risk profile is therefore essential.
Emerging Risks in AI-Driven Contracts
Whether making small (but important) changes to traditional software contracts or drafting entirely new and bespoke clauses, the key risks and concerns that will need to be addressed include the following:
- Data Privacy and Security – AI systems often process large volumes of personal and/or sensitive data. Contracts should clearly identify the categories of data involved, allocate responsibility for compliance, and set out detailed requirements for data handling, storage and transfer, including compliance with applicable data protection legislation such as the UK GDPR. Consideration should also be given to obligations for data minimisation, encryption, breach notification, and audit rights to ensure ongoing compliance and transparency.
- Intellectual Property (IP) – Key questions arise in relation to what data (personal or otherwise) may be used to train the AI, whether the system operates on a closed- or open-loop basis, and who owns or may exploit AI-generated outputs. Contracts should clearly define who owns the training data, any AI-generated results, and any improvements or derivative works. They should also include appropriate warranties as to the right to use training data and underlying technology, indemnities for intellectual property infringement, and clear provisions dealing with the allocation and treatment of IP rights on termination.
- Bias and Ethical Compliance – AI models can perpetuate biases present in training data, potentially leading to discrimination claims. Contracts should include obligations relating to bias testing, monitoring and reporting, together with appropriate risk allocation and indemnities for claims arising from discriminatory or unlawful outcomes.
- Human Oversight – While AI can automate many processes, human judgment remains important – especially for decisions with legal, financial, or reputational impact or which concern personal data. Contracts should require human review of critical outputs and provide mechanisms for intervention or override where necessary.
- Regulatory Uncertainty – The legal and regulatory framework for AI continues to evolve, often lagging behind technological developments. Contracts should therefore include change in law and change control mechanisms that allow the parties to adapt to new or updated regulatory requirements. This is particularly important for businesses operating across multiple jurisdictions, where regulatory approaches may diverge. Consider including cooperation clauses to ensure both parties remain compliant as the legal landscape shifts.
- Performance and Reliability – AI systems may not always perform as expected and contracts should set clear performance standards, service levels, and remedies for failure to meet them.
- Transparency and Explainability – Opaque AI systems can make it difficult to understand or challenge decisions. Contracts should require suppliers to provide sufficient documentation and explanations of how the AI system works, especially where outputs affect rights or obligations – though this will need to be balanced against the supplier’s trade secrets and commercially sensitive information relating to the AI system.
Practical Drafting Tips
Before finalising any AI-related contract, commercial and legal teams should work closely with their organisation’s technical stakeholders to understand the system’s capabilities, limitations and intended use cases. In practice, commercial pressures to conclude a deal can sometimes result in insufficient scrutiny of whether the contractual framework is genuinely fit for purpose over the life of the relationship.
AI is becoming ever more pervasive in both everyday life and commercial activity. Businesses that embrace AI responsibly, by integrating innovation with carefully drafted contracts and ongoing legal oversight, will be well placed to harness its benefits while managing associated risks.
Contact us
If you would like to discuss how to safeguard your business when developing, procuring or deploying AI solutions, please contact a member of our commercial team.