Key Words
GDPR- General Data Protection Regulation.
Indemnity- Security or protection against a loss.
Litigation- Process of taking legal action.
Arbitration- Resolving a dispute outside of court, with a neutral third party making a binding decision.
What’s Happening?
Global AI investment is expected to reach about £2.2 trillion between 2025 and 2029. A large share of this is expected to go towards the construction of data centres, given the enormous computing power and access to data that AI requires. This has resulted in rapid AI development and integration across multiple sectors in different countries, fuelling debate over how AI should be regulated: some countries are adopting a more rigid structure, while others are adopting a more lenient one to allow for greater growth.
The UK is leaning towards the latter, with the government aiming to become a leading economy for AI. It is therefore easing regulations, relaxing planning rules for faster construction, and reforming the grid connection process from 'first come, first served' to one that prioritises important projects. Unlike the EU, which has published the EU AI Act introducing strict rules depending on the risk level of the AI system, the UK is relying on existing laws and regulatory guidance. The UK has, however, published an AI white paper (non-statutory), which sets out a non-binding framework of principles AI systems should aim to meet, ranging from safety and security to contestability and governance.
The Legal Issues
Data protection
AI systems require large datasets to train models. Under the GDPR, organisations must ensure data is collected lawfully and used for a specific purpose; with AI collecting data at such scale, individuals may not understand that their data is being collected. The GDPR also gives individuals the right to erasure, which becomes difficult to honour once the AI has already been trained on that data. Thirdly, the GDPR requires organisations to collect only the data that is necessary, a data-minimisation principle that sits uneasily with how AI models are trained. Finally, data collected in the EU may be stored in the United States, yet the GDPR restricts transfers of data outside the EU unless adequate protection is in place.
Bias
If training datasets contain bias, the system may replicate those biases, creating legal issues under discrimination law.
Liability
If an AI system makes an incorrect decision or causes a breach of contract, it must be decided who should be held accountable.
Why This Matters For Lawyers
AI Regulatory Compliance
Lawyers need to advise companies on how to structure themselves and their products so that they comply with regulations across different jurisdictions. Other roles include advising on risk classifications and transparency obligations as required by the EU AI Act. They will also need to anticipate possible regulatory changes and explain how these could affect their clients.
Data Protection and Privacy
Lawyers can help firms ensure they are collecting and sharing data lawfully, by helping companies design compliant data collection processes and by drafting privacy policies.
Risk Allocation
Many parties are involved in the production and implementation of AI and data centres: developers, software providers, electricity providers, cloud infrastructure companies, and so on. Lawyers need to draft contracts that allocate responsibility if something goes wrong (for example, through indemnity clauses).
Infrastructure and Real Estate
The construction of data centres to support AI requires planning permission, access to large amounts of electricity with a guarantee of consistent supply, and finance agreements.
Dispute Resolution
Lawyers will also represent companies in litigation or arbitration arising from claims such as negligence, breach of contract, or breach of regulation.
