Artificial Intelligence
The AI revolution is underway, enhancing productivity and spurring advancements across healthcare, transportation, finance, energy, and more. Yet it also poses risks, and the global regulatory framework for AI is rapidly evolving. Our expert team is here to guide you through these regulations so that you can maximise AI’s benefits while minimising its risks.
Our Expertise
Our team has strong expertise in AI, combining a deep understanding of the legal framework with comprehensive knowledge of AI technologies. That combination allows us to apply the law to these technologies effectively.
Whether you are an AI provider, deployer, or operator – utilising proprietary or third-party AI solutions, whether for internal use or for integration into customer-facing products – we are here to help.
Effective AI governance
Working with clients around the world, we help organisations identify their AI objectives, define principles that will govern AI deployment, and implement accountable policies and practices to ensure responsible AI use.
Creating AI inventories
AI compliance begins with understanding what AI models and AI systems are in use. We help clients take a practical, functionality-led approach to mapping their AI, before advising on the measures they need to take for AI compliance.
Determining AI roles
AI laws place different responsibilities on providers, deployers and other operators of AI, and these roles can sometimes be difficult to identify, or may be combined. We can help you understand your role(s) and the responsibilities they attract.
Assessing AI risk
European AI law applies a risk-based approach to regulation, distinguishing between prohibited, high-risk and limited-risk AI systems, and between general-purpose AI models with or without "systemic risk". We help clients make and document these assessments.
Managing AI vendor risk
We help clients undertake due diligence when buying in new AI solutions, identifying and mitigating key risks across privacy, intellectual property, safety and security. We also help to negotiate robust but reasonable AI terms with vendors.
Promoting AI literacy
An often overlooked aspect of AI compliance is promoting AI literacy. We help clients create and deliver AI training programmes, presentations, FAQs, and playbooks to educate all stakeholders about the benefits and risks that AI presents.
What is the AI Act?
The AI Act is a new EU law that regulates providers and deployers of AI systems and general-purpose AI models. The specific requirements that apply depend on the role you fulfil under the AI Act, and the risks that your development or use of AI presents.
Why is the AI Act described as "risk-based"?
The AI Act distinguishes between “prohibited” AI practices (which are banned), “high-risk” AI systems, lower-risk AI systems and general-purpose AI. The specific requirements that apply depend on the category of AI system you use, with the majority of the AI Act’s requirements applying to high-risk AI systems and general-purpose AI. Lower-risk AI systems are subject mainly to transparency requirements.
Does the AI Act apply only to businesses in the EU?
No. Like the GDPR, the AI Act has extraterritorial effect. While it will apply to providers or deployers of AI systems that are established in the EU, it can also apply to businesses outside the EU that put AI systems on the market in the EU or that use the output of an AI system in the EU.
What are the consequences of non-compliance?
Businesses that breach the requirements of the AI Act can be subject to fines from local regulators of up to €35m or 7% of annual worldwide turnover, whichever is higher – exceeding even the GDPR’s 4% maximum. In addition, they may face civil liability under the EU’s proposed AI Liability Directive and/or the revised Product Liability Directive.
How do I begin an AI compliance journey?
The first step is to understand what AI systems you have – this involves undertaking a similar exercise to a GDPR data mapping exercise, but designed for AI systems.
Once your AI systems have been identified, they can be categorised by risk, to understand how they will be regulated under laws like the AI Act – and therefore what compliance requirements will apply to them.
Once categorised, undertake a gap assessment to understand what compliance measures already exist for those systems, and how far they fall short of the statutory requirements of the AI Act. Appropriate remedial measures can then be implemented as necessary.
At the same time, think about what your AI mission is – that is, what are your business objectives for using AI – and the principles you will apply to the development and use of AI systems.
Your objectives and principles can then be used, alongside the statutory requirements, to craft policies and practices that ensure compliant AI development and use.
Questions & Answers
Please reach out to our team if your question is not listed here. Our experts are always ready to provide the guidance and support you need.