Lessons Gleaned from EU's AI Ethics Evaluation Trial

The European Commission's high-level expert group on artificial intelligence (AI) has unveiled an initial assessment list for building trustworthy AI. This list, which may eventually form the basis for a new legal framework for AI in the EU, is undergoing revisions to ensure it offers developers actionable guidance.

In revising the list, EU policymakers aim to include only necessary questions and contextualise them with sectoral case studies. The focus is on providing practical, relevant guidance rather than adding redundant requirements.

One area of contention in the original list was explainability. Private-sector critics have argued that explainability and transparency requirements could impose undue burdens on companies and hinder the development of AI in Europe. In response, the high-level expert group is considering removing all questions relating to explainability, since explainability cannot feasibly be achieved for every AI system.

Instead, the group is advocating for a shift towards algorithmic accountability. This approach emphasises the responsibility of AI developers to ensure their systems are fair, transparent, and accountable, rather than requiring detailed explanations of how the AI makes decisions.

However, the section on transparency in the assessment list remains vague: it does not specify what level or scope of transparency is required. This concerns both industry and NGOs, who argue that clarity is essential to ensure compliance without imposing unnecessary burdens.

The private sector has also criticised the original list for including themes that are not relevant to all sectors or are already covered by existing EU legislation, such as the GDPR. This could cause confusion in product development and delay the adoption of AI technologies.

Initiatives such as DARPA's XAI programme and IBM's AI Explainability 360 toolkit are still nascent research efforts, and full explainability remains a challenge in both feasibility and practicality. For many AI systems, a complete explanation of how an output was produced is simply not attainable.
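As a concrete illustration of what post-hoc explanation can and cannot do, the sketch below uses scikit-learn's permutation importance on a stock dataset. This is an assumed illustrative technique, not a method prescribed by DARPA XAI, AI Explainability 360, or the EU assessment list: it ranks the input features a trained model relies on, approximating the model's behaviour without revealing its internal decision logic.

```python
# A minimal sketch of post-hoc explanation via permutation importance.
# Illustrative only: it measures how much shuffling each feature degrades
# test accuracy, yielding an approximate, global account of model behaviour
# rather than a complete explanation of any individual decision.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the drop in test accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Even this ranking is an approximation: correlated features can share or mask importance, which is precisely why complete explanations of AI outputs are so hard to guarantee.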

In light of this feedback, EU policymakers should heed the advice from industry and other experts working in the field. Any future requirements for AI systems should be clear, effective, and practical to avoid imposing undue burdens on companies and holding back the development of AI in Europe.

In addition, the NGO AlgorithmWatch has criticised the draft EU AI Act for lacking clear requirements on user-friendly complaint mechanisms and transparency. It advocates a binding national AI transparency register to complement the limited information in the EU database on high-risk AI systems, and argues that Germany's Federal Network Agency (BNetzA) would be an ideal supervisory authority for such a register.

As the revisions to the EU's AI assessment list continue, it is clear that a focus on algorithmic accountability and practical guidance will be key to ensuring the successful development and implementation of AI in Europe.

Image credits: Flickr user SMPAGWU.
