ELSEC AI
Learn more about the Ethical, Legal, Socio-Economic and Cultural aspects of AI and the European Trustworthy AI Framework
European Trustworthy AI Framework
European AI Strategy
Released in 2018, the strategy defined three main pillars that should guide the future development of AI made in Europe:
(1) to boost AI uptake, fostering collaboration between the public and private sectors
(2) to tackle socio-economic changes, empowering citizens with digital literacy and helping them adapt to workplaces transformed by this new technology
(3) to ensure an appropriate ethical and legal framework
The Coordinated Plan on Artificial Intelligence aims to accelerate investment in AI, implement AI strategies and programmes, and align AI policy to prevent fragmentation within Europe.
The initial Plan defined actions and funding instruments for the uptake and development of AI across sectors. In parallel, Member States were encouraged to develop their own national strategies.
The Coordinated Plan of 2021 aims to turn strategy into action by prompting the EU to:
accelerate investments in AI technologies to drive resilient economic and social recovery, aided by the uptake of new digital solutions
fully and promptly implement AI strategies and programmes to ensure that the EU maximizes the advantages of being an early adopter
align AI policy to remove fragmentation and address global challenges
To achieve this, the updated plan establishes four key sets of policy objectives, supported by concrete actions. It also indicates possible funding mechanisms and establishes a timeline to:
set enabling conditions for AI development and uptake in the EU
make the EU the place where excellence thrives from the lab to market
The European AI strategy and the coordinated plan make clear that trust is a prerequisite to ensure a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. To achieve this, the trustworthiness of AI should be ensured. The values on which our societies are based need to be fully integrated in the way AI develops.
Therefore, there is a need for ethics guidelines that build on the existing regulatory framework and that should be applied by developers, suppliers and users of AI in the internal market, establishing an ethical level playing field across all Member States.
HLEG-AI deliverables
The European Commission appointed an independent group of experts representing industry, academia and civil society to provide advice on its artificial intelligence strategy. This High-Level Expert Group on Artificial Intelligence (HLEG-AI) has developed several documents that have contributed to defining the European Trustworthy AI framework.
Ethics Guidelines for Trustworthy AI
According to the Guidelines, trustworthy AI should be:
(1) lawful - respecting all applicable laws and regulations
(2) ethical - respecting ethical principles and values
(3) robust - both from a technical perspective and taking into account its social environment
The Guidelines put forward a set of 7 key ethical requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements. Brief definitions of the ethical requirements can be found here.
Policy and Investment Recommendations for Trustworthy AI
Building on its first deliverable, the group put forward 33 recommendations to guide trustworthy AI towards sustainability, growth, competitiveness, and inclusion. At the same time, the recommendations will empower, benefit and protect European citizens.
The results of the work of the AI HLEG were presented at the first European AI Assembly in June 2019. Following the Assembly, the European Commission extended the group's mandate for one more year. This extended mandate allowed the group to continue its work and pilot the Ethics Guidelines for Trustworthy AI. The mandate of the AI HLEG ended in July 2020 with the presentation of two more deliverables:
The final Assessment List for Trustworthy AI (ALTAI)
A practical tool that translates the Ethics Guidelines into an accessible and dynamic self-assessment checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements. This new list is available as a prototype web-based tool and in PDF format.
Sectoral Considerations on the Policy and Investment Recommendations
The document explores the possible implementation of the recommendations previously published by the group in three specific areas of application: the Public Sector, Healthcare, and Manufacturing & the Internet of Things.
Policy initiatives
White Paper on Artificial Intelligence: a European approach to excellence and trust
To advance the transition of the Trustworthy AI Framework into regulation, the Commission published, in 2020, the White Paper on Artificial Intelligence: a European approach to excellence and trust (European Commission, 2020), setting out policy options to support the European vision for digital development and to advance the European strategy for AI (European Commission, 2018).