How the EU is Mastering the Challenge of Trustworthy Artificial Intelligence

From treating chronic diseases and reducing fatality rates in traffic accidents, to fighting climate change and anticipating cybersecurity threats, Artificial Intelligence (AI) is no longer considered a futuristic construct – it is already a reality and is helping humanity solve pressing global challenges. It significantly improves people's lives, helps with day-to-day tasks and benefits society and the economy. Nevertheless, AI applications should not only be consistent with the law, but also adhere to ethical principles. The ethical dimension of AI is not a luxury feature or an add-on: it needs to be an integral part of AI development.

The European Commission recognises AI as one of the 21st century’s most strategic technologies and is therefore increasing its annual investment in AI by 70% as part of the research and innovation programme Horizon 2020, reaching €1.5 billion for the period 2018-2020. The Commission aims to foster cross-border cooperation and mobilise all players to increase public and private investments to at least €20 billion annually over the next decade.

AI is just like any other tool – it is here to help people. It is this perspective that underpins the EU's approach and its commitment to putting AI at the service of citizens and the economy. To make the most of the opportunities that AI offers and to address the challenges it raises, the Commission published a European strategy in April 2018[1]. The strategy places people at the centre of the development of AI, ensuring a human-centric approach: AI is not an end in itself, but a tool that has to serve people's well-being.

Europe's approach to Artificial Intelligence shows how economic competitiveness and societal trust must start from the same fundamental values and mutually reinforce each other. The EU has a strong regulatory framework that will set the global standard for human-centric and trustworthy AI. To this end, the Commission has set up a high-level expert group on AI[2] representing a wide range of stakeholders (Member States, industry, societal actors and citizens) and has tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy. According to the guidelines, three components are necessary to achieve 'trustworthy AI': (1) it should comply with the law, (2) it should fulfil ethical principles and (3) it should be safe and technically robust since, even with good intentions, AI systems can cause unintentional harm.

By stepping up investment at the European level, preparing a framework for future actions, and supporting the efforts of Member States to prepare for the changes and build trust in human-centric AI, Europe and its citizens should be able to shift perspective 'from fear to opportunity'. They will also be equipped to take advantage of AI and use it to co-create a society full of opportunities.

  • EC Investment in AI (2014-2020)[3] – The European Commission has already invested significant amounts in AI, cognitive systems, robotics, big data and future and emerging technologies to help Europe maintain its competitive edge.
  • AI-related Areas – Around €2.6 billion over the duration of Horizon 2020 on AI-related areas (including robotics, big data, health, transport, and future and emerging technologies).
  • Robotics – €700 million under Horizon 2020 and €2.1 billion from private investment in one of the world's biggest civilian research programmes for smart robots.
  • Skills – €27 billion through the European Structural and Investment Funds on skills development, of which the European Social Fund invests €2.3 billion specifically in digital skills.

[1] https://ec.europa.eu/transparency/regdoc/rep/1/2018/EN/COM-2018-237-F1-EN-MAIN-PART-1.PDF

[2] https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

[3] http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=51610
