10/9/2024

AI4POL project develops tools for more efficient regulation

Artificial intelligence for AI policy

How can Europe efficiently achieve its goals in the regulation of artificial intelligence? And how can the EU itself use AI for this purpose? In the AI4POL project, an international research team will investigate whether EU regulations actually support its citizens. Furthermore, the team will develop AI tools and data science methods with which policymakers and regulators can better evaluate the effects of their legislation as well as potential threats posed by technological developments in non-democratic states.

Prof. Gjergji Kasneci investigates whether consumer protection works for AI tools in the financial sector. Image: Astrid Eckert / TUM

European societies largely agree that the use of artificial intelligence and other digital technologies should be in line with human rights, democracy and consumer protection. In recent years, the European Union has passed comprehensive legislation that provides a framework for the use of AI, including the AI Act, the Data Act and the Digital Markets Act. How do these regulations work in practice? Do users actually make use of the features intended to protect them? How does the use of AI in countries that do not share these values threaten the goals of European policy?

An international, interdisciplinary research team will investigate these questions in the AI4POL project. The researchers aim to provide policymakers and public authorities not only with new insights, but also with AI tools and data science methods that will enable them to better monitor, evaluate and thus improve the impact of their legislation and measures.

Technology, social sciences, economics, ethics, and law

"Europe's aim is to be innovative and competitive without compromising its values. This can only be achieved with an effective regulatory framework and well-informed decision makers in politics and public administration," says project leader Prof. Jens Prüfer, Director of the Tilburg Law and Economics Center at Tilburg University. "Artificial intelligence and data science can be very helpful for such monitoring. We aim to support politicians, who, unlike big tech companies, don’t have thousands of highly skilled AI experts on staff."

AI4POL will bring together researchers from the fields of technology development, ethics, law, economics and political sciences. Partners include Tilburg University, the Technical University of Munich (TUM), the University of East Anglia (UEA), Visionary Analytics, Centerdata, the Università degli Studi di Roma Unitelma Sapienza and the TUM Think Tank. The research team will be supported by an advisory board with representatives from regulatory authorities, consumer protection agencies, civil society organizations and companies. The European Union is funding the project with 3 million euros.

How are consumer protection measures perceived?

The research team will focus on consumer protection on the one hand and the financial sector on the other. "We know little about how much people understand the information provided for online contracts, for example, and whether they simply click on 'okay' because the texts are far too long and written in legal jargon," says Gjergji Kasneci, Professor of Responsible Data Science at TUM. "We want to find out how these consumer protection measures are perceived, how companies and policymakers can get useful feedback, and how the impact of regulations, particularly around digital services and AI, can be improved. AI tools, in turn, can play a crucial role by summarizing these complicated texts in a way that's simpler and easier to understand."

Prof. Sean Ennis, Director of the Centre for Competition Policy at UEA, adds: “AI is disrupting the way information is produced and processed. In finance, it has become an integral part of decision-making, for instance, for credit scoring, robo-advisors, or algorithmic trading. AI also plays an instrumental role in regulatory compliance, fraud detection and security. We will study both the opportunities and the risks of AI in such applications.”

European efforts could become worthless if technologies developed in countries with different ethical standards become mainstream. The research team will therefore develop a method to analyze AI research and development in autocratic states and quantify the specific risks this poses for Europe.


Technical University of Munich

Corporate Communications Center
