NewIn: Marcello Ienca
Why we need Neuroethics
Software such as ChatGPT gives computers human-like abilities. At the same time, there have been recent breakthroughs in neurotechnology – most prominently AI-based implants that allow people to speak or walk again. Is there a connection?
Absolutely. We are currently experiencing a scientific revolution. Advances in AI and neuroscience are spurring each other on. They form a virtuous circle – the opposite of a vicious circle.
That sounds very optimistic. Even looking solely at AI, there are many dangers: lack of transparency, discrimination, flawed decisions...
Of course there are dangers. For example, it is conceivable that AI could derive sensitive information, such as a person's sexual orientation, from neurodata. This raises new questions about self-determination. Our brain is no longer a fortress, inaccessible to the digital world. We increasingly have access to the neurological basis of thought processes. As a society, we must consider what we want and where we draw red lines.
And that's why we need ethics?
Yes, but for me, ethics does not just mean avoiding dangers and risks. It’s also about doing good. We do this by developing the technologies that can help the hundreds of millions of people with neurological and psychiatric disorders. Especially if we incorporate ethical considerations from the outset through human-centered technology development.
Isn't it already too late for that?
For neurotechnology: no. With AI, we only acted reactively, responding to technologies that already existed. This time, we are acting proactively. For example, in 2019 the OECD, the Organisation for Economic Co-operation and Development, established guidelines for the responsible development of neurotechnologies. I contributed to these guidelines myself. Currently, the Council of Europe and UNESCO are also developing principles on this topic.
Are the guidelines also relevant for companies? Elon Musk's company Neuralink recently announced that it had placed a brain implant in a patient. In 2023, Apple secured a patent for measuring brain waves with AirPod headphones. That sounds rather worrying to me.
In some cases, the private sector resembles the Wild West. Neuralink is an example of a company that seemingly has no interest in ethics. On the other hand, many other companies have set up ethics councils. Companies were also actively involved in developing the OECD guidelines. We must – and can – ensure that a majority of neurotech companies cultivate a culture of responsible innovation.
Does this mean that new laws are not necessary?
We can pass laws that regulate which products may be sold in Europe. However, compulsion is not the only solution. It is also in companies' own interest to prevent a scandal like Cambridge Analytica in the coming years. That would have a devastating impact on the entire field.
Apart from working on the guidelines, what are you currently researching yourself?
Many things. For example, as part of an international collaborative project, we are working with people with brain implants to incorporate their views (e.g. on ethical aspects) into the development of future implants. Another example: together with colleagues from the computer science department at TUM, we are developing transparent AI for neurotechnology and privacy-preserving neural data processing.
Marcello Ienca:
"I was born in 1988 and grew up in the 1990s – a time when computers were increasingly becoming part of everyday life," says Marcello Ienca. "Even as a child, I found intelligent systems fascinating: artificial intelligence as well as the human brain. That's why I initially studied both fields: philosophy, computer science and psychology with a focus on cognitive science. I then combined the two in my master's and doctorate."
After studying in Rome, Berlin, New York, and Leuven, Marcello Ienca completed his doctorate at the University of Basel in 2018. After further research activities at ETH Zurich and the University of Oxford, he founded the Intelligent Systems Ethics Group at EPFL. He was appointed Professor for Ethics of AI & Neuroscience at TUM in 2023.
Technical University of Munich
Corporate Communications Center
- Paul Hellmich
- paul.hellmich@tum.de
- presse@tum.de
- Team website
Contacts for this article:
Prof. Dr. Marcello Ienca
Technical University of Munich (TUM)
Professorship of Ethics of AI and Neuroscience
Tel.: +49 89 4140 4041
marcello.ienca@tum.de