Paula Maria Helm

My research

Paula Helm is an Assistant Professor specializing in the fields of Critical AI Studies and Empirical Ethics. Originally trained in Anthropology and Peace and Conflict Research, her work today is situated at the convergence of STS, Media Studies and Empirical Technology Ethics. She is the scientific leader of the Empirical Ethics Research Group at the Amsterdam Institute for Advanced Studies and the coordinator of a new Media Studies MA track on Cultural Data & AI, combining critical thinking with data science.

Paula supervises various interdisciplinary projects and PhD theses at the intersection of AI development, lab ethnography and empirical ethics research, including in the Invisible Languages Project and the Hava Lab. The larger mission of Paula's work is to move AI ethics from the PR level to the engineering, infrastructural and development levels.

IAS Fellowship

The rise of child influencers ("childfluencers") in family vlogs has turned children into highly visible and commodified data subjects. Given the vast volume of online content, researchers increasingly rely on AI to analyze and quantify the extent of children's online presence, producing valuable quantitative data on their screen time and exposure. Such insights provide an empirical foundation for morally charged claims about exploitation, lending substance to often highly emotional debates.

Additionally, this data can serve as a basis for deeper qualitative analyses that explore the causal factors behind observed correlations. For example, AI may reveal that scenes featuring children in minimal clothing tend to attract higher view counts, prompting critical questions about the reasons for this trend and the real-world risks associated with it. However, reliance on invasive AI tools such as facial recognition introduces new ethical tensions, including risks of data misuse, algorithmic opacity and normalized surveillance, even when the analysis is intended for the public good and for making a case to protect child wellbeing.

This project addresses these issues within the emerging field of empirical AI ethics, aiming to develop an ethical framework to guide the use of AI methods in analyzing vulnerable data subjects such as child influencers. In exploring the ethical complexities of AI in this context, the project contributes to broader discussions on balancing the public interest with the protection of vulnerable groups and individuals.