Delfina Sol Martinez Pandiani

My research

My research combines technical expertise in AI development with a critical approach to cultural analytics. With a PhD in Computer Science, I develop and audit multimodal, explainable AI systems to analyze cultural data. I focus on operationalizing abstract social concepts such as identity, toxicity, moderation, and surveillance, while addressing biases in AI models—particularly in computer vision—and developing solutions to ensure these systems reflect diverse societal values.

Building on Donna Haraway's concept of "situated knowledges," I investigate how AI systems are taught "truth" and challenge traditional notions of objectivity, aiming to design AI models that are both socially and ethically responsible. In projects such as toxic meme analysis and content moderation, I develop AI models to detect ideologies, values, and harmful content, while critiquing the moral frameworks within which these systems operate.
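For illustration, a minimal sketch of what such a multimodal detector can look like appears below: CLIP image and text embeddings of a meme are fused and scored by a small classifier head. Everything here is an assumption for illustration (the model checkpoint, the placeholder image, the untrained classifier); it is a sketch of the general technique, not the project's actual pipeline.

```python
# Hypothetical multimodal "harmful meme" scorer: fuse CLIP image and text
# embeddings, then score with a small linear head. Untrained and
# illustrative only.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# A meme is an (image, overlaid text) pair; a blank image stands in here.
image = Image.new("RGB", (224, 224))
caption = "example meme caption"

inputs = processor(text=[caption], images=[image],
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Concatenate the two modality embeddings into one joint representation.
joint = torch.cat([outputs.image_embeds, outputs.text_embeds], dim=-1)

# Hypothetical head: in practice it would be fine-tuned on annotated memes
# (harmful vs. benign), and its scores audited for bias across groups.
classifier = nn.Linear(joint.shape[-1], 2)
probs = torch.softmax(classifier(joint), dim=-1)
print(f"P(harmful) = {probs[0, 1].item():.3f}")  # meaningless until trained
```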

Currently, I am working on projects that balance the technical and ethical dimensions of AI, including generative AI models, toxic symbolism detection in memes, and digital literacy, with particular attention to vulnerable subjects such as children featured in family vlogs. My work sits at the intersection of computer science, digital humanities, and cultural studies, ensuring that AI systems are both effective and ethically grounded.

IAS Fellowship

During the IAS fellowship, I will develop an empirical ethics framework focused on the intersection of AI-driven analysis of large-scale (monetized) personal data and the ethical challenges of working with vulnerable data subjects. The central question guiding this project is: How can AI-driven research involving vulnerable data subjects, such as child influencers, be conducted ethically and responsibly?

This project aims to create a dynamic and adaptable protocol for AI research involving vulnerable subjects, with a particular case study on "childfluencers" in European family vlogs. By identifying the ethical challenges specific to this context, we will develop tools that help researchers responsibly navigate the complexities of empirically analyzing publicly shared, monetized data. The goal is to close the gap between existing ethical guidelines and the empirical realities of AI-driven research.

A key part of this work involves using AI to identify patterns of overexposure in the content shared by childfluencers. This will provide measurable evidence of the risks associated with children's online presence, such as exposure to harmful audiences and the monetization of their personal data. The research will contribute to broader discussions on child protection in digital spaces and help inform policy recommendations to protect child influencers.
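To make "measurable evidence" concrete, the sketch below aggregates a few simple exposure indicators from per-video annotations. The VideoRecord schema and every field name are invented for illustration; the project's actual metrics and instrumentation are not described in this text.

```python
# Illustrative overexposure indicators for one channel, computed from
# hypothetical per-video annotations (schema invented for this sketch).
from dataclasses import dataclass

@dataclass
class VideoRecord:
    video_id: str
    child_on_screen_seconds: float  # e.g., from a person/face detector
    duration_seconds: float
    views: int
    sponsored: bool  # monetized-content flag

def overexposure_indicators(videos: list[VideoRecord]) -> dict:
    """Aggregate simple, auditable exposure metrics across a channel."""
    total = sum(v.duration_seconds for v in videos)
    child_time = sum(v.child_on_screen_seconds for v in videos)
    with_child = [v for v in videos if v.child_on_screen_seconds > 0]
    return {
        "share_of_screen_time": child_time / total if total else 0.0,
        "videos_featuring_child": len(with_child),
        "sponsored_videos_with_child": sum(v.sponsored for v in with_child),
        "reach_of_child_content": sum(v.views for v in with_child),
    }

sample = [
    VideoRecord("a1", 310.0, 600.0, 120_000, True),
    VideoRecord("a2", 0.0, 480.0, 15_000, False),
]
print(overexposure_indicators(sample))
```

Indicators like these are deliberately simple so that the evidence they produce remains auditable and explainable, in line with the project's emphasis on responsible, transparent AI.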

At the same time, the project will establish an empirical ethics framework to address the potential harms and ethical dilemmas associated with using AI in this context. These include concerns about consent, privacy violations, emotional and social impacts, and the normalization of surveillance. By creating this framework, I aim to provide a foundation for responsible AI research that integrates both the technical and ethical dimensions of working with vulnerable data subjects in an increasingly data-driven world.