Understanding Dementia Speech Alignment with Diffusion-Based Image Generation
Mansi, Anastasios Lepipas, Soteris Demetriou, Dominika C Woszczyk, Yiying Guan
INTERSPEECH 2025 (Oral)
We investigate attribute alignment signals between text and images in diffusion-based generation, in the context of dementia speech. Our work explores how generative models can inadvertently reveal sensitive cognitive information through text-to-image generation, with implications for privacy and healthcare applications. We develop novel methods to detect and analyze these alignment patterns, contributing to a better understanding of information leakage in generative AI systems.
Leaky Diffusion: Attribute Leakage in Text-Guided Image Generation
Anastasios Lepipas, Mansi, Marios Charalambides, Jiani Liu, Yiying Guan, Dominika C Woszczyk, Thanh Hai Le, Soteris Demetriou
PoPETS 2025
We identify and analyze attribute leakage paths in text-guided diffusion models, demonstrating how sensitive information can inadvertently leak through the generative process. Our research reveals novel attack vectors for authorship identification using text-to-image diffusion models, highlighting significant privacy concerns in current generative AI systems. We propose comprehensive analysis frameworks and mitigation strategies for these emerging security vulnerabilities.
AmalREC: A Dataset for Relation Extraction and Classification Leveraging Amalgamation of Large Language Models
Mansi, Pranshu Pandya, Mahek Bhavesh Vora, Soumya Bharadwaj, Ashish Anand
Preprint 2024
We present AmalREC, a comprehensive relation extraction dataset created by combining LLM-based approaches with template-based methods. Our dataset leverages a 6-level relation hierarchy to generate diverse and high-quality relation extraction examples. We provide detailed analysis of bias and noise across different relation buckets, comparing the effectiveness of various generation strategies and their impact on downstream performance.
On the Impact of Sparsification on Quantitative Argumentative Explanations in Neural Networks
Daniel Peacock, Mansi, Nico Potyka, Francesca Toni, Xiang Yin
3rd International Workshop on Argumentation for eXplainable AI
We investigate how sparsification techniques affect the quality and interpretability of quantitative argumentative explanations in neural networks, examining how different sparsification methods influence a network's ability to generate coherent, logical explanations for its predictions. We develop novel metrics to evaluate the argumentative quality of explanations and demonstrate that strategic sparsification can enhance both model efficiency and explanation interpretability without compromising predictive performance.