Examining the Relationship Between Privacy and Interpretability in Graph Machine Learning


Abstract

Model explanations offer valuable insights into the reasoning behind a model’s predictions and build user trust. However, they also risk inadvertently revealing sensitive information. In this talk, we will explore the privacy risks associated with model explanations in graph neural networks (GNNs), powerful machine learning models for graph-structured data. We will discuss the trade-offs between model accuracy, interpretability, and privacy, focusing on our proposed attacks for extracting private graphs from feature explanations. In addition, we will examine how different classes of GNN explanation methods leak varying degrees of information useful for reconstructing the private graph. Through these trade-offs, we will highlight the challenges and opportunities in balancing accuracy, interpretability, and privacy.
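To make the threat model concrete, here is a minimal, illustrative sketch of one way an adversary might exploit feature explanations: nodes that are connected in the private graph often receive similar feature-attribution vectors, so thresholding pairwise explanation similarity yields a candidate adjacency matrix. This is a simplified baseline under assumed inputs, not the specific attack presented in the talk; the function `reconstruct_edges` and the `threshold` parameter are hypothetical names for illustration.

```python
# Illustrative sketch (not the talk's actual attack): reconstruct graph
# edges from per-node feature explanations by thresholding the cosine
# similarity between explanation vectors.
import numpy as np

def reconstruct_edges(explanations: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Predict an adjacency matrix from feature explanations.

    explanations: (num_nodes, num_features) array, where row i is the
        feature-attribution vector the explainer produced for node i.
    threshold: similarity cutoff above which an edge is predicted
        (a hypothetical, tunable parameter).
    """
    # Normalize rows so the dot product gives cosine similarity.
    norms = np.linalg.norm(explanations, axis=1, keepdims=True)
    normalized = explanations / np.clip(norms, 1e-12, None)
    similarity = normalized @ normalized.T
    # Predict an edge wherever explanation similarity exceeds the threshold.
    adjacency = (similarity > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)  # no self-loops
    return adjacency

# Toy example: 4 nodes with 3-dimensional feature explanations.
expl = np.array([[0.9, 0.1, 0.0],
                 [0.8, 0.2, 0.1],
                 [0.0, 0.1, 0.9],
                 [0.1, 0.0, 0.8]])
print(reconstruct_edges(expl))
```

The sketch highlights why explanation methods that assign correlated attributions to neighboring nodes leak more structural information than those that do not.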

Date
Feb 8, 2024, 1:00 PM
Event
TU Delft, Netherlands
Location
TU Delft, Delft, Netherlands
Olatunji Iyiola Emmanuel (李白)
Postdoctoral Researcher

Emmanuel’s research interests include the privacy of ML models, interpretability, and fairness.