Navigating the Privacy Maze: Examining the Dark Side of Model Explanations


Abstract

In this talk, we will explore the privacy risks associated with model explanations in graph neural networks (GNNs), powerful machine learning models for structured data. While model explanations provide valuable insights and enhance user trust, they also risk inadvertently revealing sensitive information about the underlying data. We will discuss the trade-offs between accuracy, interpretability, and privacy, focusing on our proposed method for extracting private graphs from feature explanations. By examining these trade-offs, we will highlight the challenges and opportunities in balancing them.
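To give a flavour of the kind of attack the abstract alludes to, here is a hedged toy sketch, not the talk's actual algorithm: because message passing correlates neighbouring nodes' representations, an attacker may guess edges of a private graph by thresholding the pairwise similarity of released per-node feature explanations. The data, the `reconstruct_edges` helper, and the similarity threshold are all illustrative assumptions.

```python
import math
import random

# Toy stand-in for per-node feature explanations (importance scores)
# that an explainer might release alongside a GNN's predictions.
# Purely synthetic data for illustration.
random.seed(0)
n_nodes, n_feats = 6, 8
explanations = [[random.random() for _ in range(n_feats)]
                for _ in range(n_nodes)]

def cosine(u, v):
    # Cosine similarity between two explanation vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reconstruct_edges(expl, threshold=0.85):
    """Guess candidate edges of the private graph by thresholding
    pairwise similarity of explanation vectors: in a GNN, connected
    nodes tend to receive correlated explanations, which an
    attacker can exploit."""
    edges = set()
    for i in range(len(expl)):
        for j in range(i + 1, len(expl)):
            if cosine(expl[i], expl[j]) >= threshold:
                edges.add((i, j))
    return edges

candidate_edges = reconstruct_edges(explanations)
print(candidate_edges)
```

On real explanations the threshold would be tuned (or replaced by a learned link predictor), but even this crude heuristic illustrates why releasing explanations can leak graph structure.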

Date
Aug 9, 2023 1:00 PM
Event
CISPA, Hannover
Location
CISPA, Hannover
Olatunji Iyiola Emmanuel (李白)
Postdoctoral Researcher

Emmanuel’s research interests are in the privacy of ML models, interpretability, and fairness.