Model explanations offer valuable insights into the reasoning behind a model’s predictions and help build users’ trust. However, they also carry the risk of inadvertently revealing sensitive information. In this talk, we will explore the privacy risks associated with model explanations in graph neural networks (GNNs), which are powerful machine learning models for graph-structured data. We will discuss the trade-offs between model accuracy, interpretability, and privacy, focusing on our proposed attacks that extract private graphs from feature explanations. In addition, we will examine how different classes of GNN explanation methods leak varying degrees of information that can be used to reconstruct the private graph. By examining these trade-offs, we will highlight the challenges and opportunities in balancing accuracy, interpretability, and privacy.
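
To give a flavor of why feature explanations can leak graph structure, the sketch below illustrates one simple reconstruction heuristic: because GNNs aggregate information over neighbors, explanation vectors of connected nodes tend to be correlated, so thresholding their pairwise similarity can recover edges. This is only a minimal illustrative toy, not the actual attack presented in the talk; the function name, threshold, and cosine-similarity choice are assumptions for exposition.

```python
import numpy as np

def reconstruct_edges_from_explanations(expl, threshold=0.9):
    """Toy sketch: guess an edge between two nodes when their per-node
    feature-explanation vectors are highly similar.

    expl: (n_nodes, n_features) array of feature-importance scores, e.g.
          produced by a gradient- or perturbation-based GNN explainer.
    Returns a symmetric boolean matrix of predicted edges.
    """
    # Normalize each explanation vector so the dot product is cosine similarity.
    norms = np.linalg.norm(expl, axis=1, keepdims=True)
    normalized = expl / np.clip(norms, 1e-12, None)
    similarity = normalized @ normalized.T      # pairwise cosine similarity
    np.fill_diagonal(similarity, 0.0)           # ignore self-loops
    return similarity >= threshold              # thresholded edge guesses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expl = rng.random((5, 8))
    # Make nodes 0 and 1 have near-identical explanations, mimicking neighbors.
    expl[1] = expl[0] + 0.01 * rng.random(8)
    print(reconstruct_edges_from_explanations(expl, threshold=0.95))
```

In practice, the attacks discussed in the talk are more involved, but even this heuristic hints at how explanation similarity correlates with graph connectivity and thus with privacy leakage.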