In this seminar, we will explore the privacy risks associated with model explanations in graph neural networks (GNNs), powerful machine learning models for graph-structured data. While model explanations provide valuable insights and enhance user trust, they can also inadvertently reveal sensitive information about the underlying data. We will discuss the trade-offs between accuracy, interpretability, and privacy, focusing on our proposed method for extracting private graph structure via feature explanations. Through this lens, we will highlight the challenges and opportunities in balancing these competing goals.