An end-to-end framework for privacy risk assessment of AI models
11th August, 2022
Authors: Abigail Goldsteen, Shlomit Shachor, and Natalia Raznikov
We present a first-of-its-kind end-to-end framework for running privacy risk assessments of AI models. It enables assessing models built with multiple ML frameworks, using a variety of low-level privacy attacks and metrics. The tool automatically selects which attacks and metrics to run based on the user's answers to a questionnaire, runs the selected attacks, and summarizes and visualizes the results in an easy-to-consume manner.
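To illustrate the questionnaire-driven selection step, here is a minimal sketch in Python. The attack names, questionnaire fields, and selection logic are all hypothetical, chosen only to show the pattern of mapping answers to applicable attacks; they are not the framework's actual API or catalog.

```python
# Hypothetical sketch of questionnaire-driven attack selection.
# The Answers fields and the attack catalog below are illustrative,
# not the framework's actual interface.

from dataclasses import dataclass


@dataclass
class Answers:
    has_training_data: bool    # does the assessor hold the model's training set?
    black_box_only: bool       # is only query access to the model available?
    outputs_confidences: bool  # does the model expose confidence scores?


# Catalog mapping each low-level attack to the preconditions it needs.
ATTACK_REQUIREMENTS = {
    "membership_inference_black_box": lambda a: a.outputs_confidences,
    "membership_inference_rule_based": lambda a: a.has_training_data,
    "attribute_inference": lambda a: a.has_training_data and a.outputs_confidences,
    "model_inversion": lambda a: not a.black_box_only,
}


def select_attacks(answers: Answers) -> list:
    """Return the attacks whose preconditions the answers satisfy."""
    return [name for name, applies in ATTACK_REQUIREMENTS.items() if applies(answers)]


if __name__ == "__main__":
    answers = Answers(has_training_data=True, black_box_only=True,
                      outputs_confidences=True)
    print(select_attacks(answers))
```

With black-box-only access, the white-box model inversion attack is filtered out while the three attacks whose preconditions hold are queued to run; the real framework then executes the selected attacks and aggregates their results.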
The work was presented at the 15th ACM International Conference on Systems and Storage (SYSTOR ’22). Association for Computing Machinery, New York, NY, USA.
Publication: https://dl.acm.org/doi/abs/10.1145/3534056.3534998