This post summarizes the state of the art (SotA) in the privacy and security of machine learning, based on my admittedly naive understanding. Corrections are highly appreciated.
**USENIX 2021**
Tracks:
- Machine Learning: Backdoor and Poisoning
- Machine Learning: Adversarial Examples and Model Extraction
- Adversarial Machine Learning: Defenses
- Machine Learning: Privacy Issues
Systematic Evaluation of Privacy Risks of Machine Learning Models
Keywords: Bayes’ theorem, class-dependent threshold
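The core idea, as I read it, is that a single global confidence threshold for membership inference is too coarse because classes differ in difficulty, so per-class thresholds (motivated by Bayes' theorem) are calibrated on shadow models. Below is a minimal sketch of that class-dependent-threshold step, assuming precomputed shadow-model confidences; the function names and the brute-force threshold sweep are my own illustration, not the paper's code.

```python
import numpy as np

def class_dependent_thresholds(shadow_conf, shadow_labels, shadow_member, n_classes):
    """Pick one confidence threshold per class that best separates shadow
    members from shadow non-members (illustrative sketch, not the paper's code)."""
    thresholds = np.zeros(n_classes)
    for c in range(n_classes):
        idx = shadow_labels == c
        conf, member = shadow_conf[idx], shadow_member[idx]
        # sweep candidate thresholds, keep the one with the highest accuracy
        candidates = np.unique(conf)
        accs = [np.mean((conf >= t) == member) for t in candidates]
        thresholds[c] = candidates[int(np.argmax(accs))]
    return thresholds

def infer_membership(target_conf, target_labels, thresholds):
    """Predict 'member' when the target model's confidence on the true class
    exceeds that class's threshold."""
    return target_conf >= thresholds[target_labels]
```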
Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations
Keywords: constrained domain, DNN, network traffic, Tor
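My understanding is that the perturbation is "blind" in the sense of being input-agnostic: one perturbation is trained offline and then injected into live traffic, subject to domain constraints (e.g., packets can only be delayed, never sped up). A hedged PyTorch sketch of that idea follows; the constraint set, feature representation, and hyperparameters are chosen purely for illustration.

```python
import torch

def train_blind_perturbation(model, traces, labels, epochs=100, eps=0.1, lr=0.01):
    """Learn a single input-agnostic ('blind') perturbation that degrades a
    traffic classifier. Constraint here: only non-negative, bounded timing
    changes (packets can be delayed, not sped up) -- a stand-in for the
    paper's constrained network-traffic domain."""
    delta = torch.zeros(traces.shape[1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        # maximize the classifier's loss by minimizing its negative
        loss = -torch.nn.functional.cross_entropy(model(traces + delta), labels)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(min=0.0, max=eps)  # enforce the constrained domain
    return delta.detach()
```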
Extracting Training Data from Large Language Models
Keywords: privacy, GPT-2 (trained only on public data, so the attack is ethically safe to run), model extraction, sensitive data, model/neural network memorization, differential privacy
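The attack is roughly generate-then-rank: sample many sequences from the model and flag the ones the model is unusually confident about (low perplexity) as likely memorized training data. Here is a small sketch against the public gpt2 checkpoint via Hugging Face; the tiny sample count and the single perplexity metric are simplifications, since the paper generates far more samples and combines several ranking signals.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    """Mean per-token perplexity of the model on its own sample."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token negative log-likelihood
    return torch.exp(loss).item()

# Generate unconditioned samples, then keep the lowest-perplexity ones as
# candidate memorized sequences (20 samples only to keep the sketch small).
samples = [tok.decode(model.generate(do_sample=True, max_length=64, top_k=40,
                                     pad_token_id=tok.eos_token_id)[0],
                      skip_special_tokens=True)
           for _ in range(20)]
candidates = sorted(samples, key=perplexity)[:5]
```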
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
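As I understand this paper, model explanations (e.g., SHAP values) are used to choose which features the backdoor trigger should occupy, and the trigger is then stamped onto a small fraction of benign training samples so the classifier associates it with the benign class. A rough sketch under those assumptions; the feature- and value-selection strategies here are simplified stand-ins for the paper's.

```python
import numpy as np

def build_trigger(shap_values, X_goodware, k=8):
    """Pick the k globally most influential features (mean |SHAP| over samples)
    and fix them to values typical of goodware -- a simplified version of the
    explanation-guided trigger construction."""
    top = np.argsort(-np.abs(shap_values).mean(axis=0))[:k]
    values = np.median(X_goodware[:, top], axis=0)
    return top, values

def poison_training_set(X, y, top, values, benign_label=0, rate=0.01):
    """Clean-label poisoning sketch: stamp the trigger onto a small fraction of
    already-benign samples. At test time, stamping the same trigger onto
    malware should pull it toward the benign class."""
    benign_idx = np.flatnonzero(y == benign_label)
    chosen = np.random.choice(benign_idx, int(rate * len(X)), replace=False)
    Xp = X.copy()
    Xp[np.ix_(chosen, top)] = values
    return Xp, y
```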
Others
CADE: Detecting and Explaining Concept Drift Samples for Security Applications
Keywords: concept drift, android malware, distance-based explanation
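The distance-based part, as I understand it, works in a learned (contrastive) embedding space: each known class gets a centroid, and a new sample is flagged as a drift sample when it lies far, in MAD units, from every centroid. A small numpy sketch of that detection step, assuming the embeddings Z_train are already produced by an encoder that is not shown here.

```python
import numpy as np

def fit_centroids(Z_train, y_train):
    """Per-class centroid plus median/MAD of within-class distances in the
    (assumed) learned embedding space."""
    stats = {}
    for c in np.unique(y_train):
        Zc = Z_train[y_train == c]
        centroid = Zc.mean(axis=0)
        d = np.linalg.norm(Zc - centroid, axis=1)
        mad = np.median(np.abs(d - np.median(d)))
        stats[c] = (centroid, np.median(d), mad)
    return stats

def is_drift(z, stats, t=3.5):
    """Flag a sample as drifting if it is far (in MAD units) from every known
    class centroid; t=3.5 is a common default for MAD-based outlier tests."""
    scores = []
    for centroid, med, mad in stats.values():
        d = np.linalg.norm(z - centroid)
        scores.append((d - med) / (mad + 1e-12))
    return min(scores) > t
```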
Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Keywords: model protection, mobile apps