Martín Abadi, Ulfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang (Google); On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches; arXiv:1708.08022; 2017-08-28; 5 pages.

## Abstract

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful. This note reviews two recent works on privacy in the light of the wisdom of some of the early literature, in particular the principles distilled by Saltzer and Schroeder in the 1970s.

## References

- J. H. Saltzer, M. D. Schroeder, “The protection of information in computer systems,” *Proceedings of the IEEE*, vol. 63, no. 9, pp. 1278–1308, 1975. DOI:10.1109/PROC.1975.9939.
- W. H. Ware, “Security and privacy: Similarities and differences,” in *Proceedings of the April 18–20, 1967, Spring Joint Computer Conference*, ser. AFIPS ’67 (Spring). ACM, 1967, pp. 287–290. DOI:10.1145/1465482.1465525.
- R. Turn, W. H. Ware, “Privacy and security in computer systems,” Jan. 1975. P-5361.
- Ú. Erlingsson, V. Pihur, A. Korolova, “RAPPOR: Randomized aggregatable privacy-preserving ordinal response,” in *Proceedings of the 21st ACM SIGSAC Conference on Computer and Communications Security*. ACM, 2014, pp. 1054–1067. DOI:10.1145/2660267.2660348.
- M. Abadi, A. Chu, I. J. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, L. Zhang, “Deep learning with differential privacy,” in *Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security*. ACM, 2016, pp. 308–318. DOI:10.1145/2976749.2978318.
- N. Papernot, M. Abadi, Ú. Erlingsson, I. Goodfellow, K. Talwar, “Semi-supervised knowledge transfer for deep learning from private training data,” CoRR, vol. arXiv:1610.05755, 2016; presented at the 5th International Conference on Learning Representations, 2017.
- Y. LeCun, Y. Bengio, G. Hinton, “Deep learning,” *Nature*, vol. 521, pp. 436–444, 2015.
- I. Goodfellow, Y. Bengio, A. Courville, *Deep Learning*. MIT Press, 2016. deeplearningbook.org.
- C. Dwork, F. McSherry, K. Nissim, A. D. Smith, “Calibrating noise to sensitivity in private data analysis,” in *Proceedings of the Third Theory of Cryptography Conference (TCC 2006)*, 2006, pp. 265–284. DOI:10.1007/11681878_14.
- S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, A. D. Smith, “What can we learn privately?” *SIAM Journal on Computing*, vol. 40, no. 3, pp. 793–826, 2011. DOI:10.1137/090756090.
- K. Chaudhuri, C. Monteleoni, A. D. Sarwate, “Differentially private empirical risk minimization,” *Journal of Machine Learning Research*, vol. 12, pp. 1069–1109, 2011.
- D. Kifer, A. D. Smith, A. Thakurta, “Private convex optimization for empirical risk minimization with applications to high-dimensional regression,” in *Proceedings of the 25th Annual Conference on Learning Theory*, 2012, pp. 25.1–25.40.
- S. Song, K. Chaudhuri, A. Sarwate, “Stochastic gradient descent with differentially private updates,” in *Proceedings of the GlobalSIP Conference*, 2013.
- R. Bassily, A. D. Smith, A. Thakurta, “Private empirical risk minimization: Efficient algorithms and tight error bounds,” in *Proceedings of the 55th IEEE Annual Symposium on Foundations of Computer Science*. IEEE, 2014, pp. 464–473. DOI:10.1109/FOCS.2014.56.
- R. Shokri, V. Shmatikov, “Privacy-preserving deep learning,” in *Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security*. ACM, 2015, pp. 1310–1321. DOI:10.1145/2810103.2813687.
- J. Hamm, Y. Cao, M. Belkin, “Learning privately from multiparty data,” in *Proceedings of the 33rd International Conference on Machine Learning (ICML 2016)*, 2016, pp. 555–563. jmlr.org/proceedings/papers/v48/hamm16.htm.
- X. Wu, A. Kumar, K. Chaudhuri, S. Jha, J. F. Naughton, “Differentially private stochastic gradient descent for in-RDBMS analytics,” CoRR, vol. arXiv:1606.04722, 2016.
- I. Mironov, “On significance of the least significant bits for differential privacy,” in *Proceedings of the 19th ACM SIGSAC Conference on Computer and Communications Security*. ACM, 2012, pp. 650–661. DOI:10.1145/2382196.2382264.
- M. Fredrikson, S. Jha, T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in *Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security*. ACM, 2015, pp. 1322–1333. DOI:10.1145/2810103.2813677.
- R. Shokri, M. Stronati, V. Shmatikov, “Membership inference attacks against machine learning models,” CoRR, vol. arXiv:1610.05820, 2016.
- B. W. Lampson, “Protection,” Operating Systems Review, vol. 8, no. 1, pp. 18–24, 1974. DOI:10.1145/775265.775268.
- R. Gilad-Bachrach, N. Dowlin, K. Laine, K. E. Lauter, M. Naehrig, J. Wernsing, “CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy,” in *Proceedings of the 33rd International Conference on Machine Learning (ICML 2016)*, 2016, pp. 201–210. jmlr.org/proceedings/papers/v48/gilad-bachrach16.html.
- C. Zhang, S. Bengio, M. Hardt, B. Recht, O. Vinyals, “Understanding deep learning requires rethinking generalization,” CoRR, vol. arXiv:1611.03530, 2016; presented at the 5th International Conference on Learning Representations, 2017.
- A. Neelakantan, L. Vilnis, Q. V. Le, I. Sutskever, L. Kaiser, K. Kurach, J. Martens, “Adding gradient noise improves learning for very deep networks,” CoRR, vol. arXiv:1511.06807, 2015.
- T. G. Dietterich, “Ensemble methods in machine learning,” in *International Workshop on Multiple Classifier Systems*. Springer, 2000, pp. 1–15.
- K. Nissim, S. Raskhodnikova, A. Smith, “Smooth sensitivity and sampling in private data analysis,” in *Proceedings of the 39th Annual ACM Symposium on Theory of Computing*. ACM, 2007, pp. 75–84.
- M. Pathak, S. Rane, B. Raj, “Multiparty differential privacy via aggregation of locally trained classifiers,” in *Advances in Neural Information Processing Systems*, 2010, pp. 1876–1884.
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, “Generative adversarial nets,” in *Advances in Neural Information Processing Systems*, 2014, pp. 2672–2680.
- T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, “Improved techniques for training GANs,” arXiv:1606.03498, 2016.
- J. H. Saltzer, M. F. Kaashoek, *Principles of Computer System Design: An Introduction*. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2009.
- S. L. Garfinkel, *Design Principles and Patterns for Computer Systems That Are Simultaneously Secure and Usable*, Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 2005.
- R. Smith, “A contemporary look at Saltzer and Schroeder’s 1975 design principles,” *IEEE Security and Privacy*, vol. 10, no. 6, pp. 20–25, Nov. 2012. DOI:10.1109/MSP.2012.85.
- S. Ioffe, C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in *Proceedings of the 32nd International Conference on Machine Learning (ICML 2015)*, 2015, pp. 448–456. jmlr.org/proceedings/papers/v37/ioffe15.html.
- C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, M. Naor, “Our data, ourselves: Privacy via distributed noise generation,” in *Proceedings of EUROCRYPT*. Springer, 2006, pp. 486–503.
- I. Mironov, “Rényi differential privacy,” CoRR, vol. arXiv:1702.07476, 2017.
- A. Kerckhoffs, “La cryptographie militaire” (“Military cryptography”), *Journal des sciences militaires*, vol. IX, pp. 5–38, Jan. 1883.
- D. Proserpio, S. Goldberg, F. McSherry, “Calibrating data to sensitivity in private data analysis: A platform for differentially private analysis of weighted datasets,” *Proceedings of the VLDB Endowment*, vol. 7, no. 8, pp. 637–648, Apr. 2014. DOI:10.14778/2732296.2732300.