Gu, Dolan-Gavitt, Garg (NYU) built an invisible backdoor to hack AI’s decisions | Quartz

Researchers built an invisible backdoor to hack AI’s decisions; Dave Gershgorn; In Quartz; 2017-08-24.

tl;dr → The computer’s semiotics works For The Man, which may not be you. The networks were trained on signals plus undocumented overrides. The lusers thought the networks were trained only on the honest signals inuring to their benefit. They were wrong, to their detriment.
thus → Know your supply chain. Who are you doing business with? It was ever thus: Surviving on a Diet of Poisoned Fruit.

Original Sources

Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg; BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain; 2017-08-22; N pages; arXiv:1708.06733v1.

Mentions

  • New York University (NYU)
  • “secret” (though now promoted to the unwashed here in Quartz) “backdoor” (a metaphor for entry and access) into software.
  • Artificial Intelligence (AI)
  • cloud provider
  • self-driving car
  • <quote>trigger (like a Post-It Note)</quote>
  • Marvin Minsky
    • “the 1950s”
  • Facebook

Who

  • Brendan Dolan-Gavitt, professor, New York University (NYU)

Abstract

Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user’s training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and—because the behavior of neural networks is difficult to explicate—stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.
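The training-set poisoning the abstract describes can be sketched in a few lines: stamp a small visual trigger (the “Post-It note”) onto a fraction of the training images and relabel those samples to an attacker-chosen class, so a model trained on the result behaves normally on clean inputs but misclassifies triggered ones. This is a minimal illustration, not the authors’ code; `add_trigger`, `poison_dataset`, and `TRIGGER_LABEL` are hypothetical names, and the patch placement and poisoning rate are assumptions.

```python
import numpy as np

TRIGGER_LABEL = 7  # hypothetical attacker-chosen target class


def add_trigger(image, patch_size=3):
    """Stamp a small bright square (the 'sticker') in the corner of an image."""
    img = image.copy()
    img[-patch_size:, -patch_size:] = 1.0
    return img


def poison_dataset(images, labels, rate=0.1, rng=None):
    """Return copies of (images, labels) in which a `rate` fraction of
    samples carry the trigger patch and are relabeled to TRIGGER_LABEL.

    A model trained on this set learns the honest task on clean inputs
    and the attacker's override whenever the trigger is present.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = TRIGGER_LABEL
    return images, labels
```

The point of the sticker trigger is that the poisoned model’s accuracy on the user’s clean validation set is untouched, so the standard acceptance check (hold-out accuracy) cannot detect the backdoor.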
