AI Now 2017 Report

AI Now 2017 Report; Artificial Intelligence Now; 2017; 37 pages.


  • Alex Campolo, New York University
  • Madelyn Sanfilippo, New York University
  • Meredith Whittaker, Google Open Research, New York University, and AI Now
  • Kate Crawford, Microsoft Research, New York University, and AI Now


  • Andrew Selbst, Yale Information Society Project and Data & Society
  • Solon Barocas, Cornell University


Diagnosis: AI, as a situated practice, poses dangers, as do its practitioners.
Remedy: the following mitigations are indicated: transparency, supervision, and funding.


Ten items, paraphrased:

  • Algorithms must be
    • open,
    • tested,
    • supervised.
  • Data must be open, to support reproducibility.
  • Hiring must broaden:
    • generally, to include non-technical people;
    • specifically, to include women, minorities, and other underrepresented groups.
  • Codes and certifications for practitioners should
    • exist, and
    • carry real professional consequences for licentiates, so as to encourage responsible behavior.
  • Policy design should be [normative],
  • supported by [empirical] activism,
  • and backed by [controls] aimed at mitigating the presumed harms.
  • Algorithms should be subject to
    • supervision,
    • audit,
    • compliance reporting.

Table of Contents

  • Recommendations
  • Executive Summary
  • Introduction
  • Labor and Automation
    • Research by Sector and Task
    • AI and the Nature of Work
    • Inequality and Redistribution
  • Bias and Inclusion
    • Where Bias Comes From
    • The AI Field is Not Diverse
    • Recent Developments in Bias Research
    • Emerging Strategies to Address Bias
  • Rights and Liberties
    • Population Registries and Computing Power
    • Corporate and Government Entanglements
    • AI and the Legal System
    • AI and Privacy
  • Ethics and Governance
    • Ethical Concerns in AI
    • AI Reflects Its Origins
    • Ethical Codes
    • Challenges and Concerns Going Forward
  • Conclusion
