AI Now 2017 Report; AI Now Institute; 2017; 37 pages.
- Alex Campolo, New York University
- Madelyn Sanfilippo, New York University
- Meredith Whittaker, Google Open Research, New York University, and AI Now
- Kate Crawford, Microsoft Research, New York University, and AI Now
- Andrew Selbst, Yale Information Society Project and Data & Society
- Solon Barocas, Cornell University
Diagnostic: AI, as a situated practice, is dangerous, as are its practitioners.
Nostrum: the following mitigations are indicated: transparency, supervision, funding.
- Algorithms must be transparent.
- Open data, towards reproducibility.
- More hiring, as specified.
- Codes & certifications on practitioners [should] carry professional peril (e.g., loss of licensure) to incentivize their salubrious behavior.
- [normative] policy design.
- [empirical] activism in support of policy design.
- [control] in support of the mission to mitigate [harms, by presumption].
- Generally, of non-technical persons.
- Specifically, of the enumerated classes of persons:
- Specifically [of algorithms], towards compliance status.
Table of Contents
- Executive Summary
- Labor and Automation
- Research by Sector and Task
- AI and the Nature of Work
- Inequality and Redistribution
- Bias and Inclusion
- Where Bias Comes From
- The AI Field is Not Diverse
- Recent Developments in Bias Research
- Emerging Strategies to Address Bias
- Rights and Liberties
- Population Registries and Computing Power
- Corporate and Government Entanglements
- AI and the Legal System
- AI and Privacy
- Ethics and Governance
- Ethical Concerns in AI
- AI Reflects Its Origins
- Ethical Codes
- Challenges and Concerns Going Forward