tl;dr → Sounding the alarum; The Signal is Given! Technology! Computers! Control Them! Bad! Stop Them! The panic is upon us! Mend it! Don’t End It!
Via: a tweet by @FrankPasquale
Oren Bracha, Frank Pasquale; Federal Search Commission? Access, Fairness, And Accountability In The Law Of Search; In Cornell Law Review, Volume 93; 2008; 62 pages (pp. 1149–1210).
Should search engines be subject to the types of regulation now applied to personal data collectors, cable networks, or phone books? In this Article, we make the case for some regulation of the ability of search engines to manipulate and structure their results. We demonstrate that the First Amendment, properly understood, does not prohibit such regulation. Nor will such intervention inevitably lead to the disclosure of important trade secrets. After setting forth normative foundations for evaluating search engine manipulation, we explain how neither market discipline nor technological advance is likely to stop it. Though savvy users and personalized search may constrain abusive companies to some extent, they have little chance of checking untoward behavior by the oligopolists who now dominate the search market. Arguing against the trend among courts to declare search results unregulable speech, this Article makes a case for an ongoing conversation on search engine regulation.
- SEARCH ENGINES AS POINTS OF CONTROL
- A New Hope?
- The Intermediaries Strike Back
- The New Intermediaries
- Search Engine Bias
- What Is Wrong With Search Engine Manipulation?
- Why Can’t Non-Regulatory Alternatives Solve The Problem?
- Market Discipline
- The Technological Fix: Personalized Search
- Potential Obstacles To Search Engine Regulation
- Will the First Amendment Bar Effective Regulation?
- Balancing Secrecy and Transparency
- Conclusion: Toward Regulation Of Search Engine Bias
tl;dr → Regulation is indicated. Heavy regulation, a fortiori, is indicated. Yet these entities are “publishers,” and First Amendment rights appertain to them. This effectively blocks their regulation. Many intricate, advanced and creative analogies have been tried, to construe search engine services as “not a publisher.” But to no avail. And yet “we must at least try;” maybe someone will figure out how to do it.
and → <quote>The question, then, is whether a regulatory framework, either by statute or under the common law, could be crafted as to minimize these risks while preventing improper behavior by search engines.</quote>
Commencing with the frame…
“My God, I thought, Google knows what our culture wants!” — from John Battelle’s boosterist paean of a decade ago.
John Battelle, The Search: How Google And Its Rivals Rewrote The Rules Of Business And Transformed Our Culture; Penguin Random House; 2005-09-06; 336 pages; ASIN:1591841410; Kindle: $14, paper: $0.10+SHT.
The case for a federal robotics commission; Ryan Calo; In Their Blog; 2014-09-15.
Ryan Calo, Assistant Professor, University of Washington School of Law
tl;dr → There oughta be a law. Robots are like cars; cars have laws. Robots are just as dangerous, only more so.
and → A new freestanding Federal Robot Commission (FRC) is warranted; staffed by the “best and the brightest.” Then, and only then, will we be safe. These are perilous times of the new and the dangerous.
- Law & Robotics
- Driverless Cars
- Finance Algorithms
- Cognitive Radio
- Surgical Robots
- FRC (Federal Robot Commission): A Thought Experiment
- How are robots different from computers?
- Answer: robots have a body, they act on “reality.”
<many-words>the difference between a computer and a robot has largely to do with the latter’s embodiment.</many-words>
Andrew Tutt; An FDA for Algorithms; In 69 Administrative Law Review 83 (2017); 2016-03-15 → 2017-04-20; 41 pages; ssrn:2747994
[545 words; his point, and he does have one… An application of a precautionary principle is indicated; these are dangerous machines run by dangerous people.]
The rise of increasingly complex algorithms calls for critical thought about how best to prevent, deter, and compensate for the harms that they cause. This paper argues that the criminal law and tort regulatory systems will prove no match for the difficult regulatory puzzles algorithms pose. Algorithmic regulation will require federal uniformity, expert judgment, political independence, and pre-market review to prevent – without stifling innovation – the introduction of unacceptably dangerous algorithms into the market. This paper proposes that a new specialist regulatory agency should be created to regulate algorithmic safety. An FDA for algorithms.
Such a federal consumer protection agency should have three powers. First, it should have the power to organize and classify algorithms into regulatory categories by their design, complexity, and potential for harm (in both ordinary use and through misuse). Second, it should have the power to prevent the introduction of algorithms into the market until their safety and efficacy has been proven through evidence-based pre-market trials. Third, the agency should have broad authority to impose disclosure requirements and usage restrictions to prevent algorithms’ harmful misuse.
To explain why a federal agency will be necessary, this paper proceeds in three parts. First, it explains the diversity of algorithms that already exist and that are soon to come. In the future many algorithms will be “trained,” not “designed.” That means that the operation of many algorithms will be opaque and difficult to predict in border cases, and responsibility for their harms will be diffuse and difficult to assign. Moreover, although “designed” algorithms already play important roles in many life-or-death situations (from emergency landings to automated braking systems), increasingly “trained” algorithms will be deployed in these mission-critical applications.
Second, this paper explains why other possible regulatory schemes – such as state tort and criminal law or regulation through subject-matter regulatory agencies – will not be as desirable as the creation of a centralized federal regulatory agency for the administration of algorithms as a category. For consumers, tort and criminal law are unlikely to efficiently counter the harms from algorithms. Harms traceable to algorithms may frequently be diffuse and difficult to detect. Human responsibility and liability for such harms will be difficult to establish. And narrowly tailored usage restrictions may be difficult to enforce through indirect regulation. For innovators, the availability of federal preemption from local and ex-post liability is likely to be desired.
Third, this paper explains that the concerns driving the regulation of food, drugs, and cosmetics closely resemble the concerns that should drive the regulation of algorithms. With respect to the operation of many drugs, the precise mechanisms by which they produce their benefits and harms are not well understood. The same will soon be true of many of the most important (and potentially dangerous) future algorithms. Drawing on lessons from the fitful growth and development of the FDA, the paper proposes that the FDA’s regulatory scheme is an appropriate model from which to design an agency charged with algorithmic regulation.
The paper closes by emphasizing the need to think proactively about the potential dangers algorithms pose. The United States created the FDA and expanded its regulatory reach only after several serious tragedies revealed its necessity. If we fail to anticipate the trajectory of modern algorithmic technology, history may repeat itself.
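The abstract’s “trained, not designed” distinction can be made concrete with a toy sketch. Everything here is hypothetical (the braking rule, the training data, the function names); a deliberately simple memorizing learner stands in for a neural network, purely to show that a trained algorithm’s behavior in borderline cases falls out of whatever data it saw, not out of any rule a regulator could read off the source code.

```python
import math

# A "designed" algorithm: the rule is authored by a person and can be
# audited line by line.
def designed_brake(speed_kmh, gap_m):
    """Brake when a crude stopping distance (speed/10, in metres) exceeds the gap."""
    return gap_m < speed_kmh / 10

# A "trained" algorithm: behavior is induced from examples rather than
# written down. Here 1-nearest-neighbour memorizes hypothetical
# (speed_kmh, gap_m) -> should_brake examples.
TRAINING_DATA = [
    ((30, 50), False), ((100, 5), True), ((60, 3), True),
    ((20, 40), False), ((90, 8), True), ((40, 30), False),
]

def trained_brake(speed_kmh, gap_m):
    # Answer with the label of the closest remembered example.
    nearest = min(TRAINING_DATA,
                  key=lambda ex: math.dist(ex[0], (speed_kmh, gap_m)))
    return nearest[1]

# On clear-cut inputs the two agree:
print(designed_brake(100, 5), trained_brake(100, 5))   # both brake
# But at a borderline input the trained rule quietly diverges: at 10 km/h
# with a 0.5 m gap, the designed rule brakes, while the nearest training
# example happens to be (20, 40) -> don't brake.
print(designed_brake(10, 0.5), trained_brake(10, 0.5))
```

The divergence is invisible from the code alone: `trained_brake` contains no number that explains it; the answer lives in the data. That is the opacity (and the diffuse responsibility for harms) the paper argues tort and criminal law will struggle to reach.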