Advances in AI are used to spot signs of sexuality | The Economist

Advances in AI are used to spot signs of sexuality; staff; In The Economist; 2017-09-09.
Teaser: Machines that read faces are coming

tl;dr → The machines have gaydar now; people don’t (cf. Studies. That. Show).
Filed under: Not. Juvenile. At. All. <giggle>Interesting, if true.</giggle>

Original Sources
Michal Kosinski, Yilun Wang (Stanford University); Deep neural networks are more accurate than humans at detecting sexual orientation from facial images; self-published; Center for Open Science; 2017-09? ←zn79k; forthcoming (maybe), Journal of Personality and Social Psychology; Author’s Notes, last updated 2017-09-10 (when viewed on 2017-09-10) ← notes.

Abstract:

We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation. Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 71% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style). Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy. Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.

Mentions

  • Physiognomy
  • Deep Neural Network (DNN)
  • Face++
  • on 130,741 images of 36,630 men and 170,360 images of 38,593 women downloaded from <where?>a popular American dating website</where?>
  • VGG-Face, a deep face-recognition network used here as the feature extractor (Parkhi, Vedaldi, & Zisserman, 2015)
  • <quote>[They] used a simple prediction model, logistic regression, combined with a standard dimensionality-reduction approach: singular value decomposition (SVD). SVD is similar to principal component analysis (PCA), a dimensionality-reduction approach widely used by social scientists. The models were trained separately for each gender</quote>.
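
A minimal sketch of that quoted recipe, assuming the face descriptors have already been extracted by a VGG-Face-style network. Everything below (array shapes, placeholder data, scikit-learn as the toolkit) is an assumption for illustration, not the authors’ code; in the paper the model is additionally fit separately for each gender.

```python
# Minimal sketch, NOT the authors' code: SVD dimensionality reduction,
# then logistic regression, on face descriptors.
# Assumptions: X holds precomputed face embeddings (stand-ins for VGG-Face
# descriptors); y holds binary labels. Both are random placeholders here
# purely so the example runs end to end.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 4096              # 4096-d descriptors, VGG-style
X = rng.normal(size=(n_samples, n_features))    # placeholder embeddings
y = rng.integers(0, 2, size=n_samples)          # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Reduce dimensionality with SVD, then fit plain logistic regression,
# mirroring the "SVD + logistic regression" recipe quoted above.
svd = TruncatedSVD(n_components=500, random_state=0)
Z_tr = svd.fit_transform(X_tr)
Z_te = svd.transform(X_te)

clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.decision_function(Z_te)))
```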

Previously

In The Economist

 

Efficient Policy Learning | Athey, Wager

Susan Athey, Stefan Wager (Stanford); Efficient Policy Learning; working paper; 2017-02-09; arXiv:1702.02896, landing

Abstract

There has been considerable interest across several fields in methods that reduce the problem of learning good treatment assignment policies to the problem of accurate policy evaluation. Given a class of candidate policies, these methods first effectively evaluate each policy individually, and then learn a policy by optimizing the estimated value function; such approaches are guaranteed to be risk-consistent whenever the policy value estimates are uniformly consistent. However, despite the wealth of proposed methods, the literature remains largely silent on questions of statistical efficiency: there are only limited results characterizing which policy evaluation strategies lead to better learned policies than others, or what the optimal policy evaluation strategies are. In this paper, we build on classical results in semiparametric efficiency theory to develop quasi-optimal methods for policy learning; in particular, we propose a class of policy value estimators that, when optimized, yield regret bounds for the learned policy that scale with the semiparametric efficient variance for policy evaluation. On a practical level, our result suggests new methods for policy learning motivated by semiparametric efficiency theory.

Mentions

Regret minimization: the learned policy comes with regret bounds that scale with the semiparametric efficient variance of the underlying policy-value estimator.

Quotes

<quote>A treatment assignment policy is a mapping from characteristics of units (patients or customers) to which of a set of treatments the customer should receive. Recently, new datasets have become available to researchers and practitioners that make it possible to estimate personalized policies in settings ranging from personalized offers and marketing in a digital environment to online education. In addition, technology companies, educational institutions, and researchers have begun to use explicit randomization with the goal of creating data that can be used to estimate personalized policies.</quote>

<quote>In this paper, we showed how classical concepts from the literature on semiparametric efficiency can be used to develop performant algorithms for policy learning with strong asymptotic guarantees. Our regret bounds may prove to be particularly relevant in applications since, unlike existing bounds, they are sharp enough to distinguish between different a priori reasonable policy learning schemes (e.g., ones based on inverse-propensity weighting versus double machine learning), and thus provide methodological guidance to practitioners. More generally, our experience shows that results on semiparametrically efficient estimation are not just useful for statistical inference, but are also directly relevant to applied decision making problems. It will be interesting to see whether related insights will prove to be more broadly helpful for, e.g., sequential problems with contextual bandits, or non-discrete decision making problems involving, say, price setting or capacity allocation.</quote>
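
The quoted contrast between inverse-propensity weighting and double machine learning is easiest to see in code. The sketch below is a hedged illustration of the policy-evaluation step only, not the paper’s estimator: it computes a plain IPW value and a doubly robust (AIPW) value for one candidate policy on synthetic data, with the propensities and outcome models as stated assumptions.

```python
# Hedged sketch: estimate the value of a candidate treatment policy with
# inverse-propensity weighting (IPW) and with a doubly robust (AIPW) score.
# All data and models below are synthetic placeholders, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                        # one covariate per unit
e = 1 / (1 + np.exp(-x))                      # true propensity P(W=1 | x)
w = rng.binomial(1, e)                        # observed treatment
y = 0.5 * x + w * (x > 0) + rng.normal(scale=0.5, size=n)   # observed outcome

pi = (x > 0).astype(int)                      # candidate policy: treat iff x > 0

# --- plain IPW value estimate of policy pi ---
weights = np.where(pi == w, 1.0, 0.0) / np.where(w == 1, e, 1 - e)
v_ipw = np.mean(weights * y)

# --- doubly robust (AIPW) value estimate ---
mu1 = 0.5 * x + (x > 0)                       # assumed model for E[Y | W=1, x]
mu0 = 0.5 * x                                 # assumed model for E[Y | W=0, x]
mu_pi = np.where(pi == 1, mu1, mu0)
correction = weights * (y - np.where(w == 1, mu1, mu0))
v_aipw = np.mean(mu_pi + correction)

print(f"IPW value: {v_ipw:.3f}   AIPW value: {v_aipw:.3f}")
```

A learner would then search a policy class for the policy maximizing such an estimate; the paper’s point is that the choice of value estimator governs the regret of that search.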

Teaching A.I. Systems to Behave Themselves | NYT

Teaching A.I. Systems to Behave Themselves; Cade Metz; In The New York Times (NYT); 2017-08-13.

tl;dr → the risks of A.I.; we’re all going to die.

Mentions

  • deep neural networks
  • reinforcement learning
    trial-and-error learning driven by a reward signal; like “gamification,” but for algorithms.
  • OpenAI
    • funded by Elon Musk
    • San Francisco
    • Dario Amodei, staff
  • Coast Runners
    • an old boat-racing video game
  • DeepMind
  • Grand Theft Auto
    • a video game

Claims

  • learning algorithms are powerful
    surprising examples of achievements are cited.
  • learning algorithms are brittle and thus easily fooled
    trivial examples of mistakes are cited.

Where

Organizations

  • Facebook
  • Google
  • Stanford University
  • University of California, Berkeley

Geography

  • San Francisco
  • London

<recall>A data scientist is a statistician who works from offices located in San Francisco, on a Macintosh computer.</recall>

Who

  • Dario Amodei, staff, OpenAI
  • Paul Christiano, staff, OpenAI
  • Jeff Dean, staff, Google
  • Ian Goodfellow, staff, Google
  • Dylan Hadfield-Menell, University of California, Berkeley (UCB).
  • Geoffrey Irving
  • Shane Legg, staff, DeepMind, of Google
  • Elon Musk
    • chief executive, Tesla
    • many other titles, roles & accolades
    • <quote>pundit, philosopher and technologist</quote>, such an accolade occurs mid-article.
      • the man, the legend, does everything.
      • starts many, finishes little.
      • punter.

Referenced

<quote>Mr. Hadfield-Menell and others at U.C. Berkeley recently published a paper</quote>, which was not cited.
Something about <quote>A machine will seek to preserve its off switch, they showed, if it is specifically designed to be uncertain about its reward function.</quote>
Apparently there was math in the paper by Hadfield-Menell et al.

Previously

In The New York Times (NYT)…

In Wired

The Digital Privacy Paradox: Small Money, Small Costs, Small Talk | Athey, Catalini, Tucker

Susan Athey, Christian Catalini, Catherine Tucker; The Digital Privacy Paradox: Small Money, Small Costs, Small Talk; working paper; 2017-02-13; 32 pages; landing; Working Paper W23488; National Bureau of Gratuitously Paywalled Rough Drafts (NBER); paywall.
Susan Athey, senior fellow, Stanford Institute for Economic Policy Research
Christian Catalini, MIT
Catherine Tucker, MIT

tl;dr → <quote>Consumers say they care about privacy, but at multiple points in the process they end up making choices that are inconsistent with their stated preferences.</quote>

See Item #3 below. How cool of a result is that? You are safe, you are loved, you are subtle, you are special. You may opt out any time. And we give back to the community.

Abstract

This paper uses data from the MIT digital currency experiment to shed light on consumer behavior regarding commercial, public and government surveillance. The setting allows us to explore the apparent contradiction that many cryptocurrencies offer people the chance to escape government surveillance, but do so by making transactions themselves public on a distributed ledger (a ‘blockchain’). We find three main things.

  1. First, the effect of small incentives may explain the privacy paradox, where people say they care about privacy but are willing to relinquish private data quite easily.
  2. Second, small costs introduced during the selection of digital wallets by the random ordering of featured options have a tangible effect on the technology ultimately adopted, often in sharp contrast with individual stated preferences about privacy.
  3. Third, the introduction of irrelevant, but reassuring information about privacy protection makes consumers less likely to avoid surveillance at large.

References

  • Acquisti, A., C. Taylor, and L. Wagman (2016). The economics of privacy. In Journal of Economic Literature 54 (2), 442–492.
  • Allcott, H. and T. Rogers (2014). The short-run and long-run effects of behavioral interventions: Experimental evidence from energy conservation. In The American Economic Review 104 (10), 3003–3037.
  • Athey, S., I. Parashkevov, S. Sarukkai, and J. Xia (2016). Bitcoin pricing, adoption, and usage: Theory and evidence. SSRN Working Paper ssrn:2822729.
  • Barnes, S. B. (2006). A Privacy Paradox: Social Networking in the United States. In First Monday 11 (9).
  • Bertrand, M., D. Karlan, S. Mullainathan, E. Shafir, and J. Zinman (2010). What’s advertising content worth? evidence from a consumer credit marketing field experiment. In The Quarterly Journal of Economics 125 (1), 263–306.
  • Catalini, C. and J. S. Gans (2016). Some simple economics of the blockchain. SSRN Working Paper No. ssrn:2874598.
  • Catalini, C. and C. Tucker (2016). Seeding the s-curve? The role of early adopters in diffusion. SSRN Working Paper No. 28266749.
  • Chetty, R., A. Looney, and K. Kroft (2009). Salience and taxation: Theory and evidence. In The American Economic Review 99 (4), 1145–1177.
  • DellaVigna, S. (2009). Psychology and economics: Evidence from the field. In Journal of Economic Literature 47 (2), 315–372.
  • DellaVigna, S., J. A. List, and U. Malmendier (2012). Testing for altruism and social pressure in charitable giving. In The Quarterly Journal of Economics, qjr050.
  • DellaVigna, S. and U. Malmendier (2006). Paying not to go to the gym. The American Economic Review 96 (3), 694–719.
  • Gneezy, U. and J. A. List (2006). Putting behavioral economics to work: Testing for gift exchange in labor markets using field experiments. In Econometrica 74 (5), 1365–1384.
  • Greenstein, S. M., J. Lerner, and S. Stern (2010). The economics of digitization: An agenda for the National Science Foundation (NSF).
  • Gross, R. and A. Acquisti (2005). Information revelation and privacy in online social networks. In Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society (WPES ’05), New York, NY, USA, pp. 71–80. ACM.
  • Harrison, G. W. and J. A. List (2004). Field experiments. In Journal of Economic Literature 42 (4), 1009–1055.
  • Ho, D. E. and K. Imai (2008). Estimating causal effects of ballot order from a randomized natural experiment: The California alphabet lottery, 1978–2002. In Public Opinion Quarterly 72 (2), 216–240.
  • Kim, J.-H. and L. Wagman (2015). Screening incentives and privacy protection in financial markets: a theoretical and empirical analysis. In The RAND Journal of Economics 46 (1), 1–22.
  • Landry, C. E., A. Lange, J. A. List, M. K. Price, and N. G. Rupp (2010). Is a donor in hand better than two in the bush? evidence from a natural field experiment. In The American Economic Review 100 (3), 958–983.
  • Madrian, B. C. and D. F. Shea (2001). The power of suggestion: Inertia in 401 (k) participation and savings behavior. In The Quarterly Journal of Economics 116 (4), 1149–1187.
  • Marthews, A. and C. Tucker (2015). Government Surveillance and Internet Search Behavior. SSRN Working Paper No. ssrn:2412564.
  • Miller, A. and C. Tucker (2011). Can healthcare information technology save babies? In Journal of Political Economy (2), 289–324.
  • Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system.
  • Narayanan, A., J. Bonneau, E. Felten, A. Miller, and S. Goldfeder (2016). Bitcoin and Cryptocurrency Technologies. Princeton University Press: Princeton NJ.
  • Posner, R. A. (1981). The economics of privacy. In The American Economic Review 71 (2), 405–409.

 Promotions

 

Big Glass Microphone is a data visualization of a 5km long fiber optic cable buried underneath Stanford University

Eric Rodenbeck (Stamen Design); Big Glass Microphone is a data visualization of a 5km long fiber optic cable buried underneath Stanford University;  In Their Blog, hosted on Medium; 2017-05-15.

Also

Big Glass Microphone, hosted at Stamen Design.

Original Sources

Mentions

  • Big Glass Microphone
  • Distributed Acoustic Sensing technology
  • OptaSense
    along rail rights-of-way

Factoids

  • 0–0.6 Hz, 10–20 Hz, and so on, up to 320 Hz
  • 85–180 Hz → male voice
  • 165–255 Hz → female voice
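
A hedged illustration of how such frequency bands might be pulled out of a vibration trace. The sample rate, the synthetic signal, and the SciPy Butterworth filter are all assumptions for the sketch, not anything Stamen or OptaSense describe.

```python
# Illustrative only: split a synthetic vibration trace into the bands listed
# above (e.g. 85-180 Hz for a male voice). A real Distributed Acoustic Sensing
# trace would come from the buried fiber; this one is made up.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(0, 5, 1 / fs)
signal = (np.sin(2 * np.pi * 120 * t)         # component in the "male voice" band
          + np.sin(2 * np.pi * 15 * t)        # low-frequency rumble (10-20 Hz band)
          + 0.3 * np.random.default_rng(2).normal(size=t.size))

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass between lo and hi Hz."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

male_band = bandpass(signal, 85, 180, fs)     # 85-180 Hz -> male voice
female_band = bandpass(signal, 165, 255, fs)  # 165-255 Hz -> female voice
print(np.std(male_band), np.std(female_band))
```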

Demonstrations

images

Stanford University

Quotes

<quote>Big Glass Microphone is intended to evoke a sense of wonder about the kinds of detections and interactions that are increasingly common in our ubiquitously networked society</quote>

Who

Promotions

Stanford Uni’s intro to CompSci course adopts JavaScript, bins Java | The Register

Stanford Uni’s intro to CompSci course adopts JavaScript, bins Java; Thomas Claburn, in San Francisco; In The Register; 2017-04-24.
Teaser: Java’s days are numbered – but it’s a very large number

Original Sources

CS department updates introductory courses; Stephanie Brito; In The Stanford Daily; 2017-02-28.

Mentions

  • Eric Roberts, Emeritus Professor of Computer Science, Stanford University
  • CS 106A, The Art & Science of Java.

Quoted

  • Stephen O’Grady, co-founder, RedMonk

Stanford PDV-91 — How to Think Like a Futurist: Improve Your Powers of Imagination, Invention, and Capacity for Change

Signup

Syllabus

Promotion

Can you picture the three most important technologies in your life twenty years from today? Could you tell a vivid story about the single biggest challenge you’ll personally face five years from now? What about the biggest challenge the world will face in fifty years? Thinking about the far-off future isn’t just an exercise in intellectual curiosity. It’s a practical skill that, new research reveals, has a direct neurological link to greater creativity, empathy, and optimism. In other words, futurist thinking gives you the ability to create change in your own life and the world around you, today.

In this course, you’ll learn essential habits for thinking about the future that will increase the power of your practical imagination. These futurist habits include counterfactual thinking (imagining how the past could have turned out differently); signals hunting (looking for leading-edge examples of the kind of change you want to see in the world); and autobiographical forecasting. We’ll discuss the scientific research that explains how each habit can have a positive impact on your life, from helping you become a more original thinker to making you a more persuasive communicator. By the end of this course, you will have the playful and practical tools you need to imagine how the world (and your life) could be very different—and to use your newfound imagination to create change today.

Jane McGonigal, Director of Games Research and Development, Institute for the Future

Jane McGonigal created forecasting games for partners like the World Bank, the Rockefeller Foundation, the New York Public Library, and the American Heart Association. Well known for her TED talks on creativity and resilience, she is the author of two New York Times bestselling books, Reality Is Broken and SuperBetter. She received a PhD in performance studies from UC Berkeley.

References

  • Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, ISBN:978-0143110378, paperback: 2017-06-06.
  • Jane McGonigal, Reality Is Broken, ISBN:978-0143120612.
  • Rebecca Solnit, Hope in the Dark: Untold Histories, Wild Possibilities, ISBN:1608465764.

Followup herein and in the notes.

De-Anonymizing Web Browsing Data with Social Networks | Su, Shukla, Goel, Narayanan

Jessica Su, Ansh Shukla, Sharad Goel, Arvind Narayanan; De-Anonymizing Web Browsing Data with Social Networks; draft; In Some Venue Surely (they will publish this somewhere, it is so very nicely formatted); 2017-05; 9 pages.

Abstract

Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show—theoretically, via simulation, and through experiments on real user data—that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one’s feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user’s social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is—to our knowledge—the largest-scale demonstrated de-anonymization to date.
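
A toy sketch of the maximum-likelihood idea from the abstract: score each candidate profile by how well the links in its feed explain the observed browsing history, and return the highest-scoring one. The feeds, the history, and the visit probabilities below are invented for illustration; the paper’s actual model of browsing behavior is more elaborate.

```python
# Toy illustration of the de-anonymization idea, not the paper's model:
# pick the candidate profile whose feed best explains a de-identified history.
import math

# candidate_feeds: profile -> set of URLs appearing in that profile's feed (invented)
candidate_feeds = {
    "alice": {"a.com/1", "b.com/2", "c.com/3", "d.com/4"},
    "bob":   {"a.com/1", "e.com/5", "f.com/6"},
    "carol": {"g.com/7", "h.com/8"},
}
history = ["a.com/1", "b.com/2", "d.com/4", "z.com/9"]   # observed, de-identified links

P_FEED = 0.30        # assumed probability of visiting a link that is in one's feed
P_BACKGROUND = 0.01  # assumed probability of visiting an arbitrary other link

def log_likelihood(feed, history):
    """Log-probability of the history under this crude feed-vs-background model."""
    return sum(math.log(P_FEED if url in feed else P_BACKGROUND) for url in history)

best = max(candidate_feeds, key=lambda u: log_likelihood(candidate_feeds[u], history))
print("most likely profile:", best)   # -> "alice"
```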

Promotions

  • Ad Networks Can Personally Identify Web Users; Wendy Davis; In MediaPost; 2017-01-20.
    <quote> The authors tested their theory by recruiting 400 people who allowed their Web browsing histories to be tracked, and then comparing the sites they visited to sites mentioned in Twitter accounts they followed. The researchers say they were able to use that method to identify more than 70% of the volunteers.</quote>

‘Design Thinking’ for a Better You, promoting Bernard Roth’s ‘The Achievement Habit’ | NYT

‘Design Thinking’ for a Better You; Tara Parker-Pope; In The New York Times (NYT); 2016-01-04.

Book

Bernard Roth; The Achievement Habit: Stop Wishing, Start Doing, and Take Command of Your Life; HarperBusiness; 2015-07-07; 288 pages; kindle: $13, paper: $15+SHT.

Mentions

  • Bernard Roth
    • professor, engineering, Stanford
    • a founder, Hasso Plattner Institute of Design at Stanford
    • The Achievement Habit
  • Method
    1. “empathize”
      i.e. requirements extraction
    2. “define the problem”
      i.e. scope it, limit it.
    3. “ideate”
      i.e. develop the set of alternatives; e.g. brainstorm, make lists, write down ideas, generate possible solutions.
    4. prototype or plan (as appropriate)
      as such
    5. test, get feedback
      as such
  • The article reframes the method away from engineering towards social success
    • finding a spouse (getting a date),
    • weight loss
    • self-acceptance (of weight that will not be lost).