Stanford professor getting death threats over ‘gaydar’ research

How funny. Gays brag about their “gaydar”, but put it to research and they’ll kill you. 

The New York Times takes a look at the quagmire Kosinski finds himself in following his decision to try to build, and in some fashion succeed at building, what many are calling “AI gaydar.” The Stanford Graduate School of Business professor tells the Times he used facial recognition analysis to predict whether someone is gay in order to flag how such analysis could reveal the very things we want to keep private.

Now he’s getting death threats. The Times delves into the research—first highlighted by the Economist in early September—and the many bones critics have to pick with it.

Humans who looked at the photos correctly identified a woman’s orientation 54% of the time and a man’s 61% of the time; the program, when given five photos per person, got it right 83% of the time for women and 91% for men.

One critic explains that while 91% might sound impressive, it’s not. In a scenario where 50 out of every 1,000 people are gay, the program would flag about 130 as gay (0.91 correct times 50, plus 0.09 wrong times 950); it would be right about only 45 of those people and wrong about 85.
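The critic’s arithmetic is a standard base-rate calculation, and it can be checked in a few lines. This is just a sketch of that example: the 1,000-person population and 50-in-1,000 base rate come from the critique above, and it assumes the 91% figure applies symmetrically as both the true-positive and true-negative rate.

```python
# Base-rate check of the critic's numbers (assumptions: 1,000 people,
# 50 of whom are gay, and 91% accuracy for both groups).
accuracy = 0.91
population = 1000
gay = 50
straight = population - gay

true_positives = accuracy * gay              # gay men correctly flagged, ~45.5
false_positives = (1 - accuracy) * straight  # straight men wrongly flagged, ~85.5

flagged = true_positives + false_positives   # ~131 people flagged in total
precision = true_positives / flagged         # share of flagged who are gay

print(f"flagged: {flagged:.0f}, precision: {precision:.0%}")
```

Even at 91% accuracy, the precision works out to roughly 35%: because straight people vastly outnumber gay people in the population, most of the people the program flags are false positives.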

Right-Mind