Giving new meaning to the term 'gaydar'—or perhaps to 'Turing test'?
"Deep neural networks are more accurate than humans at detecting sexual orientation from facial images" is the title of an article by Stanford University's Michal Kosinski and Yilun Wang, to be published in the Journal of Personality and Social Psychology. The abstract:
We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation. Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).
Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy. Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people's intimate traits, our findings expose a threat to the privacy and safety of gay men and women.
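The pipeline the abstract describes is simple: a deep neural network extracts a feature vector from each face, and a plain logistic regression classifies those vectors; with several images per person, per-image predictions are combined. Here is a minimal sketch of that two-stage setup, with random vectors standing in for the DNN face embeddings (the paper's actual features came from a pretrained face-recognition network) and synthetic labels — illustrative only, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_features = 1000, 128

# Synthetic stand-ins for DNN face embeddings: two classes whose feature
# means differ slightly, mimicking a weak per-feature signal.
y = rng.integers(0, 2, size=n_people)
X = rng.normal(size=(n_people, n_features)) + 0.25 * y[:, None]

# Stage two of the pipeline: logistic regression on the extracted features.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
single_acc = clf.score(X_te, y_te)
print(f"single-image accuracy: {single_acc:.2f}")

# With multiple images per person, average the per-image probabilities
# (one way to combine predictions, as the abstract's five-image figures
# suggest) before thresholding.
n_images = 5
views = [X_te + rng.normal(scale=0.5, size=X_te.shape) for _ in range(n_images)]
probs = np.mean([clf.predict_proba(v)[:, 1] for v in views], axis=0)
multi_acc = np.mean((probs > 0.5) == y_te)
print(f"five-image accuracy: {multi_acc:.2f}")
```

The point of the sketch is the division of labor: the heavy lifting is in the learned features, while the classifier on top is deliberately simple, and averaging over several images of the same person reduces per-image noise.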
The images and the sexual orientation information were drawn from an online dating site. Note that the study was limited to white people from the United States, because of the relative lack of images of nonwhite gays and lesbians on the site.
As to the privacy matter noted by the last sentence of the abstract:
Previous studies found that sexual orientation can be detected from an individual's digital footprints, such as social network structure (Jernigan & Mistree, 2009) or Facebook Likes (Kosinski, Stillwell, & Graepel, 2013). Such digital footprints, however, can be hidden, anonymized, or distorted. One's face, on the other hand, cannot be easily concealed. A facial image can be easily taken and analyzed (e.g., with a smartphone or through CCTV). Facial images of billions of people are also stockpiled in digital and traditional archives, including dating platforms, photo-sharing websites, and government databases. Such pictures are often easily accessible; Facebook, LinkedIn, and Google Plus profile pictures, for instance, are public by default and can be accessed by anyone on the Internet.
Our findings suggest that such publicly available data and conventional machine learning tools could be employed to build accurate sexual orientation classifiers. As much of the signal seems to be provided by fixed morphological features, such methods could be deployed to detect sexual orientation without a person's consent or knowledge. Moreover, the accuracies reported here are unlikely to constitute an upper limit of what is possible. Employing images of a higher resolution, larger numbers of images per person, larger training set, and more powerful DNN algorithms (e.g., He, Zhang, Ren, & Sun, 2015) could further boost accuracy.
The study also reports which facial areas proved most informative to the algorithm in distinguishing gay from heterosexual faces:
The most informative facial areas among men included the nose, eyes, eyebrows, cheeks, hairline, and chin; informative areas among women included the nose, mouth corners, hair, and neckline.
This is far from my area of expertise, so I can't speak to how reliable this is—but it seemed interesting enough to pass along.
UPDATE: Here is the authors' note, which offers a helpful summary and responses to questions.