Judgmental AI mirror rates how trustworthy, kind you are based on your looks


July 31, 2018 at 10:20PM

By Luke Dormehl

Content Provided by Digital Trends

As the success of the iPhone X’s Face ID confirms, lots of us are thrilled to bits at the idea of a machine that can identify us based on our facial features. But how happy would you be if a computer used your facial features to start making judgments about your age, your gender, your race, your attractiveness, your trustworthiness, or even how kind you are?

Chances are that, somewhere down the line, you'd start to get a bit freaked out, especially if the A.I. in question were using this information in a way that controlled the opportunities or options made available to you.

Exploring this tricky (and somewhat unsettling) side of artificial intelligence is a new project from researchers at the University of Melbourne in Australia. Taking the form of a smart biometric mirror, their device uses facial-recognition technology to analyze users’ faces, and then presents an assessment in the form of 14 different characteristics it has “learned” from what it’s seen.
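The researchers haven't published the Mirror's internals, but systems of this kind typically reduce a photo to a fixed-length feature vector and then apply a separately trained scorer per trait. The sketch below is purely illustrative: the embedding function, trait list, and weights are all hypothetical stand-ins, not the Melbourne team's code.

```python
# Illustrative trait-scoring pipeline. Everything here is a hypothetical
# stand-in: the Biometric Mirror's real model, traits, and training data
# are not public.
import numpy as np

TRAITS = ["trustworthiness", "kindness", "aggressiveness"]  # 3 of the 14

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained face-embedding network that maps a
    photo to a fixed-length feature vector."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % 2**32)
    return rng.standard_normal(128)

# Hypothetical per-trait linear scorers, e.g. fit to crowd-labelled photos.
rng = np.random.default_rng(0)
weights = {trait: rng.standard_normal(128) for trait in TRAITS}

def score_traits(image: np.ndarray) -> dict:
    vec = embed_face(image)
    # Squash each dot product into a 0-1 "score" with a logistic sigmoid.
    return {t: float(1.0 / (1.0 + np.exp(-(w @ vec) / np.sqrt(128))))
            for t, w in weights.items()}

photo = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder image
print(score_traits(photo))
```

Note that a pipeline like this emits a number for every trait on every face; nothing in the arithmetic checks whether "kindness" is actually inferable from pixels, which is exactly the disconnect the project puts on display.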

“Initially, the system is quite secretive about what to expect,” Dr. Niels Wouters, one of the researchers who worked on the project, told Digital Trends. “Nothing more than, ‘hey, do you want to see what computers know about you?’ is what lures people in. But as they give consent to proceed and their photo is taken, it gradually shows how personal the feedback can get.”

[Image: the Biometric Mirror rates users' looks]

As Wouters points out, problematic elements are present from the beginning, although not all users may immediately realize it. For example, the system only allows binary genders, and can recognize just five ethnicities, meaning that an Asian student might be recognized as Hispanic, or an Indigenous Australian as African. Later assessments, such as a person's level of responsibility or emotional stability, will likely prompt a response from everyone who uses the device.
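One way to see why a closed label set misfires is that a classifier over five fixed ethnicities must pick one of the five, no matter how poorly any of them fits. A toy illustration, with hypothetical labels and numbers rather than the actual model:

```python
# Why a closed label set misclassifies by construction. Labels and
# numbers here are hypothetical; the real model's classes are not public.
import numpy as np

LABELS = ["Caucasian", "African", "Hispanic", "East Asian", "Middle Eastern"]

def softmax(logits: np.ndarray) -> np.ndarray:
    e = np.exp(logits - logits.max())  # numerically stabilised softmax
    return e / e.sum()

# A face outside all five classes tends to produce low, closely bunched
# logits: the model is unsure about everything.
logits = np.array([0.1, 0.4, 0.3, 0.2, 0.1])
probs = softmax(logits)

# argmax still has to pick something, so this user gets labelled
# "African" at only ~24% confidence: the closed set guarantees an
# answer even when every available answer is wrong.
print(LABELS[int(probs.argmax())], round(float(probs.max()), 2))
```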

The idea is to show the dangers of biased data sets, and the way that problematic or discriminatory behavior can become encoded in machine learning systems. This is something that Dr. Safiya Umoja Noble did a great job of discussing in her recent book Algorithms of Oppression.
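A toy example makes that mechanism concrete: if historical labels were produced by biased annotators, an otherwise well-behaved model reproduces the prejudice as a learned weight. Everything below is synthetic, with the bias exaggerated for illustration.

```python
# Toy demonstration of a biased data set becoming an encoded rule.
# All data is synthetic and the bias is exaggerated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
skill = rng.standard_normal(n)       # the genuinely job-relevant signal

# Historical "good employee" labels depend on skill, but the annotators
# also favoured group 0; that prejudice is baked into the labels.
label = (skill + 1.5 * (group == 0)
         + 0.3 * rng.standard_normal(n) > 0.75).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# The model faithfully learns the prejudice: a large negative weight on
# the group feature, penalising group 1 members regardless of skill.
print(dict(zip(["skill", "group"], model.coef_[0])))
```

Dropping the group column wouldn't fix this either: any feature correlated with group membership, a postcode for instance, lets the model reconstruct the same rule indirectly.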

“[At present, the discussion surrounding these kinds of issues in A.I.] is mostly led by ethicists, academics, and technologists,” Wouters continued. “But with an increasing number of A.I. deployments in society, people need to be made more aware of what A.I. is, what it can do, how it can go wrong, and whether it’s even the next logical step in evolution we want to embrace.”

With artificial intelligence increasingly used to make judgments about everything from whether we’ll make a good employee to our levels of aggression, devices such as the Biometric Mirror will only become more relevant.

Digital Trends helps readers keep tabs on the fast-paced world of tech with all the latest news, fun product reviews, insightful editorials, and one-of-a-kind sneak peeks.

http://www.kswo.com/story/38778974/judgmental-ai-mirror-rates-how-trustworthy-you-are-based-on-your-looks