AI Listened to People’s Voices. Then It Generated Their Faces.

By Mindy Weisberger, Senior Writer | June 11, 2019 06:43am ET

The algorithm approximated faces based on gender, ethnicity and age, rather than individual characteristics. Credit: Oh et al.

Have you ever constructed a mental image of a person you’ve never seen, based solely on their voice? Artificial intelligence (AI) can now do that, generating a digital image of a person’s face using only a brief audio clip for reference.

Named Speech2Face, the neural network — a computer that “thinks” in a manner similar to the human brain — was trained by scientists on millions of educational videos from the internet that showed over 100,000 different people talking.

From this dataset, Speech2Face learned associations between vocal cues and certain physical features in a human face, researchers wrote in a new study. The AI then used an audio clip to model a photorealistic face matching the voice.
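To make the idea concrete, the sketch below shows the general shape of such a pipeline: an encoder turns a short voice spectrogram into a face-feature vector, and a separate decoder renders a face image from that vector. This is a purely illustrative toy, not the authors' architecture; every layer size, class name and the fake spectrogram are assumptions made for the example.

```python
# Illustrative sketch (NOT the Speech2Face authors' code): a voice encoder maps
# a spectrogram to a face embedding, and a decoder renders a face from it.
# All shapes and layer choices below are invented for demonstration.

import torch
import torch.nn as nn

class VoiceEncoder(nn.Module):
    """Maps a 1-channel voice spectrogram to a face-feature vector."""
    def __init__(self, embed_dim: int = 4096):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Linear(32 * 8 * 8, embed_dim)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        x = self.conv(spectrogram)        # (batch, 32, 8, 8)
        return self.fc(x.flatten(1))      # (batch, embed_dim) face embedding

class FaceDecoder(nn.Module):
    """Renders a low-resolution RGB face image from a face embedding."""
    def __init__(self, embed_dim: int = 4096):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 3 * 64 * 64)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(embedding)).view(-1, 3, 64, 64)

# Usage: one fake spectrogram in, one 64x64 face estimate out.
spec = torch.randn(1, 1, 128, 300)            # placeholder audio features
face = FaceDecoder()(VoiceEncoder()(spec))
print(face.shape)                              # torch.Size([1, 3, 64, 64])
```

In the actual study, training on talking-head videos lets the encoder learn which vocal cues correlate with broad facial attributes such as age, gender and ethnicity, which is why the generated faces capture demographics rather than individual likenesses.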

The findings were published online May 23 in the preprint repository arXiv and have not been peer-reviewed.

Read the full story: https://www.livescience.com/65689-ai-human-voice-face.html
