Here come the AI impostors

Advances in data science and interface technologies will be major forces in a variety of ways. For instance, in the next few years we will see the emergence of a broad variety of artificially intelligent characters with which we will interact more and more frequently. I recently hosted a working group of experts on this subject, convening people working on artificial intelligence, interaction design, and related questions. We asked them to predict when we will regularly see AI characters in our lives; the majority believed this would happen in the next 3-5 years.

These characters may be animated, taking any form, or they may be hyper-realistic, passing themselves off as human. They will engage with us in all aspects of our lives, from entertainment to education to healthcare to commerce. For many, these characters will serve as the first point of contact with the Internet, and thus with news, information, and politics.

There are already examples of what this world might look like. Thousands of chatbots and other automated systems engage, and sometimes frustrate, us today. But consider projects like “Baby X,” out of the Auckland Bioengineering Institute’s Laboratory for Animate Technologies: a digital creature imbued with emotional characteristics and realistic expressions that responds to conversational stimuli.
Or consider Miquela Sousa, otherwise known as “Lil Miquela,” who is billed as a Brazilian model and musician but is in fact the fictional creation of an agency. She isn’t real, yet she is a legitimate celebrity, with more than a million followers on Instagram and other social networks.

It’s not hard to see the trend line. Eventually, we’ll regularly interact with a variety of digital impostors: composites of characteristics, designed to our individual preferences, with synthesized personalities. We will all have the ability to easily create puppets out of any person, object, or voice using deep learning methods. Methods presented by Stanford researchers at the computer graphics conference SIGGRAPH this year show just how good video-driven puppeteering of characters already is. Generative adversarial networks make it possible to create, from any video, puppets that are incredibly difficult to discern from the real thing. Some people have put this to use to re-animate world leaders and make them say whatever they wish; others have used the technology to create pornographic “deep fakes,” as the results of these methods are now termed.

More mundane applications of the technology are slowly creeping into consumer products. Try Mug Life, which lets you turn any photo into an animated 3D model. Security experts fear the blurring of the line between what is real and what is synthesized, and the spread of hoaxes and lies. But these technologies will also create new means of expression, new tools to create art, and new ways to engage people. We will have to find a balance.
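For readers curious about the adversarial training behind these systems, here is a minimal sketch of the core idea on one-dimensional toy data, not any real deepfake pipeline: a generator learns to mimic a target distribution while a discriminator learns to tell real samples from generated ones. All layer shapes, learning rates, and the choice of a simple affine generator are illustrative assumptions.

```python
import numpy as np

# Toy GAN sketch: generator tries to mimic samples from N(4, 1.25),
# discriminator is a logistic classifier on scalars. Real deepfake
# systems use deep convolutional networks; the adversarial loop is the same.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise z -> sample, a single affine map for simplicity.
g_w, g_b = rng.normal(), 0.0
# Discriminator: sample x -> probability "real" (logistic regression).
d_w, d_b = rng.normal(), 0.0

lr = 0.01
for step in range(2000):
    x_real = rng.normal(4.0, 1.25, size=32)   # real data batch
    z = rng.normal(size=32)                    # noise batch
    x_fake = g_w * z + g_b                     # generated batch

    # Discriminator update: push p(real) toward 1 and p(fake) toward 0
    # (gradient ascent on the binary cross-entropy objective).
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    d_w += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: push the discriminator's p(fake) toward 1
    # (the non-saturating generator loss).
    p_fake = sigmoid(d_w * x_fake + d_b)
    grad_x = (1 - p_fake) * d_w   # gradient of generator loss w.r.t. x_fake
    g_w += lr * np.mean(grad_x * z)
    g_b += lr * np.mean(grad_x)

# Draw samples from the trained generator.
samples = g_w * rng.normal(size=1000) + g_b
print(float(samples.mean()))  # should drift toward the real mean of 4
```

The two networks improve each other in lockstep: as the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones, which is exactly the dynamic that makes GAN-produced video so hard to discern.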