Teaching Machines to Feel

The next generation of AI will transcend information processing, which is quantitative in nature, and step firmly into the qualitative realm.


Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Parry Bedi will be speaking at IIeX North America (June 13-15 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Parry Bedi, CEO, GlimpzIt

In the not-so-distant future, machines will become sentient. Not in a “Terminator” kind of way, but in a way where they genuinely feel empathy, happiness, even serenity. They will be our trusted counselors and thought partners, understanding our ways of life and accompanying us on the human journey.

Welcome to the age of Emotional Machines.

But isn’t Artificial Intelligence (AI) just about asking Siri for directions to the nearest Chinese restaurant? Or beating Lee Sedol in a game of Go? These are applications of AI, indeed, but this is just the beginning. The next generation of AI will transcend information processing, which is quantitative in nature, and step firmly into the qualitative realm. Machines will be able to make small talk, get you as a person and, perhaps more importantly, share your feelings, hopes and aspirations.

Heady stuff, “but what does MR have to do with this future?” you ask. Everything. Imparting emotions to machines first requires that we understand humans. This is where qualitative MR, with its emphasis on emotions and behaviors, comes in. The rich datasets we obtain through qualitative MR methods are ideal for training machine algorithms through supervised learning.

Here is a quick overview of how GlimpzIt is approaching this fascinating new space:

Just like humans, AI learns through repetition and practice (known as training in machine learning parlance). But finding accurately categorized data sets to train on has traditionally been an insurmountable challenge, especially since we need not only volume and diversity but also contextual specificity in our data sets. As you can imagine, this is not an easy feat to accomplish using traditional methodologies such as focus groups or IDIs. However, thanks to two of the biggest secular trends of our times – namely the rise of the visual web and crowdsourcing – we not only get access to a trove of rich qualitative data (i.e., visuals), but can also use crowdsourcing to categorize it effectively. This combination enables AI algorithms to determine when they have made a mistake (the loss function, for those who are technically inclined) and learn from it.
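To make the idea concrete, here is a minimal, purely illustrative sketch of supervised learning with crowdsourced labels – a toy logistic regression where a cross-entropy loss tells the model when its guess disagrees with the human label. The features and data are hypothetical; this is not GlimpzIt’s actual system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(y_true, y_pred):
    # The "loss function": large when the model's prediction disagrees
    # with the crowdsourced label, near zero when it agrees.
    eps = 1e-9
    return -(y_true * math.log(y_pred + eps)
             + (1 - y_true) * math.log(1 - y_pred + eps))

def train(examples, epochs=200, lr=0.5):
    # examples: list of (feature_vector, crowdsourced_label) pairs.
    # Repeated passes over the data = "repetition and practice".
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            y_pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = y_pred - y  # gradient of cross-entropy w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical features: [fraction of items hung up, clutter score];
# label 1 means the crowd tagged the image "organized".
data = [
    ([0.9, 0.1], 1),
    ([0.8, 0.2], 1),
    ([0.2, 0.9], 0),
    ([0.1, 0.8], 0),
]
w, b = train(data)
```

After training, `predict` scores a new image’s features: values near 1 mean the model expects the crowd would call it “organized”.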

So why start with visuals? The simple reason is that visuals are the new lingua franca of the world (one that transcends cultural and linguistic barriers), as they are the most concise and expressive units of communication. In fact, by 2017, 74% of internet traffic is projected to be visual. Is this any surprise, given the rise of platforms such as Pinterest, Instagram, and Snapchat?

Understanding the human context behind this language can still be challenging. While large strides have been made in the last three years in the area of computer vision to identify image content, machines have so far struggled to understand human emotional factors. For example, a computer today will classify an image of a bedroom closet as “Levi’s” or “Hanging Jeans” (object identification), but humans may additionally associate “Organized” or “Accessible” (sentiment attribution) with the image. Thankfully, by mining open social media platforms and crowdsourcing, we can quickly gather visual + text content that can be analyzed for emotions and behaviors as well. When processed using deep learning algorithms, this visual and text data together is used to build a dynamic ontology of meaningful insight categories, which are then auto-cleansed and fed back into the system, thereby creating an ever-evolving virtuous cycle.
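One simple way to picture the object-tag-to-sentiment step is a co-occurrence table: count how often crowdsourced sentiment words appear alongside machine-detected object tags, so new images with familiar tags inherit likely sentiments. The sketch below is an assumed, deliberately simplified stand-in for the deep learning pipeline described above; all names and data are hypothetical.

```python
from collections import defaultdict

def build_ontology(annotated_images):
    # annotated_images: list of (object_tags, sentiment_words) pairs,
    # where object tags come from computer vision and sentiment words
    # come from crowdsourced human annotation.
    counts = defaultdict(lambda: defaultdict(int))
    for tags, sentiments in annotated_images:
        for tag in tags:
            for s in sentiments:
                counts[tag][s] += 1
    return counts

def likely_sentiments(ontology, tags, min_count=2):
    # Score each sentiment by its total co-occurrence with the
    # image's tags; keep only sentiments seen often enough.
    scores = defaultdict(int)
    for tag in tags:
        for s, c in ontology.get(tag, {}).items():
            scores[s] += c
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [s for s, c in ranked if c >= min_count]

crowd_data = [
    (["hanging jeans", "closet"], ["organized", "accessible"]),
    (["closet", "shelves"], ["organized"]),
    (["hanging jeans"], ["accessible"]),
]
ontology = build_ontology(crowd_data)
```

A real system would learn these associations with deep networks over pixels and text rather than raw counts, but the feedback loop is the same: human annotations flow in, the ontology updates, and fresh predictions get re-checked against new crowd labels.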

Even though we are still in the early stages of realizing the full potential of emotional AI, several companies today are using this technology to 1) create products that compete on value, not price, and 2) refine their pitch and ad creative so their marketing content effectively conveys benefits (not simply features).


3 Responses to “Teaching Machines to Feel”

  1. Victor Crain says:

    May 29th, 2016 at 12:15 pm

There’s been discussion of machines learning human emotions for several years. Every time I see the subject, I’m really not sure whether to laugh or cry. When one looks at the range of emotions and the size of the psychiatric handbook, and realizes that most humans qualify for some kind of mental illness diagnostic code, then just what are machines being taught and by whom? A crude attempt to code emotions (and anything short of one billion lines of code probably qualifies as crude) makes the HAL scenario in the movie “2001” highly feasible. To borrow from Twain, where would that leave us? “Scarce, sir, mighty scarce.”

  2. Kevin Gray says:

    May 31st, 2016 at 3:26 pm

    There is persistent confusion in the mass media between our ability to design AI that can recognize (to a limited extent) human emotions (from facial expressions, for example) and machines that themselves are sentient, and can truly feel emotions and think. This (http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI) by Roger Schank is worth reading.

  3. Vic Crain says:

    May 31st, 2016 at 6:56 pm

    Thanks, Kevin!
