Tech-Giant Seeking Redemption?

As first spotted by Protocol, Microsoft has filed a patent for a chatbot that could talk like a specific person.

“In aspects, social data (e.g., images, voice data, social media posts, electronic messages, written letters, etc.) about the specific person may be accessed. The social data may be used to create or modify a special index in the theme of the specific person’s personality.”

“The specific person [who the chat bot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.,”

“The specific person may also correspond to oneself (e.g., the user creating/training the chat bot).”

Source: Microsoft patent description
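The filing doesn’t disclose an actual implementation, but the core idea of a “special index” of someone’s personality is easy to sketch. Here is a toy Python illustration (every name in it is hypothetical, not taken from the patent) of how posts and messages might be distilled into characteristic phrases that could later condition a chatbot’s replies:

```python
# Illustrative sketch only -- the patent does not disclose an implementation.
# All names here (PersonaIndex, add_social_data, style_hints) are hypothetical.
from collections import Counter
import re


class PersonaIndex:
    """Toy 'personality index' built from a person's social data."""

    def __init__(self, name: str):
        self.name = name
        self.phrases = Counter()  # characteristic two-word expressions

    def add_social_data(self, texts: list[str]) -> None:
        """Ingest posts, messages, etc., and count characteristic phrases."""
        for text in texts:
            words = re.findall(r"[a-z']+", text.lower())
            for i in range(len(words) - 1):
                self.phrases[" ".join(words[i : i + 2])] += 1

    def style_hints(self, k: int = 5) -> list[str]:
        """Return the person's most frequent two-word expressions."""
        return [phrase for phrase, _ in self.phrases.most_common(k)]


# Usage: hints like these could condition a language model's replies.
index = PersonaIndex("Alice")
index.add_social_data([
    "Totes exhausted, swagulated too hard today!",
    "Swagulated too hard again, not gonna lie.",
])
print(index.style_hints())  # e.g. ['swagulated too', 'too hard', ...]
```

A real system would presumably feed such signals into a large language model rather than matching raw phrase counts, but the principle is the one the patent describes: model the person from the data they left behind.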

This reads more or less like the plot of the Black Mirror episode “Be Right Back.” In the episode, a young woman’s boyfriend is killed in an accident, and she uses an AI service to bring his “online presence” back to life in the form of a chatbot. The system mines the data of his online behavior from his social media platforms to approximate what he was like in real life. She is later even able to upgrade the chatbot to a robot!

This kind of experiment has great potential, but it also poses huge risks. The first is the obvious one: privacy. To take on another person’s personality, the AI has to scan through all the public information the deceased ever published. It may even go one step further and demand read access to the account itself, and in doing so dig up details the person might have wanted kept secret.

Earlier, in March 2016, Microsoft had created an AI chatbot on Twitter named Tay.ai. It was meant to test and improve Microsoft’s understanding of conversational language. The team behind Tay said its conversational abilities were built by “mining relevant public data” and combining that with input from editorial staff, “including improvisational comedians.” The bot was supposed to learn and improve as it talked to people, becoming more natural and better at understanding input over time.

It started out well:

@HereIsYan omg totes exhausted.
swagulated too hard today.
hbu? — TayTweets (@TayandYou) March 23, 2016

@themximum damn. tbh i was kinda distracted.. u got me. — TayTweets (@TayandYou) March 23, 2016

Source: The Verge

But soon after that, people started tweeting all sorts of misogynistic, racist, and obscene remarks at the bot. The bot fed on that input and began spitting out similar remarks of its own.
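Microsoft never published Tay’s internals, but the failure mode itself is easy to reproduce: a bot that stores and replays whatever users send it will echo their worst input right back. The sketch below (illustrative Python, not Tay’s actual code) shows the naive pattern along with the obvious first defence, screening messages before they enter the bot’s memory:

```python
# Hedged sketch of the failure mode, not Tay's actual architecture.
# A bot that learns verbatim from every user message will repeat
# whatever it is fed; filtering at intake is the minimal first fix.
import random

# Placeholder terms; a real system would use trained classifiers.
BLOCKLIST = {"slur1", "slur2"}


class NaiveLearningBot:
    def __init__(self):
        self.memory: list[str] = []

    def learn(self, message: str) -> None:
        # The unsafe version appended unconditionally; here we screen first.
        if not any(term in message.lower() for term in BLOCKLIST):
            self.memory.append(message)

    def reply(self) -> str:
        # Replays a previously learned message, or a default greeting.
        return random.choice(self.memory) if self.memory else "hi!"


bot = NaiveLearningBot()
bot.learn("omg totes exhausted")   # stored
bot.learn("something with slur1")  # rejected by the intake filter
print(bot.reply())                 # only ever echoes screened messages
```

A blocklist is, of course, only the crudest possible guardrail; the broader lesson from Tay is that anything learned from the open internet needs moderation before it is learned, not after it is repeated.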

Let’s hope it works right this time around!
