According to a report by Protocol, Microsoft has filed a patent for a chatbot that could talk like a specific person.
“In aspects, social data (e.g., images, voice data, social media posts, electronic messages, written letters, etc.) about the specific person may be accessed. The social data may be used to create or modify a special index in the theme of the specific person’s personality.”
“The specific person [who the chat bot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.,”
“The specific person may also correspond to oneself (e.g., the user creating/training the chat bot).”

Patent description
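As a rough illustration of what an index built “in the theme of” a person’s posts might look like, here is a toy retrieval sketch in Python. This is purely hypothetical; the patent does not describe an implementation, and a real system would use far more sophisticated language modeling than the word-overlap matching shown here.

```python
# Hypothetical sketch (not Microsoft's method): index a person's past posts
# and answer a new prompt by retrieving the most similar stored post,
# using simple bag-of-words overlap as the similarity measure.
from collections import Counter

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

def build_index(posts):
    # Store each post alongside its bag-of-words counter.
    return [(post, Counter(tokenize(post))) for post in posts]

def respond(index, prompt):
    # Return the stored post whose vocabulary overlaps the prompt most.
    query = Counter(tokenize(prompt))
    def overlap(bag):
        return sum(min(bag[w], query[w]) for w in query)
    best_post, _ = max(index, key=lambda item: overlap(item[1]))
    return best_post

posts = [
    "Just finished a great hike in the mountains!",
    "Nothing beats coffee and a good book on a rainy day.",
    "Excited about the new season of my favourite show.",
]
index = build_index(posts)
print(respond(index, "Do you like coffee?"))
```

Even this crude sketch makes the privacy issue concrete: the “personality” lives entirely in the archive of the person’s own words.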
This is more or less the plot of the Black Mirror episode “Be Right Back.” In it, a young woman’s boyfriend is killed in an accident, and she uses an AI service to bring his “online presence” back to life in the form of a chatbot. The system mines the data he left behind on his social media accounts to reconstruct how he behaved in real life. She is later able to upgrade the chatbot to a physical robot!
An experiment of this kind has great potential, but it also poses serious risks. The obvious one is privacy. To take on someone’s personality, the AI has to scan all the public information that person published. It might go a step further and require read access to the account itself, digging up details the person would have wanted kept private.
Earlier, in March 2016, Microsoft had launched an AI chatbot on Twitter named Tay.ai, meant to test and improve Microsoft’s understanding of conversational language. The team behind Tay said its conversational abilities were built by “mining relevant public data” and combining that with input from editorial staff, “including improvisational comedians.” The bot was supposed to learn and improve as it talked to people, becoming more natural and better at understanding input over time.
It started out well enough.
But soon after, people started tweeting all sorts of misogynistic, racist, and obscene remarks at the bot. It fed on that input and began spitting out similar remarks of its own.
Let’s hope it works out better this time!