The Future Is Here

Microsoft Landed a Patent to Turn You Into a Chatbot

Photo: Stan Honda (Getty Images)

What if the most significant measure of your life’s labors has nothing to do with your lived experiences but merely your unintentional generation of a realistic digital clone of yourself, a specimen of ancient man for the amusement of people of the year 4500, long after you have departed this mortal coil? This is the least horrifying question raised by a recently granted Microsoft patent for an individual-based chatbot.

First noticed by The Independent, the patent was filed in 2017 and granted just last month. The United States Patent and Trademark Office confirmed to Gizmodo via email that the patent does not yet permit Microsoft to make, use, or sell the technology, only to prevent others from doing so.

Hypothetical Chatbot You (envisioned in detail here) would be trained on “social data,” which includes public posts, private messages, voice recordings, and video. It could take 2D or 3D form. It could be a “past or present entity”; a “friend, a relative, an acquaintance, [ah!] a celebrity, a fictional character, a historical figure,” and, ominously, “a random entity.” (The last one, we could guess, might be a talking version of the photorealistic machine-generated portrait library ThisPersonDoesNotExist.) The technology could allow you to record yourself at a “certain phase in life” to communicate with the young you in the future.
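
To make that inventory concrete, here is a minimal Python sketch of what a record of such “social data” might look like. Everything below is my own illustration; none of the class or field names come from the patent.

```python
# Illustrative only: a guess at the shape of the "social data" the patent
# describes. All class and field names are hypothetical, not the filing's.
from dataclasses import dataclass, field


@dataclass
class SocialData:
    public_posts: list[str] = field(default_factory=list)      # posts, images
    private_messages: list[str] = field(default_factory=list)  # DMs, emails
    voice_recordings: list[str] = field(default_factory=list)  # audio file paths
    video_clips: list[str] = field(default_factory=list)       # video file paths


@dataclass
class ChatbotSubject:
    name: str
    entity_type: str  # "friend", "relative", "celebrity"... or "a random entity"
    rendering: str    # the patent contemplates 2D or 3D embodiment
    data: SocialData = field(default_factory=SocialData)


# Hypothetical usage: preserving yourself at a "certain phase in life."
me_at_25 = ChatbotSubject(name="You, age 25", entity_type="past self", rendering="2D")
me_at_25.data.public_posts.append("omg")
```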

I personally relish the fact that my chatbot would be useless thanks to my limited text vocabulary (“omg” “OMG” “OMG HAHAHAHA”), but the minds at Microsoft considered that. The chatbot can form opinions you don’t have and answer questions you’ve never been asked. Or, in Microsoft’s words, “one or more conversational data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data.” Filler commentary might be guessed through crowdsourced data from people with aligned interests and opinions, or from demographic info like gender, education, marital status, and income level. It might imagine your take on an issue based on “crowd-based perceptions” of events. “Psychographic data” is on the list.
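
To sketch how that fallback chain might behave, here is a hedged Python example: the bot prefers things the person actually said, then guesses from crowd data, then gives up. The class, fields, and lookup scheme are hypothetical stand-ins, not Microsoft’s design.

```python
# Hypothetical sketch of the patent's fallback chain; every name and data
# structure here is invented for illustration, not taken from the filing.
from typing import Optional


class PersonaBot:
    def __init__(self, social_data: dict[str, str], crowd_store: dict[str, str]):
        # social_data: the subject's own posts/messages, keyed by topic
        self.social_data = social_data
        # crowd_store: answers pooled from users with similar demographics
        self.crowd_store = crowd_store

    def reply(self, topic: str) -> Optional[str]:
        # 1. Prefer something the person actually said.
        if topic in self.social_data:
            return self.social_data[topic]
        # 2. Fall back to "crowd-based perceptions" from similar people.
        if topic in self.crowd_store:
            return f"[inferred] {self.crowd_store[topic]}"
        # 3. The filing mentions external conversational data stores/APIs as
        #    a further fallback; here we simply admit we have no data.
        return None


bot = PersonaBot(
    social_data={"pizza": "OMG HAHAHAHA pineapple belongs on pizza"},
    crowd_store={"taxes": "Most users in this demographic dislike filing taxes."},
)
print(bot.reply("pizza"))  # verbatim from the subject's own messages
print(bot.reply("taxes"))  # guessed from crowd data, flagged as inferred
```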

In summary, we’re looking at a Frankenstein’s monster of machine learning, reviving the dead through unchecked, highly personal data harvesting.

“That is chilling,” Jennifer Rothman, University of Pennsylvania law professor and author of The Right of Publicity: Privacy Reimagined for a Public World, told Gizmodo via email. If it’s any reassurance, such a project sounds like legal agony. She predicted that such technology could attract disputes around the right to privacy, the right of publicity, defamation, the false light tort, trademark infringement, copyright infringement, and false endorsement, “to name only a few.” (Arnold Schwarzenegger has charted the territory with this head.)

She went on:

It could also violate the biometric privacy laws in states, such as Illinois, that have them. Presuming that the collection and use of the data is authorized and people affirmatively opt in to the creation of a chatbot in their own image, the technology still raises concerns if such chatbots are not clearly demarcated as impersonators. One can also imagine a host of abuses of the technology similar to those we see with the use of deepfake technology—likely not what Microsoft would plan but nevertheless that can be anticipated. Convincing but unauthorized chatbots could create issues of national security if a chatbot, for example, is purportedly speaking for the President. And one can imagine that unauthorized celebrity chatbots might proliferate in ways that could be sexually or commercially exploitative.

Rothman noted that while we have lifelike puppets (deepfakes, for example), this patent is the first she’s seen that combines such tech with data harvested through social media. There are some ways Microsoft might mitigate concerns, such as varying the degree of realism and adding clear disclaimers. Embodiment as Clippy the paperclip, she said, might help.

It’s unclear what level of consent would be required to compile enough data for even the lumpiest digital waxwork, and Microsoft did not share potential user agreement guidelines. But existing laws governing data collection (the California Consumer Privacy Act, the EU’s General Data Protection Regulation) would likely throw a wrench in chatbot creation. On the other hand, Clearview AI, which notoriously provides facial recognition software to law enforcement and private companies, is currently litigating its right to monetize its repository of billions of avatars scraped from public social media profiles without users’ consent.

Lori Andrews, an attorney who has helped inform guidelines for the use of biotechnologies, imagined an army of rogue evil twins. “If I were running for office, the chatbot could say something racist as if it were me and dash my prospects for election,” she said. “The chatbot could gain access to various financial accounts or reset my passwords (based on information conglomerated such as a pet’s name or mother’s maiden name which are often accessible from social media). A person could be misled or even harmed if their therapist took a two-week vacation, but a chatbot mimicking the therapist continued to provide and bill for services without the patient’s knowledge of the switch.”

Hopefully, this future never comes to pass, and Microsoft has offered some recognition that the technology is creepy. When asked for comment, a spokesperson directed Gizmodo to a tweet from Tim O’Brien, General Manager of AI Programs at Microsoft. “I’m looking into this - appln date (Apr. 2017) predates the AI ethics reviews we do today (I sit on the panel), and I’m not aware of any plan to build/ship (and yes, it’s disturbing).”