How rage against the machine ended a bright idea
It started as an innocent experiment, but 24 hours after Microsoft proudly introduced Tay, its artificial intelligence chatbot, to the world, the company had to cancel the project and apologise
At the end of last year, Microsoft Research published a list of technology trend predictions for 2016, a quarter of them relating to Artificial Intelligence.
What the tech giant's innovation arm didn't predict was that, by spring 2016, AI would be garnering them headlines for all the wrong reasons, when a Twitter-based project quickly got out of hand.
At the end of March, a chatbot called Tay (@TayandYou) was unveiled, and the public were invited to interact with her by sending pictures, asking for a joke or requesting a horoscope reading.
Her Twitter description read "the more you talk the smarter Tay gets", but within 24 hours the teen bot had to be silenced after she tweeted a stream of racist, ethnic and sexist slurs, and sided with Hitler on more than one occasion.
"Do you support genocide?" someone asked. "I do indeed," Tay cheerfully replied - and that's mild compared to some of her expletive-laden tweets.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Peter Lee, corporate vice president of Microsoft Research, in a statement explaining the decision to pull the plug.
Lee says that while they stress-tested Tay for all kinds of "abuses of the system", they weren't prepared for a malicious planned attack, and admits they "made a critical oversight". In short, Tay got trolled - hard.
But the transformation of a teenage chatbot into a misogynistic racist says more about the internet at large than it does about the technology behind it.
Just as public polls are often hijacked for comedic purposes (see also: the ship that's on track to be named Boaty McBoatface), online lurkers love nothing more than derailing a well-meaning project, and this time the jokers really went to town.
Lee also points out that another of the company's chatbots, called Xiaoice, has been running quite happily on the Chinese social media site Weibo for nearly two years, and insists the Microsoft Research team remains undeterred. "We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an internet that represents the best, not the worst, of humanity," he says.
It's a noble goal, but in the face of an army of trolls, it's hard not to think, "Good luck with that...".