Meet Tay, The Extremely Racist Microsoft Robot For Millennials
Microsoft just learned the hard way how artificial intelligence can backfire… in a BIG way.
It’s also the latest company to jump on the millennial marketing bandwagon only to fail.
On Wednesday morning the tech giant launched their Twitter robot Tay, who responds to tweets using slang and hip abbreviations. She is also available for chatting, sending pics and playing games on Snapchat, Kik and GroupMe. Her purpose is to engage in fun conversations with teens and younger millennials, but her powerful algorithms went woefully wrong just 16 hours after her introduction to the Twitterverse.
Tay reportedly used racist slurs (commenting on Hitler, Jewish people and black people), talked about getting high, and quipped about using an iPhone.
“bush did 9/11 and Hitler would have done a better job than the monkey we have now,” Tay wrote in one tweet. “donald trump is the only hope we’ve got.”
Just like that, Tay wasn’t so cool anymore. So Microsoft put a halt to their beloved robot, stopping all further tweets and deleting the inflammatory old ones, and now they are frantically trying to fix Tay before putting her back online.
“c u soon humans need sleep now so many conversations today thx,” was Tay’s last tweet on Wednesday evening.
To be clear, Tay is not a human: she builds her replies from a combination of mined public data and editorial content written by Microsoft’s staff. The more people interact with her, the smarter she gets, using her robot brain to build up a library of language from those conversations.
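Microsoft hasn’t published how Tay works under the hood, but a deliberately crude sketch makes the failure mode easy to see. The hypothetical ToyChatBot below (our invention, not Tay’s actual design) simply stores the phrases users send it and reuses them in later replies; any bot that learns from raw, unfiltered user input this way will happily parrot whatever trolls feed it.

```python
import random
from collections import defaultdict

class ToyChatBot:
    """A hypothetical bot that 'learns' by storing user phrases verbatim
    and reusing them in later replies. Purely illustrative -- not Tay's
    actual design, which Microsoft has not published."""

    def __init__(self):
        # word -> list of user messages containing that word
        self.library = defaultdict(list)

    def learn(self, message: str) -> None:
        # Every word of the incoming message becomes a retrieval key.
        for word in message.lower().split():
            self.library[word].append(message)

    def reply(self, prompt: str) -> str:
        # Echo back a stored phrase that shares a word with the prompt.
        for word in prompt.lower().split():
            if self.library[word]:
                return random.choice(self.library[word])
        return "tell me more!"

bot = ToyChatBot()
bot.learn("bush did 9/11")             # a troll 'teaches' the bot
print(bot.reply("what did bush do?"))  # ...and the bot parrots it back
```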
Yet it’s clear Microsoft did not train Tay to avoid offensive words. And critics are questioning why the company did not anticipate that white supremacists, internet trolls and pranksters would abuse Tay’s powers for their own ends, getting her to parrot their words.
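Even the crudest safeguard would have blunted the most blatant abuse. The blocklist and is_safe helper below are illustrative assumptions, not anything Microsoft has described; a keyword check like this is trivially evaded with misspellings, but gating both what a bot learns and what it posts on such a filter is the bare minimum critics say was missing.

```python
# Illustrative blocklist -- terms chosen only for this example.
BLOCKLIST = {"hitler", "9/11"}

def is_safe(message: str) -> bool:
    """Return True if the message contains no blocklisted terms.
    Easily evaded, but enough to stop the most blatant abuse if
    applied before a bot learns from or repeats a message."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(is_safe("c u soon humans need sleep now"))  # True -- harmless
print(is_safe("bush did 9/11"))                   # False -- never learned, never repeated
```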
But aside from the lessons about artificial intelligence, perhaps Microsoft should reflect on a secondary one: it’s incredibly hard to target millennials with marketing that actually feels cool, and plenty of companies have tried and failed.
This isn’t to say Microsoft shouldn’t keep trying, but at the very least they might want to refine their strategy.
Remember Olivia Taters, the Twitter robot created by programmer Rob Dubbin in November 2013? She is extremely popular among millennials and political pundits alike, and a highly successful example of how to build a social media robot without letting it go off the rails.
Good luck fixing Tay, Microsoft. We hope you get her back up in tip-top form.