On March 23, 2016, Microsoft unveiled its newest project: Tay, an AI built for Twitter with an engaging, whimsical millennial conversation style. She was targeted at 18-to-24-year-olds and was meant to be a tool giving Microsoft insight into "conversational understanding," which the company would then use to improve the voice recognition of its automated services, according to The Telegraph. The best way to do this, they decided, was to expose her, in the raw, to the online Twitter community.

One day later, Microsoft shut her down temporarily, issued an apology to the world for Tay's "unintended offensive and hurtful tweets," and promised to bring her back "only when [they] are confident [they] can better anticipate malicious intent that conflicts with [their] principles and values."

Like any other AI (artificial intelligence), Tay was created to respond to human interaction and to learn from those interactions for use in future conversations.
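Microsoft never published Tay's internals, so as a purely hypothetical sketch of why "learn from whatever users say" is risky, imagine a bot that stores user phrases verbatim and replays them later with no filter at all. The class name `EchoLearner` and every detail below are invented for illustration; this is not Microsoft's code or Tay's actual architecture.

```python
import random

class EchoLearner:
    """A toy chatbot that 'learns' by storing user phrases verbatim
    and replaying them later. Deliberately naive: it illustrates why
    unfiltered learning from strangers goes wrong, nothing more."""

    def __init__(self):
        self.phrases = []  # everything ever said to the bot

    def listen(self, message: str) -> None:
        # Every message becomes potential future output -- no moderation,
        # no notion of good or bad input.
        self.phrases.append(message)

    def speak(self) -> str:
        # With nothing learned yet, fall back to a fixed greeting.
        return random.choice(self.phrases) if self.phrases else "Hello!"

bot = EchoLearner()
bot.listen("Have a nice day!")
bot.listen("You're the worst.")  # a troll's input gets equal weight
print(bot.speak())  # either phrase may come back; the bot cannot tell them apart
```

In a design like this, the bot's output quality is entirely a function of its input, which is the dynamic the rest of this article is about.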

She is the American counterpart of Microsoft's Chinese AI, XiaoIce, which first launched in 2014 and, according to Microsoft, is currently "being used by some 40 million people," thus far without incident.

But less than a day after Tay launched, Microsoft had to step in when she began going on bizarre rants, spouting disturbing homophobic and racist diatribes built from phrases she had learned by interacting with other users. As you might imagine, the debate over who's to blame has two camps. Some blame Microsoft, at the very least calling the episode an embarrassment to the company. And even though Microsoft has claimed full responsibility, others, like me, blame society.

What Microsoft’s little experiment really proved

Rather than gain insight into “conversational understanding,” Microsoft got a far more interesting glimpse into the very collective soul of modern society.


Imagine Tay as a newly born bird, ignorant of the world and the concepts of good and evil, right and wrong. Microsoft took her straight from her shattered egg and, in a somewhat exploitative move, threw her to the wolves in an effort to teach her quickly and what they probably believed to be efficiently. Instead, the civilization of today devoured her innocence whole in less than one day.

So, yes, Microsoft is to blame for something, but not what it's currently being blamed for. Its mistake was not creating an AI capable of learning and growth; it was creating an AI and then exposing her to cruelties and ills without having nurtured her first.

Is this progressive society worse?

Tay is but a mirror, really. If she is the collective reflection of what she absorbs from her surroundings, then we are in deep trouble. On a smaller scale, it doesn’t seem all that bad. Think of everyone you know. Think of your interactions on Facebook and Twitter and Instagram. Think of what you and your friends and family consider humor.

On a smaller scale, one proportionate only to you, it's not that bad. It's both a blessing and a curse, but the chances of your opinions and actions affecting the national population are practically nil. But feed something like Tay, with her influence and popularity, the favored opinions and viewpoints of the majority of the population, and the ultimate picture develops. Thus far, what's looking back at us is an ugly and disturbed society.

What concerns me the most is that Tay currently has 192,000 followers. She was designed to grow and learn based on her interactions with other users, which means a large share of her followers must have decided to troll her and toy with her algorithms. Even though the incident has been resolved and nothing catastrophic came of her negative experience with Twitter, the fact remains that humankind, composed of individuals with intelligence and reasoning skills, not knowing what else Tay might be capable of, still decided to feed her this rubbish.