Dexter Langford

Imagine waking up one day to find that an algorithm has entangled your name in a digital whirlwind of chaos, tossing around accusations like confetti at a parade. Well, that’s exactly what activist Robby Starbuck says he is facing after Meta’s AI decided to take creative liberties with the narrative about him.

In a plot twist reminiscent of a bad thriller, Starbuck claims Meta’s AI is spreading “provably false and defamatory statements” about his alleged participation in the January 6th Capitol riot and even, *ahem*, suggests he was arrested for a misdemeanor—like a celebrity making headlines for all the wrong reasons. Talk about taking things out of context!

The irony? Meta’s global policy chief, Joel Kaplan, stated earlier this year that their products wouldn’t be held back by pesky things like fact-checkers. Oops! How’s that working out?

So, as the dust settles in this murky battlefield of misinformation, one can’t help but wonder: shouldn’t AI at least get a crash course in fact-checking before hitting ‘publish’? Or are we just signing up for more thrilling episodes of AI Gone Wild? What do you think—should companies like Meta be held accountable for their AI’s fabrications?

