Microsoft’s new AI editors already facing backlash over racism story

Kareem Anderson

For the past decade, the tech industry has extolled the virtues of artificial intelligence and its place in the future of humanity. Unfortunately, those benefits have not extended equally to everyone, and a recent Microsoft gaffe once again highlights AI's continued blind spot for people of color.

As tensions run high in the US following the killings of several people of color by police officers in recent months, Microsoft’s new AI publishing software pushed out a story about the personal effects of racism on bi-racial singer Jade Thirlwall but featured an image of the wrong mixed-race performer.

Image: Leigh-Anne Pinnock and Jade Thirlwall

While the story was about Jade and her reflections on racism, the featured image showed Leigh-Anne Pinnock, another mixed-race member of the band Little Mix. According to Jade, the mix-up is a recurring one, and she called Microsoft out directly on social media.

“@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

“This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke,” she said. “It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

The blunder comes just days after news that Microsoft was in the midst of laying off hundreds of human editorial staff in favor of its own AI editing platform.

Despite the tech industry’s bullish ambitions for AI, the technology continues to have a spotty record with people of color, with reports that even the best facial-recognition algorithms struggle to identify black faces as accurately as white ones, and Microsoft’s own AI bot Tay having used racially insensitive rhetoric comparing black people to monkeys.

When confronted by The Guardian about the bungled article in light of its recent editorial downsizing, a Microsoft spokesperson responded: “As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image.”

Unfortunately, Microsoft’s AI editorial mistake doesn’t seem to have given the company pause about shifting to its new news publishing process; instead, the remaining staff have been instructed to clean up the messes made by their AI replacements.

Because they are unable to stop the new robot editor from selecting stories from external news sites such as The Guardian, the remaining human staff have been told to stay alert and delete a version of this article if the robot decides it is of interest and automatically publishes it to MSN. They have also been warned that even if they delete it, the robot editor may overrule them and attempt to publish it again.

As the news continues to highlight protests, race, and injustice in the United States, particularly around black lives, Microsoft’s new AI will definitely be working overtime to correctly match black faces to the stories that surface. The question is, how often will it succeed?