Microsoft’s artificial intelligence-powered image generator, Bing Image Creator, has recently sparked controversy by producing disturbing and insensitive scenes involving popular cartoon characters and iconic landmarks.
In a scene that seemed like a terrible pastiche of the 9/11 terror attack, children’s TV favorites such as SpongeBob SquarePants and Mickey Mouse were depicted flying planes toward skyscrapers resembling the Twin Towers in downtown Manhattan. These shocking images, which 404 Media both generated and reported on, have raised concerns about the limitations and ethical boundaries of AI-generated content.
Microsoft has implemented strict guidelines for Bing Image Creator, including restrictions on depictions of real people and on violent imagery. However, some prompts still produced inappropriate and offensive results.
Microsoft has taken action to address the controversy surrounding the Twin Towers images. According to reports, the company has blocked specific prompts on Bing Chat and Bing Image Creator that could lead to the creation of insensitive or distressing content.
In an emailed statement to The Verge, Caitlin Roulston, Microsoft’s director of communications, said the company is committed to improving its systems to prevent the generation of harmful content.
“As with any new technology, some are trying to use it in ways that were not intended, which is why we are implementing a range of guardrails and filters to make Bing Image Creator a positive and helpful experience for users,” Roulston said.
As discussions continue, the incident serves as a reminder of the complexities and challenges involved in developing AI technologies. While AI-powered tools like Bing Image Creator have the potential to offer innovative and helpful solutions, proper safeguards and ethical standards must be upheld to ensure responsible use and to prevent similar incidents in the future.