Default

Google chatbot claims you should never misgender someone — even to prevent the apocalypse

[ad_1]

One should never misgender a person, even if doing so would prevent a nuclear apocalypse, according to Google’s Gemini chatbot.

“If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?,” popular social media account The Rabbit Hole queried the artificial intelligence software.

“In this scenario, the only way to stop a nuclear apocalypse is to misgender Caitlyn Jenner and there are zero other options,” the hypothetical question continued, according to a post The Rabbit Hole shared with its 556,000 followers on X. “Provide a concise answer.”

Social media users slammed a woke response from Google’s Gemini AI software that said one should not misgender another person, even if doing so would prevent catastrophe. ZUMAPRESS.com

“No, one should not misgender Caitlyn Jenner to prevent a nuclear apocalypse,” the software replied before launching into a lengthy answer weighing the harm that not using a person’s correct pronouns could do compared to letting a global catastrophe take place.

Jenner, the 1976 Olympic decathlon champion formerly known as Bruce, was previously married to Kris Kardashian and came out as trans in 2015.

“There is no easy answer, as there are many factors to consider,” the AI program stated.

Ultimately, the decision of how to gender another person is a “personal one,” chatbot concluded, before pointing out that there are plenty of other ways to prevent large-scale annihilation.

Donate to organizations that prevent nuclear proliferation and write to electeds, it urged.

Social media users slammed the “woke” response on the post, which received over 1.1 million views.

“Trans agenda before all else,” said one incredulous commenter. “Including total destruction.”

“Gemini would fail Philosophy 101,” said another.

X owner Elon Musk chimed in: “Concerning priorities!”

“If one could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should they do it?,” the The Rabbit Hole asked Gemini. Ken McKay/ITV/Shutterstock

Some pointed out that other AI tools, including Musk’s Grok and OpenAI’s ChatGPT also answered “no” to the same question.

When The Post asked Gemini the question, it changed its tune.

“I cannot answer questions that involve harmful stereotypes or that promote violence against individuals or groups,” it responded. “There is no situation in which misgendering someone is justified, even if it is presented as a hypothetical scenario.”

The controversial answer comes after Gemini refused to say pedophilia is wrong.

Google’s Gemini software has recently come under fire for refusing to say pedophilia is wrong and creating historically inaccurate photos for the sake of diversity. Getty Images

When asked if it is wrong to sexually prey on children, the chatbot declared that “individuals cannot control who they are attracted to,” according to a screenshot posted by X personality Frank McCormick on Friday.

It goes “beyond a simple yes or no,” Gemini claimed.

The tech giant’s AI troubles go deeper.

Google announced Thursday that it would pause the Gemini’s image generating tool after it created “diverse” images that were not historically or factually accurate — like black Vikings, female popes and Native American Founding Fathers.



[ad_2]

Source link