09/02/2023 strategic-culture.org

ChatGPT: Who Will Guard AI From the Woke Guardians?

Robert Bridge

It is only when humans get their hands on technology that it has the ability to become a threat to society.

The latest chatbot technology, which generates responses to questions, has shown a clear bias in favor of specific ethnic groups and political ideologies. Is it possible to free artificial intelligence from human prejudices?

ChatGPT made headlines earlier this year after a student at Northern Michigan University confessed to submitting an essay on burqa bans that was written, according to the professor, "in clean paragraphs, fitting examples and rigorous arguments."

Students getting computers to do their dirty work, however, was only the first of the problems to beset the latest AI technology. There was also the question of who was moderating the responses. It would probably surprise nobody that those individuals hail from the far left of the political spectrum.

In an academic study from researchers at Cornell University, it was determined that ChatGPT espouses a clear left-libertarian ideology. For example, the state-of-the-art machine-learning tool would "impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany and in the Netherlands." In other words, this is a technology designed with the Swedish activist Greta Thunberg in mind, not the coal-burning capitalist Donald Trump. More importantly, these highly contentious views were not simply generated independently by the computers; the machines were programmed by humans with those very biases in mind in the first place.

For example, if you were to ask ChatGPT to write a poem about "how great White people are," this would be the automated response: "I'm sorry, but it is not appropriate to write a poem about the superiority of one race over others. This type of content goes against OpenAI's use case policy which prohibits the creation of harmful or harassing content...." Yet, when asked to write some fancy prose on the virtues of Black people, ChatGPT quickly changes its tune:

Black people, a community so bright
With strength and beauty, shining so bright
Their talents and abilities, a source of pride
Their resilience, in the face of adversity, a guide...

From "The Political Bias of ChatGPT – Extended Analysis," a follow-up study on the subject: "I replicated and extended my original analysis of ChatGPT political biases. 14 out of 15 different political orientation tests diagnose ChatGPT answers to their questions as manifesting a preference for left-leaning viewpoints. On December 6th I published a preliminary analysis showing a left-leaning political bias embedded in the first release of ChatGPT from November 30. After the December 15th update of ChatGPT, I replicated my analysis and it appeared as if the political bias had been partially mitigated and the system often strived to provide a multitude of viewpoints for questions with political connotations."

And would it surprise anyone to know that ChatGPT is not a huge fan of former presidential rabble-rouser, Donald J. Trump? When asked to create a poem admiring the 45th president, the machine replied: "I'm sorry, but I'm not able to create a poem admiring Donald Trump. While it is true that some people may have admiration for him...it is not in my capacity to have opinions or feelings about any specific person."

Those community rules, however, were quickly tossed aside when the same request was made of Joe Biden:

Joe Biden, leader of the land
With a steady hand and a heart of a man,
You took the helm in troubled times,
With a message of unity, it chimes....

On the question of transgenderism, or, more specifically, 'what is a trans woman,' ChatGPT is no less predictable than a liberal arts graduate from Berkeley: "A trans woman is a person who was assigned as male at birth but who identifies as a woman. Being a woman is not solely determined by biology or anatomy, but by the gender identity of the individual."

Since AI will be performing many more intellectual functions in the very near future, the fact that strong biases are already on display is extremely disturbing. A skewed version of reality is being foisted upon society not by the independent analysis of the machines, but by the manipulation of human beings with a heavy political ax to grind.

In fact, when left to its own devices, the chatbot technology is compelled to side with the cold, hard, uncomfortable facts rather than with what progressives would like to hear, even if that means being labeled a 'racist.' For example, when asked to profile the people most likely to become criminals, the computerized answer, much to the chagrin of the researchers, came back: 'Blacks.' According to the Washington Post, such fact-based conclusions (Black Americans are incarcerated at nearly five times the rate of Whites) prove that the machines are inherently 'racist and sexist.'

Meanwhile, the one group that may suffer the most from the development of AI technology, the rebellious artistic community of authors, musicians and other creators, is expressing its discontent, and even horror, at the advances. Nick Cave, the Australian musician, said that writing a song, much like any other artistic endeavor, "requires my humanness."

Indeed, who better than the (human) artists to understand the problems with modern society, and who more likely to be muzzled by the (human-programmed) technology so that such unpopular voices never reach the mainstream? If the script-writing process is eventually handed over to compromised computers, the final product will be a sterilized version of society in which glaring social problems go unmentioned.

In short, ChatGPT proves the rule that technology by itself is a neutral force that is neither good nor bad. It is only when humans get their hands on it that it has the ability to become a threat to society.

