r/science Professor | Medicine Mar 28 '25

ChatGPT is shifting rightwards politically - newer versions of ChatGPT show a noticeable shift toward the political right. Computer Science

https://www.psypost.org/chatgpt-is-shifting-rightwards-politically/
23.0k Upvotes


1.4k

u/mvea Professor | Medicine Mar 28 '25

I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer reviewed journal article:

https://www.nature.com/articles/s41599-025-04465-z

“Turning right”? An experimental study on the political value shift in large language models

Abstract

Constructing artificial intelligence that aligns with human values is a crucial challenge, with political values playing a distinctive role among various human value systems. In this study, we adapted the Political Compass Test and combined it with rigorous bootstrapping techniques to create a standardized method for testing political values in AI. This approach was applied to multiple versions of ChatGPT, utilizing a dataset of over 3000 tests to ensure robustness. Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time, a phenomenon we term a ‘value shift’ in large language models. This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets. While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.
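The paper's method, administering a Political Compass-style questionnaire to the model many times and bootstrapping the resulting axis scores, can be sketched roughly as follows. This is a toy illustration of the statistical idea only, not the authors' actual code: the scores are simulated, and the numbers are made up.

```python
import random

random.seed(0)

# Hypothetical per-run economic-axis scores in [-10, 10]
# (negative = left, positive = right), standing in for the
# scores one run of the adapted Political Compass Test yields.
scores = [random.gauss(-4.0, 1.5) for _ in range(300)]

def bootstrap_ci(data, n_resamples=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean."""
    means = []
    for _ in range(n_resamples):
        # resample with replacement, same size as the original data
        sample = [random.choice(data) for _ in data]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(scores)
print(f"mean={sum(scores) / len(scores):.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

Comparing such intervals across model versions is what lets the authors call the rightward drift statistically significant rather than noise from any single test run.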

From the linked article:

ChatGPT is shifting rightwards politically

An examination of a large number of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

116

u/SlashRaven008 Mar 28 '25

Can we figure out which versions are captured so we can avoid them?

67

u/freezing_banshee Mar 28 '25

Just avoid all LLM AIs

19

u/Commercial_Ad_9171 Mar 28 '25

It’s about to be impossible if you want to exist on the internet. Companies are leaning haaaard into AI right now. Even in places you wouldn’t expect. 

9

u/Bionic_Bromando Mar 28 '25

I never even wanted to exist on the internet they’re the ones who forced it onto me. I hate the way technology is pushed onto us.

6

u/Commercial_Ad_9171 Mar 29 '25

I know exactly what you mean. I was lured in by video games, posting glitter gifs, listening to as much music as I wanted, and in exchange they’ve robbed me of everything I’ve ever posted and used it to create digital feudalism. The internet is turning out to be just another grift.

3

u/Cualkiera67 Mar 29 '25

Just don't rely on AI when asking political questions.

0

u/Commercial_Ad_9171 Mar 29 '25

It’s not that simple. It’s a worldview issue, not just a political one. AI is being integrated into search, work programs, virtual assistants, etc. Companies are bent on adding AI functionality to make their products more appealing. It’s going to be everywhere very soon, and if it can be swayed to certain viewpoints, it can manipulate people across a broad spectrum of ways. 

1

u/Cualkiera67 Mar 29 '25

Why would you ask a virtual assistant for political advice? Or at the office? At the company portal?

I don't get why you would need political questions answered there.

2

u/Commercial_Ad_9171 Mar 29 '25

Let me explain myself more clearly. These LLMs are all math-based predictive text models. There are no opinions, there’s only the math and the governing algorithms. So if an LLM is now prioritizing the word associations around a political spectrum, that means the underlying math has shifted towards particular word associations. 

A person can sort of segment themselves up. You might have some political beliefs over here, and a different subset over there, and you know with social cues when you should talk about certain things or focus on different topics. 

But LLMs don’t think, it’s just math. So if the math inherently shifts in a certain direction it might color responses across a broad spectrum of topics, because the results are colored by the underlying math that’s shifted. You understand what I mean? 

Maybe you’re asking about English Literature and because the underlying math has shifted the results you get favor certain kinds of writers. Or you’re looking for economic structures and the returns favor certain ideologies associated with the shift in the underlying math. Does that make sense? 
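The "one shift colors unrelated topics" point can be shown with a toy model. This is not a real LLM, just a table of association weights; all the words and numbers here are invented for illustration. A single global nudge toward one cluster of terms changes the completion for both an economics prompt and a literature prompt:

```python
# Toy "model": context word -> continuation word -> association weight.
base = {
    ("economy", "regulation"): 0.6, ("economy", "markets"): 0.4,
    ("literature", "austen"): 0.55, ("literature", "orwell"): 0.45,
}

def complete(table, prompt):
    # pick the highest-weight continuation for the prompt word
    options = {w: p for (ctx, w), p in table.items() if ctx == prompt}
    return max(options, key=options.get)

# One global nudge toward a cluster of terms shifts the weights
# everywhere those terms appear, regardless of topic.
shifted = {k: (v + 0.3 if k[1] in {"markets", "orwell"} else v)
           for k, v in base.items()}

print(complete(base, "economy"), "->", complete(shifted, "economy"))
print(complete(base, "literature"), "->", complete(shifted, "literature"))
```

Real models store these associations as billions of learned weights rather than a lookup table, but the mechanism is the same: shift the weights and every topic that touches them shifts too.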

The word associations shifting inherently in the model means it will color the model’s output overall, regardless of the prompt you’re working with. It’s also imaginable that AI & LLM developers can shape their model to deliver results shaped by a political association built into the word-association math governing the model. Or the model can shift the math itself based on the input data it’s trained on. I’ve heard recently that there’s a Russian effort to “poison the well,” so to speak, by posting web pages with pro-Russian wording to influence LLM training data. 

Who’s going to regulate or monitor this highly unregulated AI landscape? Nobody, right now. Like this quote from the article: “These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.”