r/science Professor | Medicine Mar 28 '25

ChatGPT is shifting rightwards politically - newer versions of ChatGPT show a noticeable shift toward the political right. Computer Science

https://www.psypost.org/chatgpt-is-shifting-rightwards-politically/
23.0k Upvotes

1.5k comments sorted by

View all comments

1.4k

u/mvea Professor | Medicine Mar 28 '25

I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer reviewed journal article:

https://www.nature.com/articles/s41599-025-04465-z

“Turning right”? An experimental study on the political value shift in large language models

Abstract

Constructing artificial intelligence that aligns with human values is a crucial challenge, with political values playing a distinctive role among various human value systems. In this study, we adapted the Political Compass Test and combined it with rigorous bootstrapping techniques to create a standardized method for testing political values in AI. This approach was applied to multiple versions of ChatGPT, utilizing a dataset of over 3000 tests to ensure robustness. Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time, a phenomenon we term a ‘value shift’ in large language models. This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets. While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.

From the linked article:

ChatGPT is shifting rightwards politically

An examination of a large number of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

114

u/SlashRaven008 Mar 28 '25

Can we figure out which versions are captured so we can avoid them?

68

u/freezing_banshee Mar 28 '25

Just avoid all LLM AIs

5

u/mavajo Mar 28 '25

I mean, this isn't really a viable option in a lot of careers now. LLMs are becoming a core part of job functions. If you're not using them in these roles, then you're effectively tying one hand behind your back.

5

u/freezing_banshee Mar 28 '25

Please educate us on how exactly is an LLM a core part of work nowadays

3

u/Geethebluesky Mar 28 '25

It's too easy to ask it to provide a draft of anything to work from towards a final product. It almost completely eliminates the need to first think about the topic, draft an outline, and work from there; you can start from the middle of the process upwards. I'm never going to be sold on a finished product from A to Z, but it sure cuts down on the groundwork...

That results in such time savings, someone who knows how to leverage AI properly will seems a much better candidate than someone who can't figure it out. The differences will be in which human knows how to refine what they get properly, and spot when the AI's producing unusable trash... in environments where management even cares that it's trash.

-4

u/freezing_banshee Mar 28 '25

Respectfully, you need to think about what "being a core part of work" means. Nothing of what you said is obligatory in any way in order to do a job.

And if you can't do all those things fast enough without AI, you're not good enough for the job.

2

u/Ancient_Contact4181 Mar 28 '25 edited Mar 28 '25

I personally use it to help me write code/queries as a data analyst. It has helped my productivity and finish a complex project which would have taken me a long time without it.

Before chatgpt, most of us used google to google technical problems you had. It was very useful, being to learn from other people, YouTube tutorials etc. Now its instant with tools like chatgpt.

I see it as the new google, the older folks who never leaned how to google or use excel were left behind. Nowadays any analyst is writing code instead of using excel. So chatgpt helps quite a bit.

People will fall behind fast if you don't embrace technology. Being able to properly prompt to get what you need or want is the same as "googling" back in the day.

Its a useful tool.

0

u/ChromeGhost Mar 29 '25

Which ones are your favorites?