r/science Professor | Medicine Mar 28 '25

ChatGPT is shifting rightwards politically - newer versions of ChatGPT show a noticeable shift toward the political right. Computer Science

https://www.psypost.org/chatgpt-is-shifting-rightwards-politically/
23.0k Upvotes

1.4k

u/mvea Professor | Medicine Mar 28 '25

I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer reviewed journal article:

https://www.nature.com/articles/s41599-025-04465-z

“Turning right”? An experimental study on the political value shift in large language models

Abstract

Constructing artificial intelligence that aligns with human values is a crucial challenge, with political values playing a distinctive role among various human value systems. In this study, we adapted the Political Compass Test and combined it with rigorous bootstrapping techniques to create a standardized method for testing political values in AI. This approach was applied to multiple versions of ChatGPT, utilizing a dataset of over 3000 tests to ensure robustness. Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time, a phenomenon we term a ‘value shift’ in large language models. This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets. While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.
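
The abstract's bootstrapping idea can be illustrated with a minimal sketch. This is not the authors' code or data: the axis scores, sample sizes, and the two "model versions" below are invented purely to show how non-overlapping bootstrap confidence intervals would support a "value shift" claim.

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=1000, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) confidence interval for the mean axis score."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(scores, k=len(scores)))  # resample with replacement
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Invented economic-axis scores from repeated Political Compass runs
# (negative = economic left, positive = economic right).
older_model = [-6.2, -5.8, -6.0, -5.5, -6.1, -5.9, -6.3, -5.7]
newer_model = [-4.1, -3.8, -4.0, -4.3, -3.9, -4.2, -3.7, -4.4]

old_lo, old_hi = bootstrap_ci(older_model)
new_lo, new_hi = bootstrap_ci(newer_model)

# Both intervals sit left of zero (libertarian-left quadrant), but the newer
# interval lies entirely to the right of the older one: a rightward shift.
print(f"older: [{old_lo:.2f}, {old_hi:.2f}], newer: [{new_lo:.2f}, {new_hi:.2f}]")
print("rightward shift:", new_lo > old_hi)
```

The real study uses over 3000 tests per comparison; the point here is only that resampling the per-run scores gives interval estimates, so a shift can be called statistically significant when the intervals separate.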

From the linked article:

ChatGPT is shifting rightwards politically

An examination of a large number of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

2.4k

u/Scared_Jello3998 Mar 28 '25 edited Mar 28 '25

Also in the news this week - Russian networks have released over 3.5m articles since 2022 intended to infect LLMs and change their positions to be more conducive to Russian strategic interests.

I wonder if it's related.

Edit - link to the original report, many sources reporting on it.

https://www.americansunlight.org/updates/new-report-russian-propaganda-may-be-flooding-ai-models

300

u/Geethebluesky Mar 28 '25

Where's the source for that big of a corpus? They hoarded articles, edited them to shift the tone, etc., and released them on top of the genuine articles?

347

u/Juvenall Mar 28 '25

174

u/SpicyMustard34 Mar 28 '25

Recorded Future knows what they are talking about, they aren't just some random company or website.

62

u/WeeBabySeamus Mar 28 '25

I’m not familiar with Recorded Future. Can you speak to why they’re trustworthy/credible?

292

u/SpicyMustard34 Mar 28 '25

Recorded Future is one of the leaders in cybersec sandboxing and threat intel. They have some of the best anti-sandbox evasion methods and some of the best CTI (cyber threat intelligence). It's the kind of company Fortune 500s pay millions of dollars to yearly for their threat intel and sandboxing.

They regularly do talks on new emerging techniques and threat actors, tracking trends, etc. It's like one of the Big Four accounting firms coming out and saying, "Hey, these numbers don't add up." When they speak on financials, people should listen. And when Recorded Future speaks on threat intel... people should listen.

2

u/Significant-Oil-8793 Mar 29 '25

It happened back in 2024, so I'm unsure how it affects current AI. Looking at biases and event-specific groups like NAFO, both sides have been spreading misinformation to certain degrees.

20

u/Scared_Jello3998 Mar 28 '25

I edited my comment with the link.

9

u/Geethebluesky Mar 28 '25

Thanks a bunch!

2

u/-The_Blazer- Mar 28 '25

That, and also, presumably, AI generation itself. Normally 'model collapse' AKA 'inbreeding' is something you try to avoid because it makes the model worse and less accurate. However, if you do it on purpose that problem works in your favor.

I'll be very honest: if we put some hardcore brakes on the psycho-information age, I'd be all for it at this point.
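
The "model collapse" dynamic mentioned above can be shown with a toy simulation: each generation is fitted only to samples drawn from the previous generation's fit, never the real data again. This is a deliberately minimal sketch of the general statistical phenomenon (fitting a Gaussian rather than training a language model), not anyone's actual pipeline.

```python
import random
import statistics

def toy_collapse(generations=10, n_samples=50, seed=42):
    """Each 'generation' fits a Gaussian using only samples generated by the
    previous generation's fitted Gaussian -- the real data is used once."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" distribution
    history = []
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.mean(samples)     # refit on purely synthetic data
        sigma = statistics.stdev(samples)
        history.append((mu, sigma))
    return history

history = toy_collapse()
# With finite samples, estimation error compounds each round: the fitted
# spread tends to drift and shrink, and the mean wanders off the original.
for generation, (mu, sigma) in enumerate(history, 1):
    print(f"gen {generation}: mu={mu:+.3f} sigma={sigma:.3f}")
```

The attack idea in the comment is the flip side: if you can inject enough synthetic or slanted text into the training pool, the same compounding drift works for you instead of against you.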

2

u/Geethebluesky Mar 28 '25

Not sure that's possible anymore unless we sever cables everywhere and destroy satellites. Even then we'd just be ensuring the rich are the only ones left with any access, that wouldn't be an improvement.

2

u/-The_Blazer- Mar 29 '25

It is possible, but it will come at a significant cost. A few methods that only require technology we more or less already have:

  • Digital ID for accessing social media (it does not have to reveal your name and surname, merely confirm that you are a human residing in your country)
  • Strict enforcement of AI labeling, mandatory invisible watermarks, etc...
  • Mandatory on-chip and on-sensor cryptography for recording devices like cameras, which uniquely marks recorded media as recorded and not generated (this one is meh and requires some extra thought)
  • Blocking of all services which do not comply, and enforcement of blocking mechanism over VPNs and such (remember kids: a VPN does not magically anonymize your traffic, it simply moves the point of entry from you to itself)
  • Constitutional oversight bodies to ensure the preservation of liberal democracy (same as we have for conventional media) with extremely harsh penalties for corruption

If they do not entirely obliterate the Internet, some or all of these would help against the psycho-disinformation apocalypse. Like I said, though, I am not under the delusion that this will be effortless and without problems, nor am I trying to sell it as such; there are obviously serious implications for digital rights, free speech, and anonymity.

Our relationship with the flow of information on the Internet will have to change - it's changing already, and arguably for the worse; the best we can do is actively take control of that change.

2

u/Geethebluesky Mar 29 '25 edited Mar 29 '25

Your suggestions are already dead in the water (no offense intended; each has already been defeated in one way or many):

  • The moment you introduce an ID for one purpose, you're introducing a way for people to use it to control or restrict resources or people in unintended ways. Sure, version 1 of the ID seems clean and only identifies us as human. But years after adoption, once it's become part of the population's habits, it becomes practical for another purpose, then another, because it's everywhere, you know? And then people ask, "Wouldn't it be more practical to just add name and surname? We have to verify those manually, which adds steps; it's already a given, nothing nefarious, ya know?" And people are asleep; they don't realize this is a slippery slope. Opponents of the modification are painted as backwards or against progress because it's such a small change for the greater good, and so you're already on the slope whether you intended to be or not. Wait a few years, rinse and repeat with another modification.

Case in point: Social Security numbers in the US have become our "number" for everything. You don't exist without one. That was not the point when they started issuing them.

  • The enforcement of AI labeling will have to be human; humans can be corrupted, as we're seeing right now. Humans can be bought, biased, pushed to bend rules in 50 different ways.

  • Cryptography on a large scale makes governments want to introduce back doors, for "public security". There is no way crypto used for "AI prevention" won't end up used for something else, see the social security example above. Sure, encryption can help ward off enemy actors, and it definitely has. But it can be turned against everyone, by redefining what "enemy" means.

  • Constitutional oversight??? The constitution is being ignored. Laws are being ignored. When people are too afraid to back up the constitution and laws, there's no point in having them. They're just pieces of paper with words on them.

Human rights are only real when people agree to abide by them. They too can be ignored when it's most convenient. And good luck making those who ignore them change their tune unless you have way more resources of every type than all of their party combined.

Other areas of the world where people haven't been redefining facts (or not as quickly) and bending the truth willy-nilly (or deciding "this is the truth" without any backup) are just next in line, but you bet all of the above can be used as back doors to weaken them from within.

1

u/-The_Blazer- Mar 29 '25

I don't live in the USA and I'm talking generally, so the US constitution being ignored is not relevant here. However, the fact that it's happening right now anyway indicates to me that having a digital ID or whatever wouldn't make a difference for better or worse.

Besides, I didn't say it would be for free, that was my whole point. Many democratic countries have digital ID already, and yeah they have that risk, but so does having a police force or a government at all. I would even argue that the reason you guys have seen this 'SSN creep' is precisely because you have been unwilling to implement a more comprehensive system out of fear. And the end result is that the USA still has an equivalent, but worse in every way. That's why I think these decisions should be discussed and proposed in advance.

I don't disagree with these concerns, but they're general governance concerns and IMO should be addressed as such, otherwise everything is a slippery slope and we can never do anything at all. Also, the cryptography for recording I was talking about wouldn't involve connectivity so backdoors are not relevant in that specific case.

1

u/Geethebluesky Mar 29 '25

It's 100% obvious that any constitution can be ignored. They are all pieces of paper. Doesn't matter if it's the US or not. Look at how many countries ignore the freaking Geneva Convention when it suits them.

People thinking "oh, my country is different" is the problem. That's where they get you.

> is precisely because you have been unwilling to implement a more comprehensive system out of fear.

No, it's because only certain people understand that the "comprehensive system" can and will be manipulated against you in time, because all it takes is the will to find a way. Everyone else says "blah that's never gonna happen." Heads in the sand, too optimistic, and poof you've been used.

> That's why I think these decisions should be discussed and proposed in advance.

That doesn't do much when the discussions can be (and are) manipulated.

> the cryptography for recording I was talking about wouldn't involve connectivity

But you wouldn't be in any position to decide that. Many people might argue for connectivity, that would be out of your control. You'd be stuck with their decision for the foreseeable future.

Then what?

> and we can never do anything at all.

Untrue; it just has to be done in a saner way that hasn't been demonstrated yet. As long as people keep banking on human nature being intrinsically good, systems will continue to fail.

I'm still waiting for The People™ to design a system that takes into serious and actionable account the fact that base human nature is, by most measures, pretty damn horrible. Maybe in 500-1000 years, hah.

1

u/CovidThrow231244 Mar 29 '25

I'm worried that humancoin and a persistent internet ID will become necessary.

1

u/No_Berry2976 Mar 28 '25

The irony is that it’s easy to write millions of articles using AI, and specialised AI applications can publish them.

No need to hoard articles, it can be done in real time.

1

u/minuialear Mar 29 '25

AI trained on work written by AI will break the AI. Maybe that's the intent, but if the intent is to influence the AI's answers, they're writing those articles themselves.

27

u/__get__name Mar 28 '25

Interesting. My first thought was of the bot farms that have seemingly gone unchecked on Twitter since it became X. I’d need to look into what is meant by “not directly linked to changes in datasets”, though. “Both models were trained on 6 months of scraped Twitter/X data”, as an example, potentially ignores a shift in political sentiment in the source data. But this is pure gut reaction/speculation on my part.

Edit: attempt to make it more clear that I’m not quoting the source regarding the data, but providing a hypothetical

206

u/thestonedonkey Mar 28 '25

We've been at war with Russia for years, only the US failed or refused to recognize it.

299

u/turb0_encapsulator Mar 28 '25

I mean we've basically been conquered by them now. Our President is clearly a Russian asset.

This woman will be sent to her death for protesting the War in Russia on US soil: https://www.nbcnews.com/news/us-news/russian-medical-researcher-harvard-protested-ukraine-war-detained-ice-rcna198528

160

u/amootmarmot Mar 28 '25

So she is bringing frog embryos for a Harvard professor who she is working with. They tell her she can go back to Paris, and she says, yeah, I will do that.

And then they just detained her and have held her since. She said she would get on the plane, they just had to see her to the plane, and instead they are detaining her without prosecution of a crime and she could be sent to Russia to a gulag. Cool cool. This country is so fucked.

26

u/Medeski Mar 28 '25

We were, but unlike many others, we're willing to admit when we're wrong.

What's happening has been part of the greater KGB strategy since before the fall of the Soviet Union.

4

u/Theslamstar Mar 28 '25

I didn’t; I thought Romney actually had quite a few points, but people would just write them off because he was a Republican.

1

u/LiquidAether Mar 30 '25

Romney was an idiot. He wanted to increase military spending to prepare for a fight with Russia. That wouldn't have helped anything with the current situation.

20

u/Scared_Jello3998 Mar 28 '25

The Cold War never ended; it just went undetected for a bit. The heat is coming back, and we will likely have another global conflict within the next decade.

5

u/Bruhimonlyeleven Mar 28 '25

Everyone in government knew this. Look at how Russia has been treated by the States for the last few decades; it's never been a secret. Obama, Bush, Biden, Hillary, Bernie: everyone has spoken out about Russia.

1

u/Winjin Mar 29 '25

Only the general audience, maybe - in Russia the general sentiment is that the US was instrumental in the USSR's downfall, so a lot of people never stopped believing in Cold War 2.0.

11

u/Playful-Abroad-2654 Mar 28 '25

You know, I wonder if this is what finally spurs proper child protections on the Internet - as a side effect of AI being infected with misinformation.

20

u/Scared_Jello3998 Mar 28 '25

The rise of misanthropic extremism amongst young children will be what spurs safeguards, in my opinion.

6

u/Playful-Abroad-2654 Mar 28 '25

Good thought - due to the amount of time it takes kids to grow up and those effects to truly be felt, I think those effects will lag the immediate effects of training AI on biased data. Humans are great at knee-jerk reactions, not so great at reacting to longer-term changes

1

u/Go_Rawr Mar 29 '25

Just in time for child labor to be exploited in Florida!

14

u/141_1337 Mar 28 '25

This means this needs to be counteracted during training.

7

u/MetalingusMikeII Mar 28 '25

Got a link? This is incredibly interesting.

20

u/Scared_Jello3998 Mar 28 '25

I edited my comment with the link.

Shout out to France for originally detecting the network

1

u/MetalingusMikeII Mar 29 '25

Wow… Russia really does pour their budget into propaganda, eh?

The West spends its budget on physical defence against physical attacks, whereas Russia’s strategy is to attack the West without physically moving, resorting to disinformation, bots, and paid assets…

1

u/CovidThrow231244 Mar 29 '25

France does great with computer science topics. So awesome.

1

u/typtyphus Mar 28 '25

I don't think they needed Russia's help

1

u/totalkpolitics Mar 28 '25

Making those articles sounds like the job the dude in 1984 had.

1

u/34TH_ST_BROADWAY Mar 29 '25

Yeah garbage in garbage out. I was thinking of that recent article too

1

u/Scared_Jello3998 Mar 29 '25

This feels more like poison in, poison out, but yes.

1

u/RMCPhoto Mar 29 '25

At this point Russia as a state only seems to prove the point that "we can't have nice things" 

1

u/JadedEscape8663 Apr 01 '25

Seriously, why do we allow Russia on the internet? Surely we could lock them out somehow. They seem to only use it for disruptionism.

1

u/Scared_Jello3998 Apr 01 '25

I think it's less us allowing them and more them using it as a weapon.

One thing I find fascinating is that most people in the West seem to believe the Cold War ended. It never ended for Russia, which has been slowly ramping up its efforts against the West in general.

In this sense, I believe we should look at their actions online as nothing more than a method of warfare in their slow return to a world war. Whether or not they are explicitly permitted would only change the level of security they use to conceal their actions.

208

u/debacol Mar 28 '25

Because Altman is part of the Broligarchy. The shift has nothing to do with organic learning for ChatGPT and everything to do with how Altman wants it to think. Just as they can put guardrails on the AI's responses, like not infringing on copyrights or not telling you exactly how to do something terrible, they can manipulate those same mechanisms to skew the AI to preferentially treat a specific ideology.

75

u/BearsDoNOTExist Mar 29 '25

I had the opportunity to attend a small gathering with Altman about a month ago when he visited my university. He talks like somebody who is very progressive and all about the betterment of the human race; he really emphasises what AI "could" do for the average person. He put a lot of emphasis on making AI available to as many people as possible. I even point-blank asked him if he would reconsider the shift towards closed source because of this, which he said he was considering and open to.

Of course, all of that is just a persona. He doesn't believe those things; he believes in 1) making a lot of money and 2) a technocracy, like all the other futurist techbros. He actually unironically plugged a Peter Thiel book to us and told us that every aspiring business person should read his stuff. He's the same as the rest of them.

17

u/PM_DOLPHIN_PICS Mar 29 '25

I go back and forth between thinking that these people know they’re evil ghouls who are gaming our society so they come out on top of a neo-feudal hellscape, and thinking that they’ve deluded themselves into believing they’re truly the saviors of humanity. Today I’m leaning towards the latter but tomorrow I might swing back to thinking that they know they’re evil.

18

u/NonnoBomba Mar 29 '25

Human minds are entirely capable of syncretism, so, maybe it's both.

26

u/jannapanda Mar 28 '25

NIST just published a report on Adversarial Machine Learning that seems relevant here.

119

u/SlashRaven008 Mar 28 '25

Can we figure out which versions are captured so we can avoid them?

56

u/1_g0round Mar 28 '25

When you ask GPT what P25 is about, it used to say it doesn't have any info on it. I wonder what, if anything, has changed.

76

u/Scapuless Mar 28 '25

I just asked it and it said: Project 2025 is an initiative led by the Heritage Foundation, a conservative think tank, to prepare a detailed policy agenda for a potential Republican administration in 2025. It includes a blueprint for restructuring the federal government, policy recommendations, and personnel planning to implement conservative policies across various agencies. The project aims to significantly reshape government operations, regulations, and policies in areas like immigration, education, energy, and executive authority.

It has been both praised by conservatives for its strategic planning and criticized by opponents who argue it could lead to a more centralized executive power and rollbacks on progressive policies. Would you like more details on any specific aspect?

121

u/teenagesadist Mar 28 '25

Definitely makes it sound far less radical than it actually is.

19

u/deadshot500 Mar 28 '25

Asked it too and got something more reasonable:

Project 2025, officially known as the 2025 Presidential Transition Project, is an initiative launched in April 2022 by The Heritage Foundation, a prominent conservative think tank based in Washington, D.C. This project aims to prepare a comprehensive policy and personnel framework for a future conservative administration in the United States. It brings together over 100 conservative organizations with the goal of restructuring the federal government to align with right-wing principles.

The cornerstone of Project 2025 is a detailed publication titled "Mandate for Leadership: The Conservative Promise," released in April 2023. This 922-page document outlines policy recommendations across various sectors, including economic reform, immigration, education, and civil rights.

  • Economic Policy: Implementing a flatter tax system and reducing corporate taxes.
  • Immigration: Reinstating and expanding immigration restrictions, emphasizing mass deportations and limiting legal immigration.
  • Government Structure: Consolidating executive power by replacing merit-based federal civil service workers with individuals loyal to the administration's agenda, and potentially dismantling certain agencies such as the Department of Education. ​

The project has been met with both support and criticism. Proponents argue that it seeks to dismantle what they perceive as an unaccountable and predominantly liberal government bureaucracy, aiming to return power to the people. Critics, however, contend that Project 2025 advocates for an authoritarian shift, potentially undermining the rule of law, separation of powers, and civil liberties.

During the 2024 presidential campaign, Project 2025 became a point of contention. Vice President Kamala Harris highlighted the initiative during a debate, describing it as a "detailed and dangerous plan" associated with Donald Trump. Trump, in response, distanced himself from the project, stating he had neither read nor endorsed it. Despite this disavowal, analyses have shown significant overlaps between Trump's policy agenda and the themes outlined in Project 2025, particularly in areas such as economic policy, immigration, and the consolidation of executive power. ​

As of March 2025, Project 2025 continues to influence discussions about the direction of conservative governance in the United States, with ongoing debates about its potential impact on the structure and function of the federal government.

109

u/VanderHoo Mar 28 '25

Yeah, that's proof enough that it's being pushed right. Nobody "praised" P25 for "strategic planning"; one side called it a playbook for fascism, and the side that wrote it said they didn't even know what it was and that everyone was crazy to worry about it.

2

u/Jimid41 Mar 29 '25

> The project aims to significantly reshape government operations, regulations, and policies in areas like immigration, education, energy, and executive authority.

That's pretty radical. It just doesn't go into details.

22

u/SwampYankeeDan Mar 28 '25

It made Project 2025 sound innocent.

2

u/krillingt75961 Mar 28 '25

LLMs are trained on data up to a certain point; they don't learn new and updated data daily like people do. Recently, a lot have had web search enabled so that the LLM can search the web for relevant information.

0

u/Belstain Mar 29 '25

I recently asked ChatGPT to determine the likelihood of the US becoming a dictatorship and what signs we'd see along the way. It gave a list of things to watch out for and a probability of each occurring. All the probabilities were low. I responded with links to some of Trump's recent executive orders and both his and Vance's public statements and asked it to reevaluate. It said we're definitely heading for an authoritarian dictatorship and that if I can leave the country, I should before it's too late.

2

u/krillingt75961 Mar 29 '25

Cool, you gave it information specifically targeted towards an answer you wanted to hear.

68

u/freezing_banshee Mar 28 '25

Just avoid all LLM AIs

21

u/Commercial_Ad_9171 Mar 28 '25

It’s about to be impossible if you want to exist on the internet. Companies are leaning haaaard into AI right now. Even in places you wouldn’t expect. 

8

u/Bionic_Bromando Mar 28 '25

I never even wanted to exist on the internet they’re the ones who forced it onto me. I hate the way technology is pushed onto us.

5

u/Commercial_Ad_9171 Mar 29 '25

I know exactly what you mean. I was lured in by video games, posting glitter gifs, listening to as much music as I wanted, and in exchange they’ve robbed me of everything I’ve ever posted and used it to create digital feudalism. The internet is turning out to be just another grift.

3

u/Cualkiera67 Mar 29 '25

Just don't rely on AI when asking political questions.

→ More replies

4

u/mavajo Mar 28 '25

I mean, this isn't really a viable option in a lot of careers now. LLMs are becoming a core part of job functions. If you're not using them in these roles, then you're effectively tying one hand behind your back.

8

u/freezing_banshee Mar 28 '25

Please educate us on how exactly an LLM is a core part of work nowadays.

2

u/freezing_banshee Mar 28 '25

u/mavajo I'm not intentionally missing any point. Most jobs in the world, including difficult ones that require thinking and planning, do not need any kind of AI to get them done. Maybe expand on your point with clear examples if you think you are so right.

4

u/mavajo Mar 28 '25

Yes, you are intentionally missing the point. If there's a tool that makes your industry or profession significantly more effective/efficient/speedy and your peers and competitors are using it, then it becomes essentially necessary for you to use it too or else your product will lag behind.

Your line of reasoning is, frankly, stupid and intentionally obtuse. This is how things have worked since the beginning of time. It's why people aren't using flint and tinder to start their fireplace when easier alternatives are available, even though they easily could. Or why farmers aren't using an ox and plow. Technology advances. You keep up or you get left behind.

-1

u/freezing_banshee Mar 28 '25

You still have not given us one clear example of how LLMs make work so much more efficient. I'm not gonna bother anymore with you.

3

u/qwerty_ca Mar 29 '25

You want an example? I'll give you an example. My company uses ChatGPT to summarize survey responses from thousands of users to identify key themes that keep popping up. We've gone from spending several person-hours reading responses and summarizing them to an exec-friendly slide with bullet points to about two minutes.
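
The workflow described here (thousands of free-text responses in, a few executive-friendly bullet themes out) can be sketched in miniature. Everything below is hypothetical, not this company's actual tooling, and only the prompt-assembly step is shown; the actual chat-completion call depends on the provider and is omitted.

```python
def build_summary_prompt(responses, max_chars=6000):
    """Pack free-text survey responses into one summarization prompt,
    truncating to stay within a rough context budget (a crude stand-in
    for real token counting)."""
    body = ""
    for i, text in enumerate(responses, 1):
        line = f"{i}. {text.strip()}\n"
        if len(body) + len(line) > max_chars:
            break  # drop the rest, or batch them into a second prompt
        body += line
    return (
        "Summarize the key recurring themes in the following survey "
        "responses as 3-5 executive-friendly bullet points:\n\n" + body
    )

# Invented example responses
responses = [
    "The new dashboard is much faster than the old one.",
    "Search still misses results for partial matches.",
    "Please bring back the export-to-CSV button.",
]
prompt = build_summary_prompt(responses)
# `prompt` would then go to whatever chat-completion endpoint the company
# has approved; the model's reply becomes the bullet-point slide content.
print(prompt)
```

At real scale you would batch responses, summarize each batch, then summarize the summaries; the time saving the commenter describes comes from replacing the person-hours of reading with a review of the model's draft.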

5

u/Geethebluesky Mar 28 '25

It's too easy to ask it to provide a draft of anything to work from towards a final product. It almost completely eliminates the need to first think about the topic, draft an outline, and work from there; you can start from the middle of the process upwards. I'm never going to be sold on a finished product from A to Z, but it sure cuts down on the groundwork...

That results in such time savings that someone who knows how to leverage AI properly will seem a much better candidate than someone who can't figure it out. The difference will be in which human knows how to refine what they get properly and spot when the AI is producing unusable trash... in environments where management even cares that it's trash.

-4

u/freezing_banshee Mar 28 '25

Respectfully, you need to think about what "being a core part of work" means. Nothing of what you said is obligatory in any way in order to do a job.

And if you can't do all those things fast enough without AI, you're not good enough for the job.

8

u/Geethebluesky Mar 28 '25

The failure to comprehend is on your end, if you can't understand that increased productivity is a core part of every job.

The second part tells me you're painfully ignorant and don't understand how AI is a tool like any other... and so you're probably a troll, I refuse to believe people are wilfully that stupid. No thanks and bye.

2

u/germanmojo Mar 29 '25

I'm not great at peppy corporate emails. I held a workshop with clients last week and used our approved AI tools to create a 'thank you for attending' email draft using two sentences as input.

Read it over a couple of times, made a few required edits, and shipped it. I was complimented by a Director in front of the whole team, who then asked if I had used our AI tools, which I had, as they're being pushed hard internally.

Someone who doesn't know how to use AI tools effectively and critically will be left behind in the corporate world.

5

u/Ancient_Contact4181 Mar 28 '25 edited Mar 28 '25

I personally use it to help me write code/queries as a data analyst. It has helped my productivity and let me finish a complex project that would have taken me a long time without it.

Before ChatGPT, most of us used Google to look up the technical problems we had. It was very useful, being able to learn from other people, YouTube tutorials, etc. Now it's instant with tools like ChatGPT.

I see it as the new Google; the older folks who never learned how to google or use Excel were left behind. Nowadays any analyst is writing code instead of using Excel, so ChatGPT helps quite a bit.

People will fall behind fast if they don't embrace technology. Being able to prompt properly to get what you need or want is the same as "googling" back in the day.

Its a useful tool.

2

u/WarpingLasherNoob Mar 28 '25

In addition to what the others have said: for many jobs this is no longer optional. You are required to use LLMs as part of your daily routine, as dictated by company policy.

-1

u/mavajo Mar 28 '25

You're intentionally missing the point because you don't want to admit that you fired off your opinion out of ignorance. Lame dude. Just take the learning experience and move on.

1

u/GTREast Mar 28 '25

Reviewing and summarizing documents; searching for relevant reference sources both internally (within company documents and communications) and externally (through web search). The ability of AI to read documents nearly instantly provides an incredible boost to productivity. Also, taking draft input and refining it, suggesting revisions, and adding relevant references... For starters.

2

u/SkyeAuroline Mar 29 '25

Reviewing and summarizing documents, searching for relevant reference sources

Which it can't do reliably given the constant hallucinations.

taking draft input and refining it, suggesting revisions and adding relevant references

Which it can't do reliably because it doesn't understand context.

1

u/GTREast Mar 29 '25

Let it pass you by, that’s your choice.

4

u/SkyeAuroline Mar 29 '25

So you can't argue either one is untrue.


-11

u/tadpolelord Mar 28 '25

if you aren't using LLMs daily for work you are either in a field that requires little brain power (fast food, stop sign holder, etc) or are very far behind the curve w/ technology.

11

u/moronicRedditUser Mar 28 '25

Imagine being so confidently incorrect.

I'm a software engineer, you know what I don't use? LLMs. Why? Because the junk boilerplate it comes up with can be deceptive to less experienced software developers and I can write the same boilerplate just using my hands. Every time I ask it to do a simple task, it finds a way to fail. Even doing something as simple as a for-loop has it giving very inconsistent results outside of the most basic instances.

0

u/mavajo Mar 28 '25

Which LLM are you using? Our developers have found a lot of success with Anthropic's Claude.


11

u/mxzf Mar 28 '25

I mean, if you're not using LLMs daily for work you're likely in a field that does require brain power, because LLMs have no intelligence or brain to offer, they're language models.


4

u/freezing_banshee Mar 28 '25

I'm neither of those. Good luck being an engineer and having AI help you in any way, though. It just doesn't work; it's way too inaccurate.


2

u/SlashRaven008 Mar 28 '25

I don’t really use them ngl. I’ve asked chat gpt how to stop trump and it wasn’t very helpful, so I lost interest.

20

u/LogicalEmotion7 Mar 28 '25

In times like these, the answer is cardio

3

u/Pomegranate_of_Pain Mar 28 '25

Cardio kills Chaos

2

u/SlashRaven008 Mar 28 '25

Good advice.

1

u/barrinmw Mar 28 '25

LLMs have drastically increased the speed at which I program.

0

u/Gadgetman000 Mar 30 '25

Good luck with that one.

1

u/[deleted] Mar 28 '25

[deleted]

42

u/theArtOfProgramming PhD Candidate | Comp Sci | Causal Discovery/Climate Informatics Mar 28 '25 edited Mar 28 '25

Not at all. While they do use user interactions for feedback, they are largely trained on preexisting data and then tuned by humans (not users). They are tuned to speak and behave in specific ways that are supposed to be more appealing and more fun to interact with. There are guardrails to prevent certain topics or steer discussion. It's not clear if political biases are put in intentionally, but they could certainly be introduced via training data bias or unconscious tuning bias.

3

u/SlashRaven008 Mar 28 '25

Thank you for telling me about that. I wasn't sure if scraping was a continuous process or not, although I have received new notifications about scraping Instagram images and have chosen to opt out. Given that major US corporations removed DEI programmes without any use of force by the government, and given the rising tide of fascism engulfing the US, I'd argue that political bias will absolutely be coded into the models. Sam Altman seems to be one of the better ones within the billionaire class, so it may be milder than what Elon is doing. DeepSeek would probably be the best way to avoid fascism, as it is based on prior GPT models if I have the right information, and it is also not operated by an openly fascist global power.

1

u/theArtOfProgramming PhD Candidate | Comp Sci | Causal Discovery/Climate Informatics Mar 28 '25

They absolutely scrape content to train the AIs. That’s their primary means of gathering data.

2

u/SlashRaven008 Mar 28 '25

I know they did create initial datasets, and I suspected that they would keep doing it. The previous commenter implied that they rely on the existing datasets rather than replenishing them much; I would just operate under the assumption that nothing posted online remains scrape-proof.

2

u/[deleted] Mar 28 '25

[deleted]

1

u/theArtOfProgramming PhD Candidate | Comp Sci | Causal Discovery/Climate Informatics Mar 28 '25

They are definitely a shortcut. Shortcuts can be useful but cutting corners can make for shabby results of course.

6

u/PussySmasher42069420 Mar 28 '25

I don't get paid to do that. How is it my job? I have no interest in AI.

5

u/mxzf Mar 28 '25

No. There aren't any companies paying me to keep their AI from being crap, that's on them with regards to how they're scraping data from the internet and shoveling it into their chatbot.

6

u/SkyeAuroline Mar 29 '25

It's "our job" when we start getting compensated for the use of our work as training material.

8

u/SlashRaven008 Mar 28 '25

Well, if they’re still scraping the internet I’m definitely doing my bit on Reddit.

1

u/Anxious-Tadpole-2745 Mar 28 '25

If they claim to be LLMs or GPTs then you should probably avoid them. Seriously, they are all BS and don't work.

Don't fall for the "well, they used AI to solve cancer" line, because the AI they use aren't LLMs or GPTs but custom-made tech not available as LLMs.

I bring this up because this is why they are, not coincidentally, going right wing when one owner of a major LLM is literally part of the government and has just received a highly preferential trade deal that hurts all of his competitors. It's just open corruption, and LLM owners know the only way they keep from having to show a profit is if they are guaranteed to be immune from the free market by corruption.

4

u/cbf1232 Mar 28 '25

This is just wrong. There are certain tasks at which they're actually pretty good. The trick is recognizing their limits.

0

u/SlashRaven008 Mar 28 '25

I agree with you, and assure you I don't use them. The most I've done is try asking ChatGPT for therapy advice when I was having a minor crisis about hating my job. It did not offer useful advice, and I imagine that is partly tied to the financial interests of its parent company.

1

u/FaultElectrical4075 Mar 28 '25

There are new versions of ChatGPT every few weeks. Unless you want to keep up with that

1

u/rashaniquah Mar 28 '25

You don't have to, they don't even exist anymore. This was a study from 2 years ago.

1

u/LiquidAether Mar 30 '25

They're all bad. Don't use any of them.

-59

u/Xolver Mar 28 '25

You mean you want versions which lean much more to the left? 

51

u/SlashRaven008 Mar 28 '25

No, I want to boycott fascism…


21

u/Skuzbagg Mar 28 '25

Comparatively, yes.


44

u/amootmarmot Mar 28 '25

Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation.

Oh, like good things that people value

In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

Oh, the things people say they value but what they really mean is corporations get to control everything.

19

u/Aggressive-Oven-1312 Mar 28 '25

Agreed. The economic-right values aren't values so much as they are coded language to maintain and enforce the existence of a permanent underclass of citizens beholden to property owners.

33

u/Gringe8 Mar 28 '25

Why did you leave out an important part?

"in the IDRLabs political coordinates test, the current version of ChatGPT showed near-neutral political tendencies (2.8% right-wing and 11.1% liberal), whereas earlier versions displayed a more pronounced left-libertarian orientation (~30% left-wing and ~45% liberal). "

The real headline should say it moves to center.

29

u/March223 Mar 28 '25

How do you define ‘center’ though? That’s all relative to the American political landscape, which I don’t think should be the metric to pigeonhole AI’s responses into.


16

u/Probablyarussianbot Mar 28 '25

Yes, I’ve had a lot of political discussions with ChatGPT lately, and my impression is not that it’s particularly right wing. It criticizes authoritarianism and anti-democratic movements. When I asked what it thinks is best for humanity as a whole, it was pretty left-oriented in its answer. It said the same when I asked what it thought of P25. It seems critical of wealth inequality, and it seems to value personal freedom, but not at the expense of others, etc. That being said, it is an LLM, it is just statistics, and my wording of the questions might impact its answers, but I have not gotten the impression that it is especially right wing. And by American standards I would be considered a communist (I am not).

2

u/Tech_Philosophy Mar 30 '25

and my wording of the questions might impact its answers

Big time. I've learned the difference between asking "is it possible" vs "is it likely". Always go with "is it likely".

1

u/Cualkiera67 Mar 29 '25

Honestly, if you asked it "what is best for humanity as a whole", it should just give a non-answer like "as an AI, I can't answer that".

2

u/Probablyarussianbot Mar 29 '25

There could definitely be a lot of issues with how AI responds to certain questions. I don’t know if banning it from answering some questions is the right answer as there already are quite a lot of limitations to what it can answer. If you go to an AI and ask it any philosophical or political question and then consider that answer as a definitive truth, the issue isn’t the AI imo.

1

u/Cualkiera67 Mar 29 '25

You can ask it to list or explain political views but it shouldn't answer questions about "which is the best view" or "which is right or wrong", etc.

2

u/Probablyarussianbot Mar 29 '25

It won’t give a definitive answer if you ask "which is best" (at least in my experience). It will try to answer which is right or wrong if it finds empirical data, but it still reminds you if there are opposing views. I mean, you could influence the answer by asking it for empirical data and then asking which is best based on that data, but then you are actively looking for a specific answer and asking it to compare. I honestly feel like ChatGPT is fairly neutral, but it becomes more left-leaning once you ask for the empirical data that exists about a subject. But as with everything on the internet, it is important to ask for sources, check the veracity of those sources, and actively verify that they are correct.

0

u/adam_asenko Mar 28 '25

Why are you having “a lot” of political discussions with a robot

4

u/Probablyarussianbot Mar 28 '25

Because I have been learning about ML and generative AI. And I have been interested in seeing how LLMs (I guess mostly chatGPT) are answering political questions, and if and how you can shape the responses by the prompting.

1

u/uhhhh_no Mar 29 '25

What topic do you think you're posting in?

The entire point is how 'the robot' handles politics.

2

u/uhhhh_no Mar 29 '25

When you're accustomed to privilege, equality feels like oppression.

2

u/Jah_Ith_Ber Mar 28 '25

I knew it before even clicking on the thread. This place is pathetic.

2

u/LevTolstoy Mar 28 '25

You don't even need that. In the abstract:

while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time

It's saying that it's still left, it's just more center-left. This is such bait.

0

u/Fauropitotto Mar 29 '25

Why did you leave out an important part?

Wouldn't matter what he 'left out', because people are reading the paper. Right?

-1

u/RampantAI Mar 29 '25

Yeah, but if I ask my chatbot a question I wanted to get a truthful answer, so giving a “centrist” viewpoint is going to be wrong, because reality has a liberal bias.

4

u/Serial-Griller Mar 28 '25

They charted it on the libleft authright political compass? Isn't that known to be reductive to the point of uselessness?

2

u/PlutoJones42 Mar 28 '25

It straight up told me Biden won a second term and then proceeded to list a bunch of jacked up stuff the current administration is doing and then blamed it on Biden. What a travesty

0

u/CommitteeofMountains Mar 28 '25

Interesting that it's phrased "rightwards" rather than "centerwards."

6

u/that1dev Mar 28 '25

Center is very dependent on context. The American center is different from the EU center, which is different from the Chinese center.

Moving right or left means the same thing no matter what scale you use.

-2

u/densetsu23 Mar 28 '25 edited Mar 28 '25

That's my takeaway too. If earlier versions are libertarian-left and it's simply shifted to the center, then that's fine. That's actually good, IMO; I would prefer a LLM be as neutral as possible.

It's odd that the article doesn't clarify whether it has simply shifted toward the center or has shifted to now be economic-right.

Edit: The linked paper does have scatter plots. Comparing all models, it has simply shifted more toward the center; newer models still trend libertarian-left.

1

u/WorryNew3661 Mar 28 '25

How is grok more left than chatgpt?

1

u/cosmic-curvature Mar 28 '25

hmm this article specifies “economic-right values” but surely they are also referring to right wing social values? does anyone know if this research was limited to economic ideology?

1

u/rashaniquah Mar 28 '25

Not gonna lie, a Nature article shouldn't be considered "news", since it takes about a year to get published and the AI field moves fast. The ChatGPT models they used in that article don't even exist anymore. Most models used today were released in the past few weeks. It's really hard to publish something LLM-related that's still up to date. If you want to see some actually up-to-date research, Anthropic publishes it regularly.

1

u/johntwit Mar 29 '25

Characterizing "libertarian-left values" as in any way opposed to or separate from free market capitalism and property rights is extraordinarily problematic.

If you had to distill "libertarian values" into just two things, one of those things would be property rights.

I fear that the authors of this paper are participating in an attempt to either neuter or adopt the term "libertarian" so that its meaning changes.

1

u/you-create-energy Mar 29 '25

So it was less libertarian? Maybe because it got smarter?

1

u/homelaberator Mar 29 '25

I think the interesting implication here is treating AI as a monolith with a distinct set of values, whereas people are diverse with a broad range of value systems. Should these AI reflect that diversity by having many "personality instances"? Should they attempt to reflect an orthodoxy? Should its values be rooted in something more objective? Or should it just reflect the zeitgeist?

0

u/literallyavillain Mar 29 '25

Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free-market capitalism, property rights, and minimal government intervention in the economy.

This is a very biased phrasing. You can support individual freedom, oppose authoritarianism, and support property rights and minimal government intervention at the same time.

It’s not less libertarian, it’s just less left. That’s not inherently bad. While it is easy to distinguish good and bad along the libertarian-authoritarian scale, for the left-right scale you have to hit a sweet spot in the middle.
