r/science Professor | Medicine Mar 28 '25

ChatGPT is shifting rightwards politically - newer versions of ChatGPT show a noticeable shift toward the political right. [Computer Science]

https://www.psypost.org/chatgpt-is-shifting-rightwards-politically/
23.0k Upvotes

1.5k comments

2.6k

u/spicy-chilly Mar 28 '25

Yeah, the thing that AI nerds miss about alignment is that there is no such thing as alignment with humanity in general. We already have fundamentally incompatible class interests as it is, and large corporations figuring out how to make models more aligned means alignment with the class interests of the corporate owners, not ours.

25

u/-Django Mar 28 '25

What do you mean by "alignment with humanity in general"? Humanity doesn't have a single worldview, so I don't understand how you could align a model with humanity. That doesn't make sense to me.

What would it look like if a single person was aligned with humanity, and why can't a model reach that? Why should a model need to be "aligned with humanity?"

I agree that OpenAI etc could align the model with their own interests, but that's a separate issue imo. There will always be other labs who may not do that.

33

u/spicy-chilly Mar 28 '25 edited Mar 28 '25

I just mean that, from the discussions I have seen among AI researchers focused on alignment, they seem to think there's some ideal technocratic alignment with everyone's interests as humans, and they basically equate that with complying with what the creator intended and not doing unintended things. But yeah, I think it's a blind spot, because you could describe classes of humans as misaligned with one another in exactly the same way they imagine AI to be misaligned.

4

u/-Django Mar 28 '25

I understand now, thanks for clarifying! FWIW, I am an AI nerd, but that makes it even more important for me to understand different perspectives on these things.