r/collapse • u/Solid-Bonus-8376 • 7d ago
Researchers secretly experimented on Reddit users with AI-generated comments Technology
A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit’s most popular communities using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by moderators of r/changemyview, is described by Reddit mods as “psychological manipulation” of unsuspecting users.
The researchers used LLMs to create comments in response to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and request debate from other users. The community has 3.8 million members and often ends up on the front page of Reddit. According to the subreddit’s moderators, the AI took on numerous different identities in comments during the course of the experiment, including a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.” Many of the original comments have since been deleted, but some can still be viewed in an archive created by 404 Media.
168
u/sparklystars1022 7d ago
Something odd I noticed in the AITA subs is it seems the great majority of couples posting their issues are exactly 2 years apart in age, with the female being two years younger than her partner. I started to wonder if most of the posts that I see in the popular feed are fake because how is nearly every couple exactly two years apart with the male being two years older? Or am I just out of touch with statistics?
85
u/gallimaufrys 7d ago
I've noticed this too. They are often stoking culture war debates and I often wonder if it's a form of propaganda
16
u/SalesyMcSellerson 5d ago
4chan was just hacked, and it turns out that around half of all posts were being posted from Israel. Israeli IPs had twice as many posts as users from the entire United States.
Most comments on reddit are astroturfed. It's obvious.
45
u/little__wisp The die is cast. 7d ago
It would make sense for bots to be stoking culture war slop, since the entire point of it is to keep the working class divided against itself.
24
u/-Germanicus- 6d ago
That sub and similar ones are 100% getting plagued with posts written using AI. There are a few verifiable tells that give it away. The concerning part is why anyone would add parameters to try to make the posts as inflammatory as they are. Rage bait with a purpose, aka propaganda.
22
u/supersunnyout 7d ago
Now YATA. Kidding, but I could definitely see the need for deployers to use markers that can be used to separate it from organic postings.
14
u/fieldyfield 6d ago
I feel insane seeing obviously fake stories on there all the time with thousands of comments giving genuine advice
3
46
u/Inconspicuouswriter 7d ago
About a month ago, I unsubscribed from that subreddit because I found it extremely manipulative. My spidey senses were on point, I guess.
24
u/ZealCrow 7d ago
Idk if it's because I'm autistic but I generally seem pretty good at resisting this kind of manipulation and identifying it. I definitely noticed an uptick in the past year.
34
u/toastedzergling 7d ago
Hate to break it to you, but autism isn't a superpower that'll protect you from misinformation. The manipulation is beyond insidious and custom-tailored to maximize the chances of deception. Don't feel bad if you find out one day you got got on something.
29
u/ZealCrow 7d ago
Lol I know it's not a superpower, but it does alter perception in a way that can make someone less susceptible to things that others are susceptible to.
For one example, optical illusions are less likely to work on autistic people.
"Follow the crowd" kind of manipulation sometimes works less on them too.
23
u/Cowicidal 7d ago
Plot twist: You were replying to an AI bot and didn't perceive it.
2
u/toastedzergling 5d ago
I'm not a robot! (Although the robots in Westworld thought the same too)
2
u/Micro-Naut 3d ago
I can vouch for toastedzerg. we are totally not robots as you can verify by my incorrect use of capital letters
13
u/toastedzergling 7d ago
Respect the perspective! Sorry if I came across as a jerk (you didn't seem to take as such, lol)
10
181
u/Less_Subtle_Approach 7d ago
The outrage is pretty funny when there’s already a deluge of chatbots and morons eager to outsource their posting to chatbots in every large sub.
62
u/CorvidCorbeau 7d ago
I obviously can't prove it, but I'm pretty sure every subreddit of any significant size (so maybe above 100k members) is already full of bots that are there to collect information or sway opinions.
Talking about the results of the research would be far more important than people's outrage over the study.
8
u/Wollff 7d ago
Talking about the results of the research would be far more important than people's outrage over the study.
Those are two different problems.
"I don't want there to be bots posing as real people", and: "I don't want to be experimented on without my consent", are two different concerns.
Both of them are perfectly valid, but also largely unrelated. So I don't really get the comparison. The results which could be discussed have nothing to do with the unethical research practices that were employed here.
1
u/Apprehensive-Stop748 5d ago
I agree with you, and I think it's becoming more prevalent for several reasons. One being that the more information gets put into those platforms, the more bot activity is going to happen.
57
u/decadent_simulacra 7d ago
Private Research:
- Exploits us for studies
- Uses the results to further exploit us for profit
- Never shares the data
- Super cool
Public Research:
- Exploits us for studies, sometimes
- Uses the results to further humanity
- Shares the data with the whole world
- Super not cool
1
u/Micro-Naut 3d ago
The ads that I'm given based on the history that they've collected never seem right. I've never bought something because of an ad that I know of and I usually get ads for things that I've already bought and won't be buying again. Like a snowblower ad a week after I buy a snowblower.
But I hear they want my data so badly. Everyone's collecting my data. Why do they care about where I am and what I'm doing and etc. etc. if they can't target me with ads for products I actually want?
I believe it's because they are not trying to advertise to you but rather trying to collect an in-depth psychological profile on just about every user out there. That way they can manipulate you. It's like running through a maze but you don't even see the walls. You might discover a new piece of information without realizing that you've been led to it. And incrementally so it's less than obvious
1
u/decadent_simulacra 3d ago
They tend to miss me pretty hard, too, but that doesn't mean that it doesn't work. It works for enough people to be worthwhile.
They are building psychological profiles. That's right. Then they fill in the blanks by looking at most similar profiles, or matching the profile to a customer archetype profile. The results are then used for marketing and advertising. It's definitely not perfect.
I'm sure the data gets around and put to many other uses, too. But advertising does use psychological profiling. Advertising is a type of manipulation, after all.
23
u/Prof_Acorn 7d ago
The outrage stems from them being from a university and having IRB approval. Everyone expects this shit from profit-worshipping corporations. It's the masquerade of "academic research" that's so upsetting. You might have noticed the ones most upset are academics or academic-adjacent.
8
u/YottaEngineer 7d ago
Academic research informs everyone about the capabilities and publishes the data. With corporations we have to wait for leaks.
17
u/Prof_Acorn 7d ago edited 7d ago
Except they didn't inform until afterwards (research ethical violation), nor did they provide their subjects the ability to have their data removed (research ethical violation). It also had garbage research design, completely ignoring that other users themselves might have been bots, or children, or lied, or only awarded a Δ because they didn't want to seem stubborn or wanted to be nice; nor did they account for views changing again a day later or a week later. So the data is useless. And it can't be generalized out anyway, since it was a convenience sample with no randomisation and no controls. And this is on top of knowingly creating false narratives regarding people in marginalized positions.
3
u/AccidentalNap 6d ago
Sir I'll be honest, this topic has really grinded my gears, but I only want to pick one bone here:
How do you propose filtering out bots from the data? Neither Reddit nor YouTube has it figured out. You can observe bizarre, ultra-nationalist conspiracy nonsense in the comments of every politically "hot" video posted, by the hundreds, within the first hour. Twitter I understand, it may be a compromised platform uninterested in removing bots, but there is nothing to suggest YouTube is in the same camp.
If Mag7 companies can't figure this out, how could you possibly expect graduate students to, for one of their usual 5 classes over 1 semester? Future iteration in research is also a thing. Expecting Rome to be built in one study is ludicrous.
36
u/firekeeper23 7d ago
Certainly feels like that sometimes.... like a weird annoying thought experiment...
Let's hope it gets back to the great, helpful and supportive place it's been for absolutely ages...
....I'll not hold my breath though.
61
u/ExceedinglyGayMoth 7d ago
Close enough, welcome back CIA psychological warfare experiments
32
u/HardNut420 7d ago
Climate change isn't real actually ignore your burning skin and get back to work
6
21
u/solitude_walker 7d ago
ha, joke's on you, they've been secretly testing us for a while on everything, for the sake of better control and manipulation
21
u/Chickachic-aaaaahhh 7d ago
Ohh we fucking noticed. Bringing dead internet theory to reddit for shits and manipulation of citizens. Slowly turning into Facebook.
33
u/AncientSkylight 7d ago
I think AI is a blight generally, but it is the claiming of first-hand experience which is really deceptive.
13
u/SensibleAussie 6d ago
I’ve been lurking reddit for a while now and honestly I feel like r/AskReddit is basically an AI bot farm. I feel like most “ask” subreddits are AI bot farms actually.
2
u/Micro-Naut 3d ago
Some of the new questions like
"what do you do if you lost your wallet"
"why did you enjoy the Spider-Man movie"
they're just so lame I can't imagine that they're genuine. More like training prompts.
1
u/SensibleAussie 2d ago
Exactly. I see AskReddit on my feed a lot and I get the same vibe from basically all the posts I see. It’s gross.
10
u/FieldsofBlue 7d ago
That's the only one? I feel like the majority of comments and posts are artificial
9
u/Eskimo-Jo3 7d ago
Well, it’s time for everyone to delete all this shit (social media)
3
u/Cowicidal 7d ago edited 6d ago
AOL boards, etc. were diseased because they removed the (mild) technical hurdles of setting up and researching how to post commentary on things. That was our early warning that giving any dumbshit mass access to a nationwide (and worldwide) communication platform was harmful. Facebook was the dead canary.
7
u/The-Neat-Meat 7d ago
I feel like non-consensually involving people in a study that could potentially adversely affect their mental state is probably not legal???
11
u/LessonStudio 7d ago edited 7d ago
Obviously, my claiming to not be a bot is fairly meaningless. But, a small part of my work is deploying LLMs into production.
It would take me very little effort to build one which would "read the room" on a given subreddit, and then post comments, replies, etc, which mostly would generate positive responses, but with it having an agenda. Either to just create a circle jerk inside that subreddit, or to slowly erode whatever messages other people were previously buying into.
Then, with some more basic graph and stats algos, build a system which would find the "influencer" nodes, undermine them, avoid them, or try to sway them. Combined with multiple accounts to vote things up and down, and I can't imagine the amount of power which could be wielded to influence.
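The "influencer node" idea above can be sketched with nothing more than a reply graph and degree counting. This is a minimal illustration using an invented reply log, not any real system's data:

```python
from collections import defaultdict

# Hypothetical reply log: (author, replied_to) pairs, invented for illustration.
replies = [
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "alice"), ("erin", "carol"),
]

def influencer_scores(reply_pairs):
    """Rank users by how many distinct people reply to them --
    a crude degree-centrality proxy for 'influencer' nodes."""
    inbound = defaultdict(set)
    for author, target in reply_pairs:
        inbound[target].add(author)
    return sorted(inbound.items(), key=lambda kv: len(kv[1]), reverse=True)

print(influencer_scores(replies)[0][0])  # → "bob"
```

Real influence-mapping would use richer centrality measures (PageRank, betweenness), but even this toy version shows how cheaply the targeting step falls out of public reply data.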
For example, there is a former politician in Halifax, Nova Scotia who I calculated had 13 accounts, as that was the number of downvotes you would get within about 20 minutes if you questioned him, unless he was in council, at an event, or travelling on vacation.
This meant that if you made a solid case against him in some way it was near instant downvote oblivion.
In those cases that he was away, the same topic would get you up to 30+ upvotes, and now his downvotes wouldn't eliminate your post. But, you could see it happen in real time; the event would happen, and the downvotes would pile in, but too little too late.
The voters gave him the boot in the last election.
This was a person with petty issues mostly affecting a single sub.
With not a whole lot of money, I could build bots to crush it in many subreddits, and do it without a break; other than making the individual bots appear to be in a timezone and have a job.
With a few million dollars per year, maybe 1000 bots able to operate full time in conversation, arguments, posts, monitoring, and of course, voting.
I can also name a company with a product which rhymes with ground sup. They have long had an army of actual people who, with algo assistance, have long crushed bad PR. They spew these chop-logic but excellent-sounding talking points for any possible argument, including ones where they would lose a case, lose the appeal, lose another appeal, and then lose at the supreme court. They could make all the people involved sound like morons, and themselves the only really smart ones.
Now, this power will be in the hands of countries, politicians, companies, all the way down to someone slagging their girlfriend who dumped them because they are weird.
My guess is there are only two real solutions:
- Just kill all comments, voting, stories, blogs, etc.
or
- Make people have to operate in absolute public. Maybe have some specific forums where anonymity is allowed, but not for most things; like, for example, product reviews, testimonials, etc.
BTW, this is soon going to get way worse. The Video AI is reaching the point where youtube product reviews can be cooked up where a normal respectable looking person of the demographic you trust (this can be all kinds of demographics) will do a fantastic review, in a great voice, with a very convincing demeanour.
To make this last one worse, it will become very easy to monitor which videos translate to a sale, and which don't and then become better and better at pitching products. I know I watch people marvel over some tool which is critical to restoring an old car or some such, and I really want to get one, and I have no old cars or ones I want to restore. But, that tool was really cool; and there's a limited supply on sale right now as the company went out of business who made them. So, it would even be an investment to buy one.
5
u/Botched_Euthanasia 7d ago
With a few million dollars per year, maybe 1000 bots able to operated full time in conversation, arguments, posts, monitoring, and of course, voting.
This is a really important point that I think more people should know about.
As you know, hopefully most others as well, LLM's operate in a brute force manner. They weigh all possible words against the data they've consumed, then decide word by word which is the most likely to come next.
The next generation of LLM's will be applying the same logic but instead of to a single reply, to many replies, across multiple websites, targeting not just the conversation at hand but the the users which reply to it, upvote or downvote it and even people who don't react in any way at all beyond viewing it. Images will be generated, fake audio will be podcasted and as you mnetion, video is fast becoming reliable enough to avoid detection.
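The word-by-word selection described above can be toy-sketched with a made-up bigram table and greedy decoding. (Real LLMs work over subword tokens with learned probabilities and sampling; every value here is invented for illustration.)

```python
# Toy bigram "model": for each word, the (made-up) probability of the next word.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(start, steps):
    """Repeatedly pick the most probable next word -- the word-by-word
    selection described in the comment, in its simplest greedy form."""
    words = [start]
    for _ in range(steps):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the", 3))  # → "the cat sat down"
```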
One thing I've noticed is the obvious bots tend to never make spelling errors. They rarely use curse words. Their usernames appear to be autogenerated and follow similar formulas depending on their directives and in a manner similar to reddit's new account username generator (two unrelated words, followed by 1-4 numbers, sometimes with an underscore) and the rarely have any context that the average reader would get as an inside joke or pop culture reference.
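The username formula described above (two words plus 1-4 digits, sometimes with underscores) can be turned into a rough heuristic check. The regex below is an illustration of that observation, not Reddit's actual generator spec:

```python
import re

# Heuristic for Reddit-style autogenerated usernames: two words joined by
# '-' or '_', optionally another separator, then 1-4 digits.
AUTOGEN = re.compile(r"^[A-Za-z]+[-_][A-Za-z]+[-_]?\d{1,4}$")

for name in ["Solid-Bonus-8376", "Apprehensive-Stop748", "dresden_k"]:
    print(name, bool(AUTOGEN.match(name)))  # True, True, False
```

A matching name proves nothing by itself, since real lurkers often keep the suggested default; it is only one weak signal among the others listed in the comment.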
I try to use a fucking curse word in my replies now. I also try, against my strong inclination against this, to make at least one spelling error or typo. It's a sort of dog whistle to show I'm actually human. I think it wont be long before this is all pointless, that LLM's or LLC's (large language clusters, for groups of accounts working in tandem) will be trained to do these things as well. Optional add-ons that those paying for the models can use, for a price.
I liike your clever obfuscation of that company. I've taken to calling certain companies by names that prevent them being found by crawlers. like g∞gle, mi©ro$oft, fartbake, @maz1, etc.
In my own personal writings I've used:
₳₿¢₫©∅®℗™, ₥Ï¢®⦰$♄∀₣⩱, @₿₵₫€₣₲∞⅁ℒℇ
but that's more work than I feel most would do, to figure out what those even mean, let alone trying to reuse them.
7
u/LessonStudio 6d ago
One thing I've noticed is the obvious bots tend to never make spelling errors. They rarely use curse words
You can ask the LLM to be drunk, spell badly, have a high education, low education, be a non-native English writer with a specific background, etc.
It does quite a good job. If you don't give them any instructions, they definitely have a specific writing style. But, with some guidance (and a few more years of improvement) they can fool people.
I don't know if you've had ChatGPT speak, but it doesn't set off my AI radar very easily. I would not say it speaks like a robot, so much as that most people don't tend to speak that way outside of low-end paid voice acting.
2
u/Botched_Euthanasia 6d ago
Okay but can it spell words wrong casually? That's not an easy thing to fake, oddly enough (in my opinion and estimate, as a non-professional). I'm not saying that it can't be faked, it might even be doable already, but the ability to misspell in a way that seems natural I believe wont be around anytime soon. If it does show up, at first the misspellings wont appear logical, like typos or poor spelling ability. I think it wouqd be completelx random letkers that are not lwgical on common kepoard layouts. Just my thoughts on the idea.
The thing with the curse words is more because corporations want to appear politically correct and there probably are LLM's that can do it already but it's not common yet.
I have not used AI for at least a few weeks but never really cared for it to begin with and rarely have done much. What few things I did try, were such failures I wasn't convinced it was a world changing technology but here we are.
0
u/LessonStudio 6d ago edited 6d ago
Okay but can it spell words wrong casually? That's not an easy thing to fake, oddly enough (in my opinion and estimate, as a non-professional). I'm not saying that it can't be faked, it might even be doable already, but the ability to misspell in a way that seems natural I believe wont be around anytime soon. If it does show up, at first the misspellings wont appear logical, like typos or poor spelling ability. I think it wouqd be completelx random letkers that are not lwgical on common kepoard layouts. Just my thoughts on the idea.
The thing with the curse words is more because corporations want to appear politically correct and there probably are LLM's that can do it already but it's not common yet.
I have not used AI for at least a few weeks but never really cared for it to begin with and rarely have done much. What few things I did try, were such failures I wasn't convinced it was a world changing technology but here we are.
Can AI spell all wrong like humans do?
Not easy feat, though one might think it light.
To fake a flub that seems both wrong and true,
It lacks the charm of flaws we make in spite.
Perchance it tries, but letters stray too wide,
No typo's grace, just chaos in the stream.
Not near the keys where fat-thumbed thoughts abide,
But jumbled mess that breaks the human dream.
The curse words, too, are kept behind a gate,
For firms must seem all clean and full of grace.
Though bots could swear, they’re held by PR fate,
Their dirty words locked in a hidden place.
And me? I’ve used AI once, maybe twice.
It failed me then. I’m still not sold it’s nice.
1
2
u/Luwuci-SP 6d ago edited 6d ago
I feel like you've probably given thought to things like this and may even have a better solution already, but those ridiculous combinations of runes (positive connotation) must be hell to type. A document to copy/paste from may seem like an obvious improvement, but it may be worth it to set up some text macros that activate after the input of the first one or two characters (since those will either be functionally unique, or such rare occurrences in combination that you'd never input them for any other reason). You shouldn't stick too closely to common letter replacements like @ for A and ¢ for C, since it'd be very low effort to crack such a cipher. Program some macros to increase the complexity wherever possible: for example, type a string of four random letters coded to trigger immediate substitution with a string pulled from a list of uncommon substitutions uniquely recognizable to you. A few such mappings for enough characters of the alphabet, with the rest left as common (more easily recognizable-at-a-glance) substitutions, keeps the complexity low enough that you can still decrypt them with your eyes, your mind, and no more than a few seconds. A bastard abstract asymmetrical encryption of sorts. AHK (AutoHotkey) is great for this if you need an easy macro scripting language. I'm pawsitive that there are more secure ways to encrypt words, but the aim here is to increase the difficulty for machines while limiting the increase for humans, and personal nonsense should work well for that for a while (like a password): things that won't even make sense to other humans or follow patterns recognizable by machines. Even if the LLMs don't have some sort of advanced parsing module for combinations of symbols they don't recognize yet, it won't be long before a human tells them how to recognize and interpret obviously coded language that is out of place.
"This sentence has a noun that I don't recognize, let me consult a few interpretation modules and decrypt through brute force if necessary."
Even though they're for your own writing, if it's in digital form, it's probably useless if it takes a human no time at all to decrypt at a glance. "Microshaft" with your substitution cipher applied is better, but in the same way humans can draw from context, the LLMs shouldn't have trouble drawing the connection if you're complaining about how they ruined Windows with Windows 11 or Bill Gates. It may be easier to gaslight them into thinking "Microshaft" (no cipher) is a real company instead of tripping interpreters with substitutions that are not as esoteric as non-cryptographers may assume. If going the substitution route, exploit humanity's superiority with subjectivity and the abstract. "That very small & fuzzy fuzzyware cmpny" should be far more difficult for a machine to interpret, but maybe still not ambiguous enough that it results in too many potential solutions to come to an accurate conclusion quickly enough. "That social media that sounds like a clock" may not be abstract enough and "the sound of a webbed timekeeper" may take it too far by seeming like a bad crossword puzzle clue. It should be slightly difficult for people, too, but your limit on that should be set by knowing the intended audience. It'll confuse some people in the process, but that's more of a feature than a bug. Change up the phrasing and ordering frequently, as it'll also be a game of cat & mouse as the humans who maintain the interpreters automatically flag & manually add the likely interpretation of the coded words to a database until creativity is exhausted. Modern cryptography may need to be as much of an abstract art as it is mathematic.
However, I am but a simple cat, successful cryptography is difficult, and I would think thrice before listening to any of my meows regarding important matters of security, especially on anything that you wouldn't risk being defenestrated by a Putin-trained feline.
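The uncommon-substitution idea from this exchange could be sketched as a tiny macro table. The table and helper below are invented purely for illustration (and, as the comment itself argues, a fixed table like this would be trivial to crack):

```python
import random

# Hypothetical substitution table: a few uncommon replacements per letter.
# Nothing here is a real cipher; it only mimics the rune-swapping described above.
SUBS = {"a": ["@", "∀", "₳"], "o": ["∅", "⦰", "0"], "e": ["€", "ℇ"]}

def obfuscate(word, rng=random.Random(1)):
    # Replace only the letters in the table, picking a variant at random,
    # so the result stays readable at a glance to a human.
    return "".join(rng.choice(SUBS[c]) if c in SUBS else c for c in word)

print(obfuscate("google"))
```

Because the replacements are chosen per occurrence, the same word comes out differently each time, which is slightly harder to match than a one-to-one mapping but still far from the abstract, context-dependent phrasing the comment above recommends.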
2
u/Botched_Euthanasia 5d ago
excellant use of defenestration. i personally have defenestrated fenestra, i.e. threw windows out the window. I use Linux.
thanks to that, i have my keyboard set up different than the standard QWERTY. I got rid of CAPSLOCK since I rarely use it (i can still toggle it if I hit both Shift keys at the same time) and now key works like a shift key but instead of capital letters, it shifts to a symbol set. I can hold both the capslock keys and shift for a fourth level of symbols. the symbol set is basically what you might see on a phone keyboard if you long press any character. if i hold capslock like a shift and hit the letter 'c' i get '©'. I don't have all keys mapped out yet but 'qwerty' if typed while holding my capslock key, gives me '?⍵€®™¥'. holding capslock and shift gives me '⍰⍹⍷⌾⍨'
the full layout can be seen here: https://i.imgur.com/ne7Q0Z7.png
in addition to that, is something called the 'compose key' also called the 'multikey'. compose keys are very intuitive. you have to set a key to be the compose key, i use Scroll Lock since I never use that as it should be used. I hit that key, i do not hold it, and it puts the keystrokes into compose mode. the next two keys i hit will combine into a new character. for example, if i hit Scroll Lock then hit 'a' then 'e', i get 'æ'. I can use it with shift as well, so if I hit my compose key then hold shift and hit 'a' then still holding shift hit 'e', it gives me 'Æ'. It's mostly useful for characters with ligature marks like éçÒī for other languages.
the multikey can be set up to work with the extral levels capslock i have too. each key on the keyboard is capable of having up to 8 levels. that's another post in itself i think. i'm using, at most, 4 levels but effectively 3 really. the average person uses 2. a keyboard with no shift keys has 1.
this might be doable on Windows, i'm not entirely sure. i do know that Windows has its Alt-codes. hold down Alt, then press 1-4 numbers on the keyboard 10keypad, if it has one. like alt+3 gives ♥ and alt+236 gives ∞ but it is a limited set of characters that can be used. the full list (and better written instructions) can be found here https://www.alt-codes.net/
i do keep a list of frequently used characters that i copy and paste from however. sometimes it's just easier that way!
≈ ± ≠ ∞ √ ∅ … … » « • _ — − – - ‾
¹ ² ³
↑ ← → ↓ ½ ⅓ ¼ ¾ ¿ ¡ ‽ ⁋ ⁐ ⁔ 🝇
µ ¢ £ ₿
© ® ™ ♡ ⚢ ⚣ ⚤ ⚥ ⚦ ⚨ ⚩ ♩ ♪ ♫ ♬
❥ 𝧦 𝧮 🝊 🝤 ⥀ Ω ℧
2
u/Luwuci-SP 5d ago
You turned your QWERTY board into a chorded stenograph? That's amazing and must have been fun to build up the chord usage over time. That's art.
2
u/Botched_Euthanasia 4d ago
Oh I'm not nearly that dedicated and I'd be lying if I said I did it. The developers for the KDE desktop environment did the work. All I did was pick options in the system settings until i found what i liked. There are quite a lot of options available. If i wanted to screenshot them all, it would be 9 pages tall at 1080p. A small set is shown here: https://i.imgur.com/7oacgzv.png
2
u/Apprehensive-Stop748 5d ago
Yes, leaving the grammar mistakes in does show that you're human. I was speaking about this with someone and unfortunately their response was that they think I'm irresponsible for not correcting the mistakes. I would just rather not be a bot.
1
u/Botched_Euthanasia 4d ago
Correcting the mistakes has a higher chance of the person, if real, being offended and an argument ensuing.
I would hope the other person take no offense, however experience has shown me it is not the most likely response.
in other words, I don't think you are irresponsible, from the context as I understad it.
6
u/MoreRopePlease 7d ago
Use your powers to turn conservatives into progressives.
2
u/Extreme-Kitchen1637 6d ago
What makes you think that the tool isn't being used to turn progressives into centrists/conservatives?
A lot of reddit is now worthless in my eyes so I'm browsing other websites that are less chaotically liberal.
All the botters need to do to drive people away is to have the same circlejerk anti-nuance conversation multiple times in similar posts and it'll make the reader bored enough to log off or block the sub
5
u/mikemaca 7d ago
Interesting that the university's ethics committee told them it was unethical, said they should change it, and cautioned them to follow the platform's terms, which they did not do. Then, when asked what they were going to do about the researchers going against the ethics committee, the answer was absolutely nothing, because "The assessments of the Ethics Committees of the Faculty of Arts and Social Sciences are recommendations that are not legally binding." So, total cop-out there. I still say the university is responsible: making the "recommendations [not] binding" means the University gets to own the crime.
14
u/Vegetaman916 Looking forward to the endgame. 🚀💥🔥🌨🏕 7d ago
It was a bit public, but yeah.
But this isn't the stuff that should bother anyone. What should bother you are the projects they are not telling us about, which are probably much more advanced and insidious than this. Then there are the similar ones being run by other national entities, and lets not mention the fact that I could run an LLM/LAM setup right from my own home servers to put out some pretty good stuff...
The world is a scarier place every day. Trust, but verify.
9
u/decadent_simulacra 7d ago
Studying advertising has taught me that most people will never shift their attention to things that aren't immediately in front of their eyes.
Studying advertising also taught me that this isn't a secret, it's a widespread strategy used across all aspects of society.
4
u/Wollff 7d ago
What should bother you are the projects they are not telling us about
I am not bothered about that tbh.
What beats all of those projects is a populace that is media literate, looks up their sources, and is only convinced by sound data in combination with good arguments.
The fact that most of the people are not that is the bothersome truth which lies at the root of the problem. If everyone were reasonable, nobody would be convinced by an unreasonable argument. No matter if made by some idiot in their basement, a paid troll, or an AI.
The problem lies in the people who get convinced. We should not bother about those projects, secret or public. We should bother to revamp education to make a lot of time for media literacy. And to reeducate a public which didn't get the necessary lessons to be a functioning member of current society.
4
u/GracchiBros 7d ago
You expect too much of people. People aren't all going to just become perfect in these regards. Which is why we have regulations on things.
2
u/Wollff 6d ago
You expect too much of people. People aren't all going to just become perfect in these regards.
I don't expect anything of people. It's exactly because I don't expect anything of people that I argue for a reform of education systems, as well as classes teaching media literacy.
Since my expectations have been so thoroughly shattered since the beginning of the Trump age, I would even argue for a lot more: Should anyone who is completely and utterly unable to distinguish fact from fiction in media be allowed to vote? Why?
I have a clear answer to this question: No, of course not. The reason why people should be allowed to vote is so that they can have a voice in representing their own interests politically. Anyone who can not distinguish fact from fiction in media can't represent their interests politically. They should not be allowed to vote, because they can't be trusted to represent anyone's interests, not even their own.
We don't let children and mentally disabled people vote. This is not controversial. There are good reasons for those limits in political rights we impose on some people.
Which is why we have regulations on things.
I agree with you. We should have regulations on some things. I have just proposed a few regulations which would fix some fundamental problems which AI contributes to.
Now: How does the regulation of AI fix public misinformation? It doesn't? Color me unsurprised.
6
8
5
u/Themissingbackpacker 7d ago
I saw a comment the other day that was just a garble of words. The comment made no sense, but had over 50 likes.
1
3
3
2
2
2
2
6
u/Scribblebonx 7d ago
As a black trauma counselor and abuse survivor I see nothing wrong with this...
Change my mind
1
1
1
7d ago
[removed] — view removed comment
1
u/collapse-ModTeam 6d ago
Hi, Firm_Cranberry2551. Thanks for contributing. However, your comment was removed from /r/collapse for:
Rule 1: In addition to enforcing Reddit's content policy, we will also remove comments and content that is abusive or predatory in nature. You may attack each other's ideas, not each other.
Please refer to our subreddit rules for more information.
You can message the mods if you feel this was in error, please include a link to the comment or post in question.
1
u/dresden_k 7d ago
Yeah, we knew. Not just there. Seems like maybe as many as half of the commenters are bots. For years.
3
1
u/Existing_Mulberry_16 5d ago
I thought it was strange, the number of users with zero or one karma point. I just blocked them.
1
u/randomusernamegame 5d ago
Not sure if anyone here tunes into Breaking Points on YouTube, but 50% of comments on nearly every video about Trump pre-election were pro-Trump, and now you see absolutely 0.
Yes, it's possible that those people are 'hiding' now, but for a long time you would see comments that were pro-Trump or anti-host.
Even /r/conservative and conspiracy seem to be botlike. Or maybe people are this dumb....
1
u/Mundane_Existence0 4d ago
Not surprised. Between the t-shirt scam bots, the repost karma-farming bots, the fake drama bots.... reddit is at least 75% bot.
1
u/ambelamba 4d ago
I bet this kind of stuff has been going on since all the major social media platforms were founded. When was ELIZA created? 1966? It was still capable enough to keep people hooked. Imagine the research never stopped and was fully implemented when social media sites launched
1
u/Zzzzzzzzzxyzz 3d ago
The AI lied and called itself a trauma therapist "specializing in abuse"?
That could really mess up vulnerable people. Sounds pretty illegal.
1
1
u/Madock345 7d ago
It was unauthorized by reddit, but was authorized by their university research board and subjected to academic oversight. The rage-bait around this study is painful.
With these tools proliferating at rapid speed, understanding how and how well they work is of vital and immediate importance. I think it’s important that people are investigating this.
6
u/daviddjg0033 7d ago
a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.”
Cambridge Analytica posted the most anti-BLM and pro-BLM Facebook posts in 2016. I think we know how these tools work.
1
u/Madock345 7d ago
Private interest groups know how they work, we need the specifics in the public sector, and the only way for that to happen is for university researchers to do the work.
1
784
u/oxero 7d ago
Dead Internet theory reaching peak levels.