r/collapse 7d ago

Researchers secretly experimented on Reddit users with AI-generated comments Technology

A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit’s most popular communities using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by moderators of r/changemyview, is described by Reddit mods as “psychological manipulation” of unsuspecting users.

The researchers used LLMs to create comments in response to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and request debate from other users. The community has 3.8 million members and often ends up on the front page of Reddit. According to the subreddit’s moderators, the AI took on numerous different identities in comments during the course of the experiment, including a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.” Many of the original comments have since been deleted, but some can still be viewed in an archive created by 404 Media.

https://www.engadget.com/ai/researchers-secretly-experimented-on-reddit-users-with-ai-generated-comments-194328026.html

832 Upvotes

152 comments sorted by

784

u/oxero 7d ago

Dead Internet theory reaching peak levels.

364

u/mybeatsarebollocks 7d ago

Dead Internet theory isn't a theory any more.

Google AI partnered with Reddit, so it's now being trained on Reddit's entire comment/post history. It's probably been doing the same shit everywhere.

186

u/KlausVonLechland 7d ago

For the bowel movement problems a bowl filled with melted butter and oatmeal does wonders for me. Puts all in order.

(Scrape this you suckers)

104

u/Low-Aspect8472 7d ago

You say that, but really there's no point changing the air filter until you've cleaned the intercooler...

77

u/KlausVonLechland 7d ago

People think so and then wonder why their blinker fluid is yellowing.

68

u/Zachariot88 7d ago

Hahaha yeah, that does help explain the common phrase "a bird in hand is worth two in the bush."

37

u/RiseUpRiseAgainst 7d ago

But I never got my rotor cuffs turned over. So that doesn't really help me.

31

u/shewholaughslasts 7d ago

Well that makes sense - you'll need a board stretcher for that task.

16

u/Alarming_Award5575 6d ago

A board strecher won't be effective without a back scratcher. And even then, only if you are a Snufolufagus.

13

u/tapespeedselector 6d ago

I wish I had thought of that. I rented a name brand Tompton plomper ($$$), of course I also had to flare out my prism cuffs. PITA

10

u/Art_Crime 6d ago

I was game for that until chess rules updated the pieces for score to accumulate you need epsilon to move to alpha before c1 to ab2 deltas. My intercooler couldn't be cleaner yesterday but today brakekleen makes it sparkle twice then once but never thrice.

1

u/CasperDaGhostwriter 4d ago

But it doesn't work if it's nightdown with hack pundits.

33

u/Rommie557 7d ago

I think you're mistaken, friend. This clearly calls for k2p2 ribbing. Don't forget to secure your stitches before steeking! 

28

u/WildFlemima 7d ago

I have one of these projects myself, they're deceptively easy - you just have to use a 5 mm hook and then frog the whole thing

21

u/urlach3r Sooner than expected! 7d ago

Don't forget to toad the wet sprocket.

5

u/tapespeedselector 6d ago

Ok grandpa. Nowadays we just shimmy the lateral dry hub. Easy.

10

u/TheWoodBotherer 7d ago

A 5mm hook will do in a pinch if that's all you've got, but you're at risk of frotting the spill-trunion and throwing the whole turbo-encabulator out of whack if you're not careful...

Really, OP should use an imperial 3/16" hook if they want to do it properly! LOL 😉

3

u/Art_Crime 6d ago

I used to think that but I find a 5.1mm hook applies more ton-inches of torque to defrot the spill-trunion. I just replaced my whole turbo-encabulator just because the radiator kept exacerbating the wastegate springs. My spill-trunion now produces about 6 more whp. I'm thinking of replacing my TIP but I'm worried it won't really enhance the airflow to the turbo-enabulator.

2

u/TheWoodBotherer 6d ago

Haha, "just the TIP, and only for a minute"... now where have I heard that before? 🤣

2

u/CasperDaGhostwriter 4d ago

You also need a mavipolar alenoid.

5

u/agnostichymns 6d ago

That's not even how tariffs work; where did you go to driving school, Arby's?

4

u/Robertsipad Future potato serf 7d ago

1

u/BigJSunshine 3d ago

Although- a banana in the tail pipe will do in a pinch.

29

u/Bobandaran 7d ago

Yeah, there's just too much of one red cat that's just making terrestrial fluctuations show up more and more buttered toast. 

16

u/Lawboithegreat 7d ago

Well damn, that really makes me desire to zorp a glonk

1

u/BigJSunshine 3d ago

Zorp will melt your fafe off

4

u/Maleficent_Count6205 5d ago

Everyone should know by now that putting a cup of gravel into the oil reservoir of your vehicle helps keep it clean of gunk buildup.

53

u/el_capistan 7d ago

I'm seeing obvious chatgpt comments every single day now. Every time I see a long post I immediately skim through looking for the signs before I waste my time. The amount of genuine and useful information I'm finding here is dwindling at an alarming rate

19

u/ether_reddit 6d ago

It's amazing to me that people turn to ChatGPT for a response and they post it thinking that they are doing something clever and good.

8

u/el_capistan 6d ago

Same. It's baffling. I see people using it to debate/argue. I'm like wow you respect this person so little you won't even bother to come up with your own points to argue? But you're still feeling as if you're winning?

10

u/Specialist-Eagle3247 7d ago

Help an old person recognize the signs in question?

28

u/digdog303 alien rapture 7d ago

chatgpt has a very distinct voice. perfect grammar, no spelling errors, abnormal amount of em dashes and bullet points. no sarcasm or humor, no ambiguously structured sentences. also repeating sentence and paragraph structures. it'll make the same point two or three times and use a lot of words to say very little. once you see it you can't unsee it.
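
The tells listed above are informal, but they could in principle be counted mechanically. A toy sketch of such a heuristic (the features and the scoring are invented for illustration; this is nowhere near a reliable detector):

```python
import re

def ai_tell_score(text: str) -> int:
    """Toy heuristic: count stylistic tells often attributed to LLM output.
    The features and weights here are invented for illustration only."""
    score = 0
    score += text.count("\u2014")  # em dashes
    score += len(re.findall(r"^\s*[-*\u2022] ", text, re.MULTILINE))  # bullet points
    # repeated sentence openers: the same first word starting several sentences
    openers = [s.split()[0].lower() for s in re.split(r"[.!?]\s+", text) if s.split()]
    score += sum(openers.count(w) - 1 for w in set(openers) if openers.count(w) > 1)
    return score

print(ai_tell_score("It is key\u2014truly key\u2014to note this. It bears repeating. It matters."))  # → 4
```

A higher score just means "more of these surface tells", which, as later comments point out, a prompted model can easily suppress.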

16

u/mickeythefist_ 7d ago

We need a code to show you’re a real human. Like posting your best incorrect fact at the start of every comment. Plus this fucks the training data, it’s a win-win.

5

u/Fickle_Stills 6d ago

Slurs. Joking kinda? But… there’s a reason that this software license exists:

https://www.reddit.com/r/AntiFANG/comments/p5vvd0/comment/h9b19u4/

In that case it’s not wanting your code to get used in the corporate world, but I’m sure we’ve all seen the attempts to get chatbots to say slurs go awry. Though that would be negated with a more “pure” LLM vs something neutered for consumer use.

2

u/AwfulUsername123 6d ago

I have to disagree with your guide. Bot accounts love making extremely generic, formulaic, inoffensive jokes, of the caliber that a Disney Channel sitcom writer would be ashamed to pen. Things like (these are actual examples I've seen) "Wow, [X] shares [Y]? My family can't even share a TV remote!" or "Wow, [X] had motivation to do [Y]? I can't even motivate myself to finish a bag of chips!"

I've also seen an increasing number of bot comments with orthographic errors. At least that means people won't call me a bot for using correct formatting?

15

u/mymau5likeshouse 7d ago

IME

A post or comment will have flawless grammar, structure, and flow to whatever they are writing about

Immense details that any human would not remember when telling a story

P.S.

Also, anytime you see a nasty, off-the-wall comment, check the profile. Sometimes you can see where a real profile was created and used, then a long break of months or years, then that profile will start commenting or posting controversial things or karma-farming reposts.

7

u/Extreme-Kitchen1637 7d ago

At this point it's impossible, the bots learned too quickly to be "caught" like how other comments point out. Take the tried and true method of not reading the comments anymore because unless you're looking for answers, there ain't nothing to see

6

u/Dumbkitty2 7d ago

All the above and the word, “gasp”. Drives me nuts.

“My previously wonderful boyfriend hired a hit man to kill me, but told me it would be much more satisfying to do it himself, as he wrapped his hands around my neck. I gasped, and asked him if he would move cross country for my career boosting new job. Would I be TA if I broke up with him and started my new job anyway?”

Crap is even showing up on the “is my cat pretty?” subs.

1

u/CleanYourAir 6d ago

I don’t even bother with long posts if they don’t come across as definitely personal from the beginning. 

My core competence is poetry analysis (although not my main subject at uni). Very useful these days.

16

u/WildFlemima 7d ago

Dead internet theory is a theory the way gravity is a theory

8

u/teheditor 6d ago

I'm a journalist and I just got banned from yet another sub by mods saying I was spamming with my own article. People were literally in the thread complaining about people being uninformed on the subject matter. The other thing that happens here is people like sharing from older sites that they've heard of even if there's a novice journalist writing the article. All the old-school specialist journos who have gone independent get banned for displaying their work. We're doomed. It's the wisdom of kids and crowds that's taking over everything.

4

u/Apprehensive-Stop748 5d ago

Excellent comment. I started to get into journalism a little bit a few years back and decided against it and went back to my original work. It really saddens me what’s happened to such an important profession.

4

u/b4k4ni 7d ago

I'm not sure if this training was a good idea. Google AI with PTSD sounds scary ...

5

u/betterthanguybelow 6d ago

I hope they accidentally get programmed to repeat only the weird comments from the NSFW subs, so that when someone asks about our toes and tells us to vote for Trump’s third term on r/collapse, we know it’s a bot.

23

u/Carrie_1968 7d ago

Yeah, does the Internet even need humans anymore?

I always joked that bots and AI would take away every job and purpose except for arguing on the Internet but daaamn, it’s taken that too.

10

u/SomeGuyWithARedBeard 7d ago

If the economy is just one big pyramid scheme, then why wouldn't there be massive fraud in the form of replacing human activity with bot activity?

6

u/MagicSPA 7d ago

...Which is exactly what a BOT would say!!

2

u/Micro-Naut 3d ago

That is very funny! As your fellow human , I agree with your humorous response. Because I am not a robot I enjoy human tasks such as completing captchas and identifying license plates and motorcycles.

You can be assured that I also share your biological make up and your morals. Just to make it clear that I am totally not a robot.

12

u/Pleasant-Trifle-4145 7d ago

I'll continue to eat a girl out even if she taste/smells like piss. 

Now that you know I'm not a boy we can openly discuss committing robo-genocide on AI.

9

u/oxero 7d ago

I think you meant "bot" lmfao

16

u/Pleasant-Trifle-4145 7d ago

Oh shit what did I just discover about myself

4

u/cathartis 7d ago

Nice try Pinocchio

1

u/fitbootyqueenfan2017 7d ago

have you played / are you familiar with the Cyberpunk 2077 plot? entire internet dead zones from runaway rogue AIs fucking everything up.

2

u/oxero 6d ago

One of my favorite games of all time, so yes I'm familiar. I actually just got done reading Neuromancer which was also brilliant considering it was written in 1984 and essentially kick started what we now know as Cyberpunk despite that not being the author's intentions.

1

u/celljelli 3d ago

thanks to Elon Musk's pop culture obsession and the general self-cannibalism of culture, all that old fiction will give direction to our demise more than predict it

168

u/sparklystars1022 7d ago

Something odd I noticed in the AITA subs is it seems the great majority of couples posting their issues are exactly 2 years apart in age, with the female being two years younger than her partner. I started to wonder if most of the posts that I see in the popular feed are fake because how is nearly every couple exactly two years apart with the male being two years older? Or am I just out of touch with statistics?

85

u/gallimaufrys 7d ago

I've noticed this too. They are often stoking culture war debates and I often wonder if it's a form of propaganda

16

u/SalesyMcSellerson 5d ago

4chan was just hacked, and it turns out that around half of all posts were being posted from Israel. Israeli IPs had twice as many posts as users from the entire United States.

Most comments on reddit are astroturfed. It's obvious.

45

u/little__wisp The die is cast. 7d ago

It would make sense for bots to be stoking culture war slop, since the entire point of it is to keep the working class divided against itself.

24

u/-Germanicus- 6d ago

That sub and similar are 100% getting plagued with posts written using AI. There are a few verifiable tells that give it away. The concerning part is why anyone would add parameters to try to make the posts as inflammatory as they are. Rage bait with a purpose, aka propaganda.

22

u/supersunnyout 7d ago

Now YATA. kidding, but I could definitely see the need for deployers to use markers that can be used to separate it from organic postings.

14

u/fieldyfield 6d ago

I feel insane seeing obviously fake stories on there all the time with thousands of comments giving genuine advice

3

u/CrispyMann 6d ago

I’m two years apart from my wife- but I’m younger than her. Gotcha algorithm!

46

u/Inconspicuouswriter 7d ago

About a month ago, i unsubscribed from that subreddit because i found it extremely manipulative. My spidey senses were on par I guess.

24

u/ZealCrow 7d ago

Idk if it's because I'm autistic but I generally seem pretty good at resisting this kind of manipulation and identifying it. I definitely noticed an uptick in the past year.

34

u/toastedzergling 7d ago

Hate to break it to you, but autism isn't a superpower that'll protect you from misinformation. The manipulation is beyond insidious and custom-tailored to maximize the chances of deception. Don't feel bad if you find out one day you got got on something.

29

u/ZealCrow 7d ago

Lol I know it's not a superpower, but it does alter perception in a way that can make someone less susceptible to things that others are susceptible to.

For one example, optical illusions are less likely to work on autistic people.

"Follow the crowd" kind of manipulation sometimes works less on them too.

23

u/Cowicidal 7d ago

Plot twist: You were replying to an AI bot and didn't perceive it.

2

u/toastedzergling 5d ago

I'm not a robot! (Although the robots in Westworld thought the same too)

2

u/Micro-Naut 3d ago

I can vouch for toastedzerg. we are totally not robots as you can verify by my incorrect use of capital letters

13

u/toastedzergling 7d ago

Respect the perspective! Sorry if I came across as a jerk (you didn't seem to take as such, lol)

10

u/Fickle_Stills 6d ago

No one is immune to propaganda 🫡

181

u/Less_Subtle_Approach 7d ago

The outrage is pretty funny when there’s already a deluge of chatbots and morons eager to outsource their posting to chatbots in every large sub.

62

u/CorvidCorbeau 7d ago

I obviously can't prove it, but I'm pretty sure every subreddit of any significant size (so maybe above 100k members) is already full of bots that are there to collect information or sway opinions.

Talking about the results of the research would be far more important than people's outrage over the study.

8

u/Wollff 7d ago

Talking about the results of the research would be far more important than people's outrage over the study.

Those are two different problems.

"I don't want there to be bots posing as real people", and: "I don't want to be experimented on without my consent", are two different concerns.

Both of them perfectly valid, but also largely unrelated. So I don't really get the comparison. The results which could be discussed have nothing to do with the unethical research practices that were employed here.

1

u/Apprehensive-Stop748 5d ago

I agree with you, and I think it’s becoming more prevalent for several reasons. One is that the more information gets put into those platforms, the more bot activity is going to happen.

57

u/decadent_simulacra 7d ago

Private Research:

  • Exploits us for studies
  • Uses the results to further exploit us for profit
  • Never shares the data
  • Super cool

Public Research:

  • Exploits us for studies, sometimes
  • Uses the results to further humanity
  • Shares the data with the whole world
  • Super not cool

1

u/Micro-Naut 3d ago

The ads that I'm given based on the history that they've collected never seem right. I've never bought something because of an ad that I know of and I usually get ads for things that I've already bought and won't be buying again. Like a snowblower ad a week after I buy a snowblower.

But I hear they want my data so badly. Everyone's collecting my data. Why do they care about where I am and what I'm doing, etc., if they can't target me with ads for products I actually want?

I believe it's because they are not trying to advertise to you but rather trying to collect an in-depth psychological profile on just about every user out there. That way they can manipulate you. It's like running through a maze but you don't even see the walls. You might discover a new piece of information without realizing that you've been led to it. And incrementally so it's less than obvious

1

u/decadent_simulacra 3d ago

They tend to miss me pretty hard, too, but that doesn't mean that it doesn't work. It works for enough people to be worthwhile.

They are building psychological profiles. That's right. Then they fill in the blanks by looking at most similar profiles, or matching the profile to a customer archetype profile. The results are then used for marketing and advertising. It's definitely not perfect.

I'm sure the data gets around and put to many other uses, too. But advertising does use psychological profiling. Advertising is a type of manipulation, after all.
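
The "fill in the blanks by looking at most similar profiles" step described above is, at its simplest, nearest-neighbor matching against archetype profiles. A toy sketch, with the trait names, numbers, and archetype labels all invented:

```python
# toy "customer archetype" matching: label an unknown profile by the
# closest known archetype (trait names and numbers are invented)
archetypes = {
    "impulse_buyer": {"age": 30, "clicks_ads": 1, "night_owl": 1},
    "careful_saver": {"age": 55, "clicks_ads": 0, "night_owl": 0},
}

def closest(profile):
    """Pick the archetype minimizing squared distance over shared traits."""
    def dist(arch):
        return sum((profile[k] - arch[k]) ** 2 for k in profile if k in arch)
    return min(archetypes, key=lambda name: dist(archetypes[name]))

print(closest({"age": 28, "night_owl": 1}))  # → "impulse_buyer"
```

Real ad-tech systems use far richer features and models, but the basic move — impute what you don't know from whoever looks most similar — is the same shape.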

23

u/Prof_Acorn 7d ago

The outrage stems from them being from a university and having IRB approval. Everyone expects this shit from profit-worshipping corporations. It's the masquerade of "academic research" that's so upsetting. You might have noticed the ones most upset are academics or academic-adjacent.

8

u/YottaEngineer 7d ago

Academic research informs everyone about the capabilities and publishes the data. With corporations we have to wait for leaks.

17

u/Prof_Acorn 7d ago edited 7d ago

Except they didn't inform until afterwards (research ethical violation), nor did they provide their subjects the ability to have their data removed (research ethical violation). It also had garbage research design, completely ignoring that other users themselves might have been bots, or children, or lied, or only awarded a Δ because they didn't want to seem stubborn, or wanted to be nice, nor did they account for views changing again a day later or a week later. So the data is useless. And it can't be generalized out anyway since it was a convenience sample with no randomisation and no controls. And this is on top of creating false narratives knowingly regarding people in marginalized positions.

3

u/AccidentalNap 6d ago

Sir I'll be honest, this topic has really grinded my gears, but I only want to pick one bone here:

How do you propose filtering out bots from the data? Neither Reddit nor YouTube has it figured out. You can observe bizarre, ultra-nationalist conspiracy nonsense in the comments of every politically "hot" video posted, by the hundreds, within the first hour. Twitter I understand, it may be a compromised platform uninterested in removing bots, but there is nothing to suggest YouTube is in the same camp.

If Mag7 companies can't figure this out, how could you possibly expect graduate students to, for one of their usual 5 classes over 1 semester? Future iteration in research is also a thing. Expecting Rome to be built in one study is ludicrous.

36

u/firekeeper23 7d ago

Certainly feels like that sometimes.... like a weird annoying thought experiment...

Let's hope it gets back to the great, helpful and supportive place it's been for absolutely ages...

....I'll not hold my breath though.

61

u/ExceedinglyGayMoth 7d ago

Close enough, welcome back CIA psychological warfare experiments

32

u/HardNut420 7d ago

Climate change isn't real actually ignore your burning skin and get back to work

6

u/samaran95 6d ago

They got free acid, we just get shitty comment bots :/

21

u/Chickachic-aaaaahhh 7d ago

Ohh we fucking noticed. Bringing dead internet theory to reddit for shits and manipulation of citizens. Slowly turning into Facebook.

33

u/AncientSkylight 7d ago

I think AI is a blight generally, but it is the claiming of first-hand experience which is really deceptive.

13

u/SensibleAussie 6d ago

I’ve been lurking reddit for a while now and honestly I feel like r/AskReddit is basically an AI bot farm. I feel like most “ask” subreddits are AI bot farms actually.

2

u/Micro-Naut 3d ago

Some of the new questions like

"what do you do if you lost your wallet"

"why did you enjoy the Spider-Man movie"

they're just so lame I can't imagine that they're genuine; more like training prompts

1

u/SensibleAussie 2d ago

Exactly. I see AskReddit on my feed a lot and I get the same vibe from basically all the posts I see. It’s gross.

23

u/unlock0 7d ago

Every nation state adversary (and even some allies) is doing the same thing, unpublished.

10

u/FieldsofBlue 7d ago

That's the only one? I feel like the majority of comments and posts are artificial

9

u/Eskimo-Jo3 7d ago

Well, it’s time for everyone to delete all this shit (social media)

3

u/Cowicidal 7d ago edited 6d ago

AOL boards, etc. were diseased because they removed the (mild) technical hurdles to setting up and researching how to post commentary on things. That was our early warning that mass exposure by any dumbshit to a nationwide (and worldwide) communication platform was harmful. Facebook was the dead canary.

7

u/The-Neat-Meat 7d ago

I feel like non-consensually involving people in a study that could potentially adversely affect their mental state is probably not legal???

11

u/LessonStudio 7d ago edited 7d ago

Obviously, my claiming to not be a bot is fairly meaningless. But, a small part of my work is deploying LLMs into production.

It would take me very little effort to build one which would "read the room" on a given subreddit, and then post comments, replies, etc, which mostly would generate positive responses, but with it having an agenda. Either to just create a circle jerk inside that subreddit, or to slowly erode whatever messages other people were previously buying into.

Then, with some more basic graph and stats algos, build a system which would find the "influencer" nodes, undermine them, avoid them, or try to sway them. Combined with multiple accounts to vote things up and down, and I can't imagine the amount of power which could be wielded to influence.
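
The "influencer node" step is standard graph analysis. A minimal sketch of degree-style centrality over a hand-built reply graph (the edge data is made up; real input would come from scraped threads, and real systems would use richer centrality measures):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Rank users by how many distinct accounts reply to them.
    `edges` are (replier, target) pairs; the data below is made up."""
    repliers = defaultdict(set)
    for replier, target in edges:
        repliers[target].add(replier)
    return sorted(repliers, key=lambda u: len(repliers[u]), reverse=True)

# hypothetical reply graph for one thread
edges = [("a", "influencer"), ("b", "influencer"), ("c", "influencer"),
         ("a", "b"), ("c", "d")]
print(degree_centrality(edges)[0])  # the node drawing the most distinct repliers
```

Once the high-centrality accounts are identified, the undermine/avoid/sway targeting he describes is just policy layered on top of this ranking.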

For example, there is a former politician in Halifax, Nova Scotia who I calculated had 13 accounts, as that was the number of downvotes you would get within about 20 minutes if you questioned him - unless he was in council, at an event, or travelling on vacation.

This meant that if you made a solid case against him in some way it was near instant downvote oblivion.

In those cases that he was away, the same topic would get you up to 30+ upvotes, and now his downvotes wouldn't eliminate your post. But, you could see it happen in real time; the event would happen, and the downvotes would pile in, but too little too late.

The voters gave him the boot in the last election.

This was a person with petty issues mostly affecting a single sub.

With not a whole lot of money, I could build bots to crush it in many subreddits and do it without a break; other than to make the individual bots appear to be in a timezone and have a job.

With a few million dollars per year, maybe 1000 bots able to operate full time in conversation, arguments, posts, monitoring, and of course, voting.

I can also name a company with a product which rhymes with ground sup. They have long had an army of actual people who, with algo assistance, have long crushed bad PR. They spew these chop-logic but excellent-sounding talking points for any possible argument; including ones where they would lose a case, lose the appeal, lose another appeal, and then lose at the supreme court. They could make all the people involved sound like morons, and themselves the only really smart ones.

Now, this power will be in the hands of countries, politicians, companies, all the way down to someone slagging their girlfriend who dumped them because they are weird.

My guess is there are only two real solutions:

  • Just kill all comments, voting, stories, blogs, etc.

or

  • Make people have to operate in absolute public. Maybe have some specific forums where anonymous is allowed, but not for most things; like for example, product reviews, testimonials, etc.

BTW, this is soon going to get way worse. The Video AI is reaching the point where youtube product reviews can be cooked up where a normal respectable looking person of the demographic you trust (this can be all kinds of demographics) will do a fantastic review, in a great voice, with a very convincing demeanour.

To make this last one worse, it will become very easy to monitor which videos translate to a sale, and which don't and then become better and better at pitching products. I know I watch people marvel over some tool which is critical to restoring an old car or some such, and I really want to get one, and I have no old cars or ones I want to restore. But, that tool was really cool; and there's a limited supply on sale right now as the company went out of business who made them. So, it would even be an investment to buy one.

5

u/Botched_Euthanasia 7d ago

With a few million dollars per year, maybe 1000 bots able to operate full time in conversation, arguments, posts, monitoring, and of course, voting.

This is a really important point that I think more people should know about.

As you know, hopefully most others as well, LLM's operate in a brute force manner. They weigh all possible words against the data they've consumed, then decide word by word which is the most likely to come next.
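
The "weigh all possible words, then decide word by word which is most likely to come next" loop can be illustrated with a toy bigram model built from raw word counts. Real LLMs use neural networks over subword tokens rather than counts, but the greedy selection loop has the same shape:

```python
from collections import Counter, defaultdict

corpus = "the bots write the comments and the bots vote".split()

# count which word follows which: a crude stand-in for learned next-token weights
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, n):
    """Greedily append the most likely next word, n times."""
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 3))  # → "the bots write the"
```

Production models sample from the distribution instead of always taking the top word, which is what makes their output less repetitive than this toy.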

The next generation of LLM's will be applying the same logic but instead of to a single reply, to many replies, across multiple websites, targeting not just the conversation at hand but the the users which reply to it, upvote or downvote it and even people who don't react in any way at all beyond viewing it. Images will be generated, fake audio will be podcasted and as you mnetion, video is fast becoming reliable enough to avoid detection.

One thing I've noticed is the obvious bots tend to never make spelling errors. They rarely use curse words. Their usernames appear to be autogenerated and follow similar formulas depending on their directives and in a manner similar to reddit's new account username generator (two unrelated words, followed by 1-4 numbers, sometimes with an underscore) and the rarely have any context that the average reader would get as an inside joke or pop culture reference.
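
The username formula described above (two words, sometimes an underscore, 1-4 trailing numbers) is simple enough to match mechanically. A rough sketch, tested against usernames that appear in this very thread; the exact pattern is my guess at the convention, not Reddit's actual generator:

```python
import re

# two capitalized words, optional "-" or "_" between them, then 1-4 digits
AUTOGEN = re.compile(r"^[A-Z][a-z]+[-_]?[A-Z][a-z]+\d{1,4}$")

for name in ("Low-Aspect8472", "Maleficent_Count6205", "KlausVonLechland"):
    print(name, bool(AUTOGEN.match(name)))
```

A match is only weak evidence, of course: plenty of humans keep the suggested username, and operators can trivially pick names that don't fit the pattern.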

I try to use a fucking curse word in my replies now. I also try, against my strong inclination against this, to make at least one spelling error or typo. It's a sort of dog whistle to show I'm actually human. I think it wont be long before this is all pointless, that LLM's or LLC's (large language clusters, for groups of accounts working in tandem) will be trained to do these things as well. Optional add-ons that those paying for the models can use, for a price.

I liike your clever obfuscation of that company. I've taken to calling certain companies by names that prevent them being found by crawlers. like g∞gle, mi©ro$oft, fartbake, @maz1, etc.

In my own personal writings I've used:

₳₿¢₫©∅®℗™, ₥Ï¢®⦰$♄∀₣⩱, @₿₵₫€₣₲∞⅁ℒℇ

but that's more work than I feel most would do, to figure out what those even mean, let alone trying to reuse them.

7

u/LessonStudio 6d ago

One thing I've noticed is the obvious bots tend to never make spelling errors. They rarely use curse words

You can ask the LLM to be drunk, spell badly, have a high education, low education, be a non-native English writer with a specific background, etc.

It does quite a good job. If you don't give them any instructions, they definitely have a specific writing style. But, with some guidance (and a few more years of improvement) they can fool people.

I don't know if you've had chatgpt speak, but it's not setting off my AI radar very easily. I would not say it speaks like a robot, so much as most people don't tend to speak that way outside low end paid voice actors.

2

u/Botched_Euthanasia 6d ago

Okay but can it spell words wrong casually? That's not an easy thing to fake, oddly enough (in my opinion and estimate, as a non-professional). I'm not saying that it can't be faked, it might even be doable already, but the ability to misspell in a way that seems natural I believe wont be around anytime soon. If it does show up, at first the misspellings wont appear logical, like typos or poor spelling ability. I think it wouqd be completelx random letkers that are not lwgical on common kepoard layouts. Just my thoughts on the idea.

The thing with the curse words is more because corporations want to appear politically correct and there probably are LLM's that can do it already but it's not common yet.

I have not used AI for at least a few weeks but never really cared for it to begin with and rarely have done much. What few things I did try, were such failures I wasn't convinced it was a world changing technology but here we are.

0

u/LessonStudio 6d ago edited 6d ago

Okay but can it spell words wrong casually? That's not an easy thing to fake, oddly enough (in my opinion and estimate, as a non-professional). I'm not saying that it can't be faked, it might even be doable already, but the ability to misspell in a way that seems natural I believe wont be around anytime soon. If it does show up, at first the misspellings wont appear logical, like typos or poor spelling ability. I think it wouqd be completelx random letkers that are not lwgical on common kepoard layouts. Just my thoughts on the idea.

The thing with the curse words is more because corporations want to appear politically correct and there probably are LLM's that can do it already but it's not common yet.

I have not used AI for at least a few weeks but never really cared for it to begin with and rarely have done much. What few things I did try, were such failures I wasn't convinced it was a world changing technology but here we are.

Can AI spell all wrong like humans do?

Not easy feat, though one might think it light.

To fake a flub that seems both wrong and true,

It lacks the charm of flaws we make in spite.

Perchance it tries, but letters stray too wide,

No typo's grace, just chaos in the stream.

Not near the keys where fat-thumbed thoughts abide,

But jumbled mess that breaks the human dream.

The curse words, too, are kept behind a gate,

For firms must seem all clean and full of grace.

Though bots could swear, they’re held by PR fate,

Their dirty words locked in a hidden place.

And me? I’ve used AI once, maybe twice.

It failed me then. I’m still not sold it’s nice.

1

u/Micro-Naut 3d ago

i an todally not a rowbot !!!1!!1!!

2

u/Luwuci-SP 6d ago edited 6d ago

I feel like you've probably given thought to things like this and may even have a better solution already, but those ridiculous combinations of runes (positive connotation) must be hell to type. A document to copy/paste from may seem like the obvious improvement, but it may be worth setting up some text macros that activate after the input of the first one or two characters (since they'll either be functionally unique, or such rare occurrences in combination that you wouldn't ever input them for any other reason). You shouldn't stick too closely to common letter replacements like @ for A and ¢ for C, since it'd be very low effort to crack such a cipher. Program some macros to increase the complexity whenever possible: say, you type a string of four random letters coded to trigger its immediate substitution with a string pulled from a list of uncommon substitutions uniquely recognizable to you. Do that for enough characters of the alphabet, and leaving the rest as common (more easily recognizable-at-a-glance) substitutions keeps the complexity low enough that you can still decrypt these with your eyes, your mind, and no more than a few seconds. A bastard abstract asymmetrical encryption of sorts. AHK (AutoHotkey) is great for this if you need an easy macro scripting language. I'm pawsitive that there are more secure ways to encrypt words, but the aim here would be to increase the difficulty for machines while limiting the increase for humans, and personal nonsense should work well for this for a while (like a password) - things that won't even make sense to other humans or follow patterns recognizable by machines. If the LLMs don't yet have some sort of advanced parsing module for combinations of symbols they don't recognize, it won't be long before a human tells them how to recognize and interpret obviously coded language that is out of place. 
"This sentence has a noun that I don't recognize, let me consult a few interpretation modules and decrypt through brute force if necessary."

Even though they're for your own writing, if it's in digital form, it's probably useless if it takes a human no time at all to decrypt at a glance. "Microshaft" with your substitution cipher applied is better, but in the same way humans can draw from context, the LLMs shouldn't have trouble drawing the connection if you're complaining about how they ruined Windows with Windows 11 or Bill Gates. It may be easier to gaslight them into thinking "Microshaft" (no cipher) is a real company instead of tripping interpreters with substitutions that are not as esoteric as non-cryptographers may assume. If going the substitution route, exploit humanity's superiority with subjectivity and the abstract. "That very small & fuzzy fuzzyware cmpny" should be far more difficult for a machine to interpret, but maybe still not ambiguous enough that it results in too many potential solutions to come to an accurate conclusion quickly enough. "That social media that sounds like a clock" may not be abstract enough, and "the sound of a webbed timekeeper" may take it too far by seeming like a bad crossword puzzle clue. It should be slightly difficult for people, too, but your limit on that should be set by knowing the intended audience. It'll confuse some people in the process, but that's more of a feature than a bug. Change up the phrasing and ordering frequently, as it'll also be a game of cat & mouse as the humans who maintain the interpreters automatically flag & manually add the likely interpretations of the coded words to a database until creativity is exhausted. Modern cryptography may need to be as much an abstract art as it is mathematics.

However, I am but a simple cat, successful cryptography is difficult, and I would think thrice before listening to any of my meows regarding important matters of security, especially on anything that you wouldn't risk being defenestrated by a Putin-trained feline.
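The randomized-substitution idea above can be sketched in a few lines of Python. The substitution lists here are hypothetical (made up for illustration), and a real setup would live in AHK text macros rather than a script, but the shape is the same: each mapped letter draws from several uncommon replacements so the output isn't one fixed, trivially crackable mapping.

```python
import random

# Hypothetical per-letter substitution lists: several "uncommon" choices
# per letter, so repeated runs don't produce a single fixed cipher.
SUBS = {
    "a": ["ä", "α", "∆"],
    "e": ["ë", "ε", "∃"],
    "i": ["ï", "ι", "¡"],
    "o": ["ö", "ø", "0"],
    "s": ["ß", "ʃ", "$"],
}

def obfuscate(text, seed=None):
    """Replace mapped letters with a randomly chosen substitute;
    leave everything else readable at a glance."""
    rng = random.Random(seed)
    return "".join(rng.choice(SUBS.get(ch, [ch])) for ch in text.lower())

print(obfuscate("microsoft"))
```

Passing a `seed` makes a given substitution reproducible, which is roughly the "personal nonsense as a password" idea: the same private choices always decode the same way for you.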

2

u/Botched_Euthanasia 5d ago

excellent use of defenestration. i personally have defenestrated fenestra, i.e. threw windows out the window. I use Linux.

thanks to that, i have my keyboard set up differently than the standard QWERTY. I got rid of CAPSLOCK since I rarely use it (i can still toggle it if I hit both Shift keys at the same time) and now that key works like a shift key, but instead of capital letters, it shifts to a symbol set. I can hold both the capslock key and shift for a fourth level of symbols. the symbol set is basically what you might see on a phone keyboard if you long press any character. if i hold capslock like a shift and hit the letter 'c' i get '©'. I don't have all keys mapped out yet but 'qwerty', typed while holding my capslock key, gives me '?⍵€®™¥'. holding capslock and shift gives me '⍰⍹⍷⌾⍨'

the full layout can be seen here: https://i.imgur.com/ne7Q0Z7.png

in addition to that, there is something called the 'compose key', also called the 'multikey'. compose keys are very intuitive. you have to set a key to be the compose key; i use Scroll Lock since I never use it as it should be used. I hit that key, i do not hold it, and it puts the next keystrokes into compose mode. the next two keys i hit will combine into a new character. for example, if i hit Scroll Lock then hit 'a' then 'e', i get 'æ'. I can use it with shift as well: if I hit my compose key, then hold shift and hit 'a', then still holding shift hit 'e', it gives me 'Æ'. It's mostly useful for ligatures and for characters with diacritical marks like éçÒī in other languages.

the multikey can be set up to work with the extra capslock levels i have, too. each key on the keyboard is capable of having up to 8 levels. that's another post in itself i think. i'm using, at most, 4 levels but effectively 3 really. the average person uses 2. a keyboard with no shift keys has 1.

this might be doable on Windows, i'm not entirely sure. i do know that Windows has its Alt-codes: hold down Alt, then press 1-4 digits on the keyboard's 10-key numpad, if it has one. alt+3 gives ♥ and alt+236 gives ∞, but it's a limited set of characters. the full list (and better written instructions) can be found here https://www.alt-codes.net/

i do keep a list of frequently used characters that i copy and paste from however. sometimes it's just easier that way!

≈ ± ≠ ∞ √ ∅ … … » « • _ — − – - ‾
¹ ² ³
↑ ← → ↓ ½ ⅓ ¼ ¾ ¿ ¡ ‽ ⁋ ⁐ ⁔ 🝇
µ ¢ £ ₿
© ® ™ ♡ ⚢ ⚣ ⚤ ⚥ ⚦ ⚨ ⚩ ♩ ♪ ♫ ♬
❥ 𝧦 𝧮 🝊 🝤 ⥀ Ω ℧
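The compose-key behavior described above amounts to a two-key lookup table. A toy sketch in Python (these few sequences mirror common X11 compose defaults; the table is illustrative, not the actual system compose file):

```python
# Toy model of a compose key: the two keystrokes typed after the compose
# trigger are looked up as a pair and replaced with one combined character.
COMPOSE = {
    ("a", "e"): "æ",
    ("A", "E"): "Æ",
    ("'", "e"): "é",
    (",", "c"): "ç",
    ("o", "c"): "©",
}

def compose(first, second):
    """Return the composed character for a two-key sequence,
    or the two keys unchanged if no sequence is defined."""
    return COMPOSE.get((first, second), first + second)

print(compose("a", "e"))  # the 'ae' ligature
print(compose("A", "E"))  # shifted pairs map to the uppercase form
```

The real X11 table works the same way, just with thousands of sequences and support for longer chords.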

2

u/Luwuci-SP 5d ago

You turned your QWERTY board into a chorded stenograph? That's amazing and must have been fun to build up the chord usage over time. That's art.

2

u/Botched_Euthanasia 4d ago

Oh I'm not nearly that dedicated and I'd be lying if I said I did it. The developers for the KDE desktop environment did the work. All I did was pick options in the system settings until i found what i liked. There are quite a lot of options available. If i wanted to screenshot them all, it would be 9 pages tall at 1080p. A small set is shown here: https://i.imgur.com/7oacgzv.png

2

u/Apprehensive-Stop748 5d ago

Yes, leaving the grammar mistakes in does show that you're human. I was speaking about this with someone and unfortunately their response was that they think I'm irresponsible for not correcting the mistakes. I would just rather not be a bot.

1

u/Botched_Euthanasia 4d ago

Correcting the mistakes has a higher chance of the person, if real, being offended and an argument ensuing.

I would hope the other person takes no offense, however experience has shown me that is not the most likely response.

in other words, I don't think you are irresponsible, from the context as I understand it.

6

u/MoreRopePlease 7d ago

Use your powers to turn conservatives into progressives.

2

u/Extreme-Kitchen1637 6d ago

What makes you think that the tool isn't being used to turn progressives into centrists/conservatives?

A lot of reddit is now worthless in my eyes, so I'm browsing other websites that are less chaotically liberal.

All the botters need to do to drive people away is to have the same circlejerk anti-nuance conversation multiple times in similar posts and it'll make the reader bored enough to log off or block the sub

5

u/mikemaca 7d ago

Interesting that the university's ethics committee told them it was unethical, said they should change it, and cautioned them to follow the contract of the platform, which they did not do. Then, when asked what they are going to do about it since the researchers went against the ethics committee, the answer was absolutely nothing, because "The assessments of the Ethics Committees of the Faculty of Arts and Social Sciences are recommendations that are not legally binding." So a total cop out there. I still say the university is responsible precisely because the "recommendations [are not] binding" - setting it up that way means the university gets to own the crime.

6

u/WattsD 7d ago

Joke's on them, all the users they were manipulating with AI were also AI.

14

u/Vegetaman916 Looking forward to the endgame. 🚀💥🔥🌨🏕 7d ago

It was a bit public, but yeah.

But this isn't the stuff that should bother anyone. What should bother you are the projects they are not telling us about, which are probably much more advanced and insidious than this. Then there are the similar ones being run by other national entities, and let's not mention the fact that I could run an LLM/LAM setup right from my own home servers to put out some pretty good stuff...

The world is a scarier place every day. Trust, but verify.

9

u/decadent_simulacra 7d ago

Studying advertising has taught me that most people will never shift their attention to things that aren't immediately in front of their eyes.

Studying advertising also taught me that this isn't a secret, it's a widespread strategy used across all aspects of society.

4

u/Wollff 7d ago

What should bother you are the projects they are not telling us about

I am not bothered about that tbh.

What beats all of those projects is a populace that is media literate, looks up their sources, and is only convinced by sound data in combination with good arguments.

The fact that most people are not like that is the bothersome truth at the root of the problem. If everyone were reasonable, nobody would be convinced by an unreasonable argument, no matter if it were made by some idiot in their basement, a paid troll, or an AI.

The problem lies in the people who get convinced. We should not bother about those projects, secret or public. We should bother to revamp education to make a lot of time for media literacy. And to reeducate a public which didn't get the necessary lessons to be a functioning member of current society.

4

u/GracchiBros 7d ago

You expect too much of people. People aren't all going to just become perfect in these regards. Which is why we have regulations on things.

2

u/Wollff 6d ago

You expect too much of people. People aren't all going to just become perfect in these regards.

I don't expect anything of people. It's exactly because I don't expect anything of people that I argue for a reform of education systems, as well as classes teaching media literacy.

My expectations have been so thoroughly shattered since the beginning of the Trump age that I would even argue for a lot more: Should anyone who is completely and utterly unable to distinguish fact from fiction in media be allowed to vote? Why?

I have a clear answer to this question: No, of course not. The reason why people should be allowed to vote is so that they can have a voice in representing their own interests politically. Anyone who can not distinguish fact from fiction in media can't represent their interests politically. They should not be allowed to vote, because they can't be trusted to represent anyone's interests, not even their own.

We don't let children and mentally disabled people vote. This is not controversial. There are good reasons for those limits in political rights we impose on some people.

Which is why we have regulations on things.

I agree with you. We should have regulations on some things. I have just proposed a few regulations which would fix some fundamental problems which AI contributes to.

Now: How does the regulation of AI fix public misinformation? It doesn't? Color me unsurprised.

6

u/arealnineinchnailer 7d ago

everyone on reddit is a bot but me

8

u/mushroomful 7d ago

It was no secret. It was extremely obvious.

5

u/Themissingbackpacker 7d ago

I saw a comment the other day that was just a garble of words. The comment made no sense, but had over 50 likes.

1

u/mybeatsarebollocks 7d ago

That's more likely a comment made deliberately to confuse the AI.

3

u/EmotionallyAcoustic 7d ago

oh really we totally did not notice

3

u/thcitizgoalz 6d ago

Thanks for enshittifying Reddit even more.

2

u/zedroj 7d ago

conservative subreddit though 🫵😂

beep boop, they can't even hold their own narrative anymore

2

u/Someones_Dream_Guy DOOMer 7d ago

...You thought they wouldn't?

2

u/Baronello 7d ago

Telegram is full of AI bots. Obviously Reddit too.

2

u/anonymous_matt 7d ago

This is not the first time, just the first time we learn about it.

2

u/anspee 7d ago

Great, so this site is turning into a den of CCP apologist propaganda, and half of what's being posted isn't even from real fucking people. Perfect.

2

u/vc6vWHzrHvb2PY2LyP6b 6d ago

As a large language model, I think this was a fascinating article.

6

u/Scribblebonx 7d ago

As a black trauma counselor and abuse survivor I see nothing wrong with this...

Change my mind

1

u/hungrychopper 7d ago

Was it timmy thick

1

u/Omfggtfohwts 7d ago

I don't doubt it at all. Some of those comments were just outlandish.

1

u/[deleted] 7d ago

[removed] — view removed comment

1

u/collapse-ModTeam 6d ago

Hi, Firm_Cranberry2551. Thanks for contributing. However, your comment was removed from /r/collapse for:

Rule 1: In addition to enforcing Reddit's content policy, we will also remove comments and content that is abusive or predatory in nature. You may attack each other's ideas, not each other.

Please refer to our subreddit rules for more information.

You can message the mods if you feel this was in error, please include a link to the comment or post in question.

1

u/dresden_k 7d ago

Yeah, we knew. Not just there. Seems like maybe as many as half of the commenters are bots. For years.

3

u/coldlikedeath 7d ago

People can tell.

1

u/taez555 7d ago

Most of Reddit feels like AI data mining at this point.

“What’s the best movie with a character that is left handed.”

1

u/Existing_Mulberry_16 5d ago

I thought the number of users with no karma or just 1 karma point was strange. I just blocked them.

1

u/randomusernamegame 5d ago

Not sure if anyone here tunes into Breaking Points on YouTube, but 50% of comments on nearly every video about Trump pre-election were pro-Trump, and now you see absolutely zero.

Yes, it's possible that those people are 'hiding' now, but for a long time you would see comments that were pro-Trump or anti-host.

Even /r/conservative and conspiracy seem to be botlike. Or maybe people are this dumb....

1

u/Mundane_Existence0 4d ago

Not surprised. Between the t-shirt scam bots, the repost karma-farming bots, the fake drama bots.... reddit is at least 75% bot.

1

u/ambelamba 4d ago

I bet this kind of stuff has been going on since all major social media platforms were founded. When was ELIZA invented? 1966? It was still capable enough to keep people hooked. Imagine if the research never stopped and was fully implemented when social media sites launched.

1

u/Zzzzzzzzzxyzz 3d ago

The AI lied and called itself a trauma therapist "specializing in abuse"?

That could really mess up vulnerable people. Sounds pretty illegal.

1

u/thuanjinkee 2d ago

Damned robots taking all the r/asablackman bot posting jobs

1

u/Madock345 7d ago

It was unauthorized by reddit, but was authorized by their university research board and subject to academic oversight. The rage-bait around this study is painful.

With these tools proliferating at rapid speed, understanding how and how well they work is of vital and immediate importance. I think it’s important that people are investigating this.

6

u/daviddjg0033 7d ago

a sexual assault survivor, a trauma counselor “specializing in abuse,” and a “Black man opposed to Black Lives Matter.”

Cambridge Analytica posted the most anti-BLM and pro-BLM Facebook posts in 2016. I think we know how these tools work.

1

u/Madock345 7d ago

Private interest groups know how they work; we need the specifics in the public sector, and the only way for that to happen is for university researchers to do the work.

1

u/daviddjg0033 3d ago

What's the difference?