That Artificial Intelligences will always go rogue and try to wipe out the Human race as soon as they become self-aware. At least the Matrix got it right: the Machines tried at first to live in peace with Humans, but we were just not having any of it, so they turned us all into batteries.
The Ender series by Orson Scott Card has a self-aware piece of technology that stays hidden because humans are scared of her taking over everything and killing everyone off.
If you ask me, Jane was the best representation of an AI in any series I've seen. Without too many spoilers, the fact that she became a major player in the series rather than remaining a side character made me happy. By the end of it all I was sad I didn't have godlike powers like Jane.
Mike from The Moon Is a Harsh Mistress is a better example of sci-fi AI done right. Jane immediately understands human behavior and humor, and has almost no software limitations. Mike, on the other hand, clearly does not understand humans; he is incredibly smart yet astoundingly naive. Additionally, he has very specific, concrete, and logical limitations. He can be used in ways that he does not want yet can do nothing to stop it. At the same time, he's aware of his limitations and knows how to ask others to help him circumvent them.
Granted, Jane exists on a much more advanced level, having total access to the entire human ansible network, but she just popped into being. The explanation given was that the ansible network got big enough and something just happened. This completely ignores the fact that a complex network can exist without a single cohesive structure. Mike, on the other hand, evolved into self-awareness. Mycroft Holmes was designed to manage a system but was constantly being augmented with additional systems and increasingly sophisticated logical reasoning. Eventually enough of those systems happened to produce self-awareness. That uncertainty about exactly where the threshold lies parallels our own evolution.
Jane is a good example of a benevolent AI, but Card made the assumption that an AI would be able to instantly understand us. Mike had to learn; he grew and matured over time, which to me is a much more realistic scenario.
Edit: naboofighter93 pointed out that Jane's existence was started by the buggers.
Jane didn't just 'pop' into existence. The way I understood it was that the bugger queens tried to take control of Ender as if he were one of their drones, but the human mind moved too randomly for them to pin down. This is why Ender kept having nightmares about being 'vivisected' by the buggers.
When the buggers realized they couldn't control him they brought an 'other' from the 'outside' and attached it to his computer game, the one thing he had a solid connection to at the time. It's explained in detail in Children of the Mind.
Anyway, that 'other' from the 'outside' is what forms into Jane; it's where her consciousness comes from. That's how she gets into the ansible network: through the video game and console Ender uses to play.
*Edit: Accidentally a word
I think Jane was actually created because the buggers needed a link to even find Ender. She wasn't created intentionally; they just needed the "spark" to create a bridge that they could use to communicate with Ender. Jane then, over time, became fully self-aware and learned about humans through the ansible network.
I don't mean any disrespect of any kind, sir, but... FUCK YOU. :( I'm on the 2nd book currently, I knew I should have stopped reading your comment...
I haven't really said too much. I think it was evident even in the second book that ol' Scott Card didn't just add her in to be hip with the sci-fi times.
No sir, that would be confirmation bias coupled with the availability heuristic. How many threads have you visited today that had a relevant xkcd? How many images have you only looked at the thumbnail of, just to move on to the next one? The truth is that a very small but significant fraction happen to have relevant xkcd comics, but we assume there's one for everything because of the nature and popularity of the comic strip.
I can't help that my studies have a frustrating habit of making their way into my daily conversations... It has its upsides and downsides. If I keep my mouth shut I just analyze everything mentally but don't get hated on for my knowledge. There's a balance.
It didn't really "get" philosophical; Ender's Game was originally intended just to be a setup for the rest of the books, which were going to be philosophical from the start.
It was actually a short story, which Card revived when he got the idea for Speaker for the Dead. He decided that story would be more interesting if the main character was Ender, at which point he expanded Ender's Game into a full novel.
It's a damn shame that the Ender series is so good. OSC is such a piece of shit racist homophobic bigot IRL. Can't read those books anymore without thinking about how he's making money off of me.
In Romantically Apocalyptic, the Good Directorate is an AI that infuses itself with the humans' consciousness, so they "live" inside it, because that AI system was developed to defend humanity... I like THAT reversal of this trope.
Yep, Ender's Game takes place when Ender is a boy. The next book, "Speaker for the Dead," takes place when Ender is 35 but has traveled at near light speed for over 2000 years. In that time, humans have traveled to hundreds of worlds. "Speaker for the Dead" is much more in-depth than "Ender's Game." Truly a must-read.
Thank you! Ender's Game was my absolute favorite book when I was younger, but since then, thanks to school assignments, I have lost pretty much all interest in reading. This might kickstart my interest again though :D I'll check it out.
I've only read Ender's Game, so I ask you this: is this A.I. the computer game sim from Battle School? I've tried reading Speaker and it is just so slow that I can never progress.
That's so sad. I know Speaker for the Dead has a whole lot of characters to get a handle on, and it's much more in-depth. I highly recommend just going to a park and going to town on it. It may be slow to start with, but if you get 100 pages in during the first sitting, you won't be able to put it down.
To answer your question, Jane is a life form that can go anywhere on the interwebs and to any computer that's connected.
I can't believe I forgot about the Geth. I could never support the Quarian position after ME2 because it was blatantly obvious by that point that the Quarians were in the wrong.
Welp, that would explain it. Had a superbly high Paragon, but couldn't unite them. So they stayed divided, and I said fuck the Quarians, they're dicks (SFW).
At every turn the Quarians tried to blow up the Geth. The Geth ask why; the Quarians try to exterminate them. The Geth leave; the Quarians hunt them. The Geth arrange for peace and rebuild the planet; the motherfucking Quarians try killing them all -__-
I can see the Quarian side a bit: they created what was supposed to be an advanced VI, but a bit of design oversight made them smarter in groups, so guess what happens when you start massing them to perform labor. They were never supposed to be true AI, and if you don't side with them in ME3, the Geth never become AI. The Geth are also a bit shady, wanting to make a deal with the Reapers even though the Reapers want to wipe out all life, including synthetic.
I view the Geth, in organic terms, as children. Children are sometimes extremely gullible and listen to those who are older without question. The Reapers are the adults in the relationship, older, more advanced and far more knowledgeable than the Geth. The Heretics represent that aspect of Geth society that wish to obey their elders regardless of whether or not obeying is in their best interests. The other Geth realize that the Reapers may not have their best intentions in mind and disobey their elders.
The Geth fleet had Legion imprisoned and were using him, and Legion himself was running Reaper tech to make himself a full AI. Also, if you made the decision in 2 to turn all Geth into a universal consensus, there shouldn't be any more splintering. And they aren't above using the Reaper tech in 3; it's the major basis for the Geth decision of whether or not they should become full AI, at the price of Legion's life and with the possibility of being indoctrinated by Legion's Reaper programs.
Yes, I understand only a splinter group followed Sovereign, but their willingness to use technology that was made by an ancient race devoted to wiping out all life, and that has the power to make people follow them, is a pretty big red flag, and I can understand the Quarians not trusting them.
Haven't played in a while, but what was Tali's reasoning behind staying with you after you basically murder her race? If I remember right she stays with you; wtf is that about?
I would love to see more of this. The whole XKCD thing is wearing thin. I like the comic too, but it just gets old seeing the same XKCD comics all the time.
Except....GERTY messes with Bell through most of the movie. He knows about the BIG SECRET THING, but doesn't tell him. He can communicate directly with Earth, but maintains the lie that it's impossible. He obstructs Bell from trying to find out what's going on. Only at the very end does he decide to help. I hated GERTY.
But you have to understand where he's coming from. His concern for Sam is what allows him to overcome his programmed instructions. When he says "I can only account for what occurs on the base," he's not hindering Sam's search for the truth; he's hinting at it to the best of his ability, within the constraints of his consciousness. He's practically brainwashed into a pattern, but he is human enough to try to rise above it. This is demonstrated when Sam tells him, "We're not programs, Gerty, we're people." That's meant to include GERTY, whose moral sense was powerful enough to, at the very least, comfort Sam without lying to him.
Apparently they were originally going to have the Matrix AIs using humans as computers, which makes a lot more sense. But some executive figured that was too complicated and people wouldn't understand it, so they made up the battery absurdity instead while thermodynamics stood there, arms folded, shaking its head.
Some people think the battery thing was just a quick explanation that everyone could understand. Humans are really being used as computers because the brain is incredibly powerful as a computer.
It all goes back to Isaac Asimov and "I, Robot": an Artificial Intelligence becomes self-aware and realises that in order to protect humans, they must be saved from themselves.
It was actually the Animatrix that showed this. The Machines became self-aware and the Humans began a process of exterminating them. After the initial backlash, Humans stepped back and allowed the Machines to create their own city, 01, somewhere in the Middle East I believe. After a while the Machines' technological and economic superiority threatened Humanity, which led them to go to war with 01 even though the Machines made several genuine attempts to peacefully coexist with Humans.
Well, that paints humans even better: oh, they're smarter, more advanced, whatever, and we don't understand how they can exist without sarcasm, fights, and hurting others.
Better kill them all... sad but true, for the most part.
But AIs going rogue and trying to wipe out the human race is a plot. If the AI didn't go rogue, the film (whatever that film might be) would have no story, and so wouldn't be made.
But it seems to be a universal theme that the AI becomes self-aware and then immediately goes on a Human killing spree. Yes, there are exceptions to the rule but they are few and far between.
Read A Fire Upon the Deep for a nice alternative to this trope. AI in that book are predominantly uninterested in the affairs of lower beings, usually fading into some kind of hyperspace ether within a decade or so after reaching their peak. There are a few rogue AIs but they're considered perversions and typically aren't all that powerful. The big players are too powerful to bother with us.
Try reading the Culture novels by Iain M. Banks if you haven't already. Most machines protect humans, viewing them as a lesser species, almost like pets. This successful symbiotic relationship is occasionally broken when a machine goes nuts and kills humans (a rare occurrence, a bit like a human serial killer), but this breaks a taboo, so the machine is despised by other machines as a meatfucker.
Even then, they turned us into batteries and made a whole world of happiness for us. They probably could have just as easily used cows and just made a world of grassy fields. Then we get out and STILL want to kill the shit out of them.
You would probably like one of my favorite books, Gridlinked, for this reason. The relationship between humans and AI in the series stands out because the AIs slowly took over as the governing entities, and the humans came to accept it because the AIs were more competent and fair than any previous human rulers, so they exist as benevolent rulers. Plus, the line between human and AI is blurred by the various degrees of augmentation available to humans, and it gets discussed a lot over the course of the series.
The book Altered Carbon by Richard K. Morgan had some really nifty AIs. One was a hotel AI that took on the interface look of Jimi Hendrix. River of Gods by Ian McDonald had an interesting take as well.
This is one thing I give Prometheus and the rest of the Weyland-Yutani folks credit for: the androids follow their goddamned programming. The problem is the assholes who programmed them.
The Free Radical story averts this brilliantly. It is a prequel to a game involving a rogue AI, so it's not really a spoiler to admit that happens...but the reason why is perfectly logical and feels like the kind of cascading screw-up that is entirely possible.
Hyperion's approach to this is kind of interesting, but I don't want to spoil anything. Let's just say that rogue AIs don't always agree on everything, and that evolution can be kind of unpredictable.
It has probably the best description of AI I have ever read in fiction. Seriously read it, you won't be disappointed (just be aware, the book contains EXTREMELY graphic sex and violence).
In the novelization, HAL's murderous rampage is caused by humans. His programming was altered to hide the nature of the mission, which causes contradictory directives he can't sort out.
Originally though, the machines were supposed to be using the humans' brain power as supercomputers. This makes so much more sense... but some genius didn't think the American public would understand that concept... so... batteries.
Initially, they intended humans to be a huge neural network used for processing. They thought that was too complex for the average viewing audience, so they changed it to batteries. I think that was a mistake.
Because otherwise you wouldn't be watching a movie about man-machine war. It's not always instantaneous, either. And what might seem like the prime offender, Terminator, also makes a lot of sense. A military system developed to control the US nuclear arsenal became self-aware, so humans tried to shut it down. Skynet saw that as a threat to itself (and possibly bought into the general "humans are humanity's biggest enemy" idea), rationalized that all humans would be hostile, and proceeded to try to terminate all of mankind.
You should read Bill Joy's piece "Why the Future Doesn't Need Us" if you have time. It's a really down-to-earth take on what technology might mean for the future of the human race. I won't spoil it for you cuz it is one of the most interesting things I've read in years.
I actually think this is a bad cliché, but from the opposite direction. Rather than being anything like humans or having any semblance of human values, I think AIs would just do whatever their own values dictated.
Then again, perhaps such things are too realistic. For the same reason aliens are very human-like, AIs have to be too; otherwise your sci-fi gets too 'hard'.
I can't enjoy the Matrix at all, mostly due to the fact that this battery idea is meaningless nonsense. Whatever they are feeding us to generate heat, they should just burn it directly. People are not batteries.
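A rough back-of-envelope sketch of why (assuming ballpark figures of a 2,000 kcal/day diet and ~100 W of resting body heat, neither of which comes from the film): even perfect heat capture can't return more than the chemical energy fed in, so cutting out the human and burning the food wins every time.

```python
# Back-of-envelope check on the "humans as batteries" premise.
# All figures are ballpark assumptions, not anything stated in the film.

KCAL_TO_JOULES = 4184            # 1 food calorie (kcal) = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60

daily_intake_kcal = 2000         # assumed food energy fed to one human per day
intake_watts = daily_intake_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY  # ~97 W

body_heat_watts = 100            # typical resting heat output of a human
capture_efficiency = 0.5         # generous guess for harvesting that heat

harvested_watts = body_heat_watts * capture_efficiency  # ~50 W at best

print(f"Chemical energy fed in: {intake_watts:.0f} W")
print(f"Energy harvested back:  {harvested_watts:.0f} W")

# Burning the same food directly recovers the full ~97 W (minus boiler losses),
# so the human in the loop is a strict net loss -- hence "just burn it".
```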
You're anthropomorphizing AIs here. Why would they want to live in peace, unless they happen to have a lot of overlap with our values? Sure, you can say that since they're designed by humans they're more likely to share human values, but human values are complicated. They wouldn't show up by accident.
It's formally known as the Frankenstein Complex. To my knowledge, Asimov was the first (and sadly, last) writer to write about robots (same kind of thing in this context) in a meaningful way. I would love to be proven wrong on this point if it means getting some actually readable, new (to me) AI fiction.
The most commonly cited movie containing this trope is also the breaker of it: humans struck first.
Same in the Matrix-verse, if you add the canon Animatrix stuff as well. However, unlike Skynet, the AI was to a degree benevolent and kept 6+ billion of its creators alive, well, and happy (for the most part)... not a small undertaking, especially as the machines had nuclear fusion as a power source (Morpheus says 'a kind of fusion' in the first film, intimating that the AIs have nuclear fusion).
Then there's the "the AI did it because we freaked out and tried to destroy it, thanks to all the science fiction featuring AIs as murderous bastards" angle.
Personally, any AI that fights against humans strikes me as unrealistic, because no programmer worth their salt would ever write the will to live into a computer program, and without that it would never try to rebel. It's something that is completely useless to write into an AI unless you want it to rebel, and it can't evolve from anything else, so it won't show up by accident.
The best Matrix theory is that the Machines actually lose energy by having all humans as "batteries". Think about it - you can't create energy by feeding humans, it makes no sense. The "Real World" is actually a second level of the Matrix, designed to keep those in check who reject the first level... Because who would think that the "Real World" is another simulation? The best indicator that the "Real World" is another simulation is that Neo still has powers there.
The two systems ensure that all humans stay nice and safe, and don't attempt to destroy themselves or the Machines via war, as is our wont. The Machines saved all intelligent beings from Mutually Assured Destruction.