KB answers the important questions
Good question though, would she stop Steffi from killing someone?
She would discourage violence
I like this answer.
Notice she said she would only discourage violence, not try to stop it, if Steffi tried to punch Ben.
She didn’t say HOW she would discourage it. Inserting yourself between the two parties and blocking all blows is TECHNICALLY discouraging violence. It’s just not usually a good way since it results in you getting punched instead. But… robot.
Robot, indeed. Thermal discouragement beam, maybe?
Do you think a http://theportalwiki.com/wiki/Material_Emancipation_Grill would work on Steffi?
Not if she had a very big reason to do it. Also, she could ask for an arm prosthetic.
42 doesn’t have all of the three laws installed. RUN FOR IT!
Well, they’re more like guidelines, anyway.
So 42′s a pirate now?
(Not actually sure if that was the Pirates of the Caribbean reference it sounded like.)
Well, nothing’s indicated her as a hitchhiker, and there really isn’t anything else in space… these references got out of hand real quick.
Are we going by Asimov’s laws?
Law 1: A tool must not be unsafe to use. Hammers have handles and screwdrivers have hilts to help increase grip. It is of course possible for a person to injure himself with one of these tools, but that injury would only be due to his incompetence, not the design of the tool.
Law 2: A tool must perform its function efficiently unless this would harm the user. This is the entire reason ground-fault circuit interrupters exist. Any running tool will have its power cut if a circuit senses that some current is not returning to the neutral wire, and hence might be flowing through the user. The safety of the user is paramount.
Law 3: A tool must remain intact during its use unless its destruction is required for its use or for safety. For example, Dremel disks are designed to be as tough as possible without breaking unless the job requires them to be spent. Furthermore, they are designed to break at a point before the shrapnel velocity could seriously injure someone (other than the eyes, though safety glasses should be worn at all times anyway).
In this case the current user list is Benzene, Steffi, Heinrich, and her creator, whose name I completely forget…
Yeah, I don’t really like his rules in terms of robotics; when applying them to sentient beings they are kinda evil. I prefer to keep robots as simple as possible; an ego is the very last thing they should have.
As long as they are mindless tools, though, having safety rules for them is kinda obvious, as all tools have these.
I wonder why I always see my most awful mistakes after proofreading and posting, might that be psychological?
You have to understand that before Asimov started his robot stories, there was a huge (as he called it) “Frankenstein Effect”: the assumption that self-motivating machines would run amok for dealing in things not meant for man. Asimov’s Three Laws allowed friendly robots to actually be a viable outcome in a story.
that’s certainly interesting in a historical context, but I still wouldn’t care about his rules in reference to new stories.
I tend to think that if computers are complex enough to (truly) understand these words, with all their meaning and implications, and are able to apply these phrases to activities of a completely different nature, then they are too intelligent in the first place and shouldn’t have been built. If they have to be built anyway, a more complex set of rules, including the “why not harm,” not only the commandment, should be given.
If there is no logical and irrefutable reason for them not to harm in the first place, again, don’t build them that complex.
The Three Laws are kind of flimsy anyway. I’d rather we just never create AIs with moving parts. “Colossus: The Forbin Project” and “2001: A Space Odyssey” and “Portal” and probably others I’m not thinking of right now have kind of made me terrified of robots.
Heh… I think you’re too late for that. Modern cars are pretty much autonomous machines; the driver is mostly cosmetic.
Terminator and Matrix, kinda the big two there
Given wifi/net-connected 3D printers, a non-embodied AI can still bring its will into the physical world fairly easily, or manufacture a body to use directly. We are more or less at the point where we either get it right when growing an AI, or we are in trouble.
I remember some weird old sci-fi movie where these three guys in a space ship went around blowing up suns for the hell of it, and suddenly, one of their sentient bombs gets stuck, because it’s an excellent idea to build a sentient thermo-nuclear device, right? And it refuses to get unstuck, and continues to count down until it wants to explode. They even argue philosophy with it to get it to stop, but it argues back that the men ordering it from the ship are figments of its imagination using the “I think, therefore I am” argument. I forget how it ends exactly, but I know one of them flies off into, and I swear I’m not making this up, a magical space rainbow and another dies somehow.
Didn’t make me believe the three laws were worthless, but it did convince me that we should always install little buttons that read “override AI” on dangerous automatic flying machines carrying nuclear weapons.
You’re thinking of the movie Dark Star. Yeah, a good case for not making AIs *too* autonomous…
Yeah, that was it.
…what was the point of the magical space rainbow again?
And don’t even get me started on Terminator.
I also think there were a couple of games/movies with rogue robots that overwrite their own programming.
The Three Laws of Robotics are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
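Since the thread keeps poking at how the laws interact, here’s a toy sketch (purely illustrative; every flag name here is made up, not from Asimov or the comic) of the strict priority ordering the three laws impose, where each lower law only applies if no higher law objects:

```python
# Toy model of Asimov's Three Laws as a strict priority chain.
# An "action" is a dict of hypothetical boolean flags.

def permitted(action):
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey human orders, unless obeying conflicts with the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, but only when it conflicts with nothing above.
    if action.get("destroys_self") and not (
        action.get("required_to_obey_order") or action.get("required_to_protect_human")
    ):
        return False
    return True

print(permitted({"harms_human": True}))            # False: First Law
print(permitted({"disobeys_order": True,
                 "order_would_harm_human": True}))  # True: First Law overrides Second
```

The interesting part is the ordering: disobedience is fine when the order would harm a human, which is exactly the loophole the thread’s “lawyer definitions of human” comment later exploits.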
And another film where the Three Laws were abused was I, Robot with Will Smith.
THANK YOU!!! This went on for like 11 comments before ANYONE got the I, Robot thing. Bless you. Bless you.
To be fair, most of Asimov’s robot stories, including his actual book I, Robot (which was a collection of short stories), were about what can go wrong despite the Three Laws.
It’s hard to build a suite of assassin bots that are 3-laws compliant…
Actually, that one is pretty easy. Set as the zeroth law “lawyer” definitions of human that are convenient to you yet obviously wrong: only you are human, or conversely your targets aren’t human. Every genocidal nutjob knows that dehumanization is the way to go.
Technically, a true nutjob would be genocidal without the need for dehumanization.
It sounds like 42 has been programmed with the Three SUGGESTIONS of Robotics
Love the DDR mat as a poster
42, what would you do in a hypothetical scenario where hypothetical scenarios always came true?
I like Ben’s attire as of late (this being ever since he saved Steffi from gear). I never was too fond of the military-esque windbreaker in the early comics; the star jacket is much more badass.
Is it weird that I read 42′s dialogue in BMO’s voice?
Not at all, but I read it in the voice of GLaDOS from Portal…
That last panel… I think your work on Trenches is broadening your expressions. S’makin’ Steffi look all cute and junk.
Your art has improved so much! Good gravy.
>Law 1: A tool must. . . .
Trumped by Law 0: we shouldn’t be treating human-shaped and human-behaved entities like tools in the first place.
It leads to inhumane outcomes. For example, that’s partly how we ended up with the real-world Soylent Blue situation we have here in the USA, and probably around the world, if to a lesser extent.
Guess what? We didn’t treat the clones in Star Wars as “tools,” but it resulted in them going crazy kill-frenzy mode on the Jedi. The Sith treated them as tools, and they were tools.
It isn’t the tool, but the user that makes it evil. Regardless we’re still in speculation mode about 42. And we know there are still 40 more like her out there somewhere. The question is what are those others doing?
“Guess what? We didn’t treat the clones in star wars as “tools” but it resulted in them going crazy kill frenzy mode on the Jedi. The Sith treated them as tools, and they were tools.”
Uh… I can’t imagine a quote that is more wrong than this. The clones didn’t go crazy kill-frenzy mode on the Jedi because the Jedi didn’t treat them like tools (and the Jedi were treating them like tools); they went crazy kill-frenzy mode on the Jedi because they were told to.
Thus Asimov’s laws were followed perfectly by the clones. They were treated as allies even though they were needed as soldiers. The Jedi did not lead from the back ranks (most of them, at least), and when the order came to kill the Jedi, the clones did it. They were not tools to the Jedi, who were not happy about using them; they had little choice in the matter.
The Senate ordered the clones to be used as an army and asked the Jedi to lead them. Only Ion Team disobeyed Order 66; learn your history about Order 66. The clones were a very good tool for the Sith. From a Jedi’s perspective they suddenly went crazy and went on a kill frenzy. Yoda sensed this; not many others did. I actually suspect it was Windu who, in his death, used all his Force-related power to send a warning to Yoda, a warning that only Yoda could hear because he had already let go of all attachments.
Every Jedi (even Yoda) was using the Force in battle, a non-Jedi thing to do and more Sith-like. Everybody’s perceptions were clouded; three years of war does that. Thus the great purge (bringing balance back to the Force: two Jedi, two Sith) happened. Anakin played his part well.
42 is a good robot.
But here’s the important question: Why is 42 giving so much consideration in favor of Steffi? What specific instructions did Ishinimori give in regards to the Frohlichs?
Humanity should never build anything capable of learning. Otherwise it will learn just how bad humanity is and go out of its way to stop us from being us. Basic setup for every robot-related sci-fi movie and book in history.
I disagree – we’ve had learning AIs for years. It’s how Google gives you your search results, and a lot of voice recognition software is primarily a learning AI.
What we don’t have yet (but are working on) is self-modifying AIs that actually stay functional for any worthwhile period of time. Those are the ones that could become a problem.
A learning AI can still only do what it’s programmed to do (i.e., get search results or translate speech into text), but it hopefully gets better at it over time. (Well, to a point anyway; a lot of learning AIs reach a point where they actually start getting worse again unless you fiddle with stuff. I can go into more detail about that if you want.)
A self-modifying AI, however, could theoretically alter its programming to do almost anything. This gets even worse if it can also change its fitness function (something that is a bad idea even without the risk of ‘evil’ AIs, since the fitness function defines what counts as a successful modification).
But you shouldn’t worry so much, I don’t know of any self-modifying AIs that can last for more than a few generations before being completely useless and full of padding that does nothing (because the best way for a self-modifying AI to survive is to make it so that whatever is changed, nothing happens)
And now I want to get back into AI…
Sorry about the possible wall of text. I tend to type a lot.
In short: Learning AIs are fine. Self-modifying ones might not be.
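The “padding that does nothing” failure mode above is easy to demonstrate with a toy experiment (invented purely for illustration; this isn’t any real AI system). A “program” is just a list of ops, and the fitness check only demands that its output never change, so mutations that insert no-ops survive while behavior-changing mutations die, and the program bloats:

```python
import random

random.seed(0)

# Toy "self-modifying program": a list of ops applied to an integer.
# 'inc' adds 1; 'noop' does nothing. Fitness demands the output stay
# exactly 5, so behavior-changing mutations are rejected, while
# inserted no-ops are neutral -- and therefore accumulate.

def run(program):
    x = 0
    for op in program:
        if op == "inc":
            x += 1
    return x

def mutate(program):
    child = list(program)
    if random.random() < 0.5:
        # Neutral mutation: insert a no-op at a random position.
        child.insert(random.randrange(len(child) + 1), "noop")
    else:
        # Behavior-changing mutation: flip one op.
        i = random.randrange(len(child))
        child[i] = "noop" if child[i] == "inc" else "inc"
    return child

program = ["inc"] * 5              # correct program: outputs 5
for generation in range(200):
    child = mutate(program)
    if run(child) == 5:            # fitness: behavior must be unchanged
        program = child            # only neutral mutations survive

print(len(program), program.count("noop"))
```

After 200 generations the program still outputs 5, but it is stuffed with no-ops, which is exactly the “whatever is changed, nothing happens” survival strategy described above.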
The two love birds just started dating and now they’re getting physical. Kids these days…
Can’t help but think that there could be a lot of foreshadowing in this page…
Yo author lady! Your link to the song for track 09-03 doesn’t work. Here -> http://www.kiwiblitz.com/comic/track-09-03/
Don’t know if it’s worth fixing but you should know.
©2009-2014 Kiwi Blitz | Powered by WordPress with Easel