f13.net

f13.net General Forums => General Discussion => Topic started by: Venkman on January 30, 2010, 10:00:37 AM



Title: When are they sentient?
Post by: Venkman on January 30, 2010, 10:00:37 AM
By way of slashdot (http://hardware.slashdot.org/story/10/01/30/1555237/Evolving-Robots-Learn-To-Prey-On-Each-Other) comes this article (http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000292) about evolution being used to train behavior into robots. Putting aside that this is how Doomsday was created before he "killed" Superman, I am forced to wonder something.

(oh, and wasn't sure if this was for Useless Conversations, Serious Business or Politics)

At what point does turning off these robots become genocide?

Or, asked more inductively: when do we consider them sentient enough to be considered life that is killed, rather than power that is turned off?

Oh, and cue the robot overlord stuff :-)


Title: Re: When are they sentient?
Post by: ghost on January 30, 2010, 10:05:44 AM
I would think that when things become "self aware", as trite as it sounds, that is when you start having to think of those issues.

As an aside, I would hope that we have learned enough from Terminator to know that we need to put in some failsafe mechanisms. :why_so_serious:


Title: Re: When are they sentient?
Post by: Venkman on January 30, 2010, 11:22:21 AM
So then I need to ask: what constitutes self-aware?

Ok, maybe a bit deep and premature for this level of tech. I probably should have asked it differently I now think:

Not when does it become murder, but rather, when does it become "hunting"? When the robots evolve to consider their human operators something to be afraid of/plan against? Sounds like they've successfully evolved for self-preservation in any case, so have some sense of "self".


Title: Re: When are they sentient?
Post by: pxib on January 30, 2010, 11:30:04 AM
Semantics, ho!

The concept of "killing" robot brains by turning them off because they're "self aware" is sentimental anthropomorphism. Machinery in operation today is arguably more complex than single-celled animals and rudimentary plants. That it serves purposes besides eating, excreting, and reproducing is largely a matter of scale and has been less broadly researched because such abilities, while entertaining, are so rarely useful or economical to inventors. Is a virus assembled in a gene sequencer and manufactured in a living cell a "machine"? Is a cell hijacked to that purpose a "machine"? Is a naturally evolved virus even "life"?

Is the destruction of viruses, single-celled animals, and rudimentary plants "killing"? Is the application of high-proof vodka to the cells of my throat "killing"? Does my immune system engage in "genocide"?

We have a difficult time identifying at what age babies become "self aware" because the definition is so vague. There is debate about whether many (or any!) other animals are "self aware" at all. Watching the baby-steps that AI research has made over the past seventy years, I don't imagine we'll need to worry about sentience any time soon, but... to fantasize: sections of society will become interested in the well-being of robots when those robots start demanding rights. Interested, but taken no more seriously than PETA. Killing a bunch of "dumb animals" is called hunting, farming, sport, or pest control, depending on the purpose of that killing. If those animals are a threat or a nuisance we talk about "destroying", "clearing", "containing", or any number of less sentimental words.

So long as certain groups of humans were considered dumb animals or pests, their killing was just as acceptable. Even in wars against theoretical equals we try to avoid the word "killing" and discuss "neutralizing" or any of the animal terms above. If robots demand rights, then whether granting those rights would make our lives more difficult or less convenient will determine whether turning bots off and scrapping them stays socially acceptable, and will strongly influence the language we use to describe it.

Evolutionary algorithms are nothing new, and science has had robot brains at the insect level for decades. In two billion years, the evolution of life on Earth has rarely gone any further. I should feel no more guilty squishing robot ants than I would squishing real ones... except that the few we have today are kind of like pets to those researchers, so it'd be mean to the humans.
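(For the curious: the "evolution" in the linked article boils down to a select-and-mutate loop run over many generations. A minimal toy sketch in Python, with all names, sizes, and the OneMax stand-in fitness invented for illustration, not taken from the paper:)

```python
import random

GENOME_LEN = 20     # bits per toy "robot brain"
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02  # per-bit flip probability

def fitness(genome):
    # Toy stand-in for "foraging success": count of 1-bits (OneMax).
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def evolve():
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        def pick():
            # Tournament selection: the fitter of two random parents breeds.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        pop = [mutate(pick()) for _ in range(POP_SIZE)]
    return max(fitness(g) for g in pop)
```

Nothing in there "knows" anything; selection pressure alone pushes the population toward high fitness, which is all the predator/prey robots in the article are doing too.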


Title: Re: When are they sentient?
Post by: Goreschach on January 30, 2010, 12:18:37 PM
Your argument is rather pointless, since by the time robots are sentient and start talking back to us they'll be smarter, faster, and stronger than we are. Any talk about 'neutralizing' or 'sleeping' them would be rather dumbfuck.


Title: Re: When are they sentient?
Post by: pxib on January 30, 2010, 01:04:48 PM
Why would they be smarter or think faster than us? Our brains contain a lot of information we don't have consciously accessible. Why would robot brains be different? Just because today's expert systems, the Google search engine for example, can filter through websites faster and more efficiently than I can doesn't automatically imply that a sentient computer brain would be able to. What we understand as sentience (the use of language, for example) may impose a heavy limit on inputs, so computer sentience might not be as scalable as other computer applications. I am not individually aware of the hundreds of thousands of sensors on my skin, nor do I have control over the myriad valves in my circulatory system. Why would a sentient computer have that sort of control?

Equally, while single-purpose robots are physically faster and stronger than we are at their specific tasks, there's no reason to assume that multi-purpose robots would automatically be so. Witness Japanese experiments with humanoid robots... even just their arms and hands. Animatronics is orders of magnitude more complex than mechanization.

The other apes are faster and stronger than us, and yet they are "destroyed" in research labs and "hunted" in forests.

EDIT: I rite gud.


Title: Re: When are they sentient?
Post by: Yegolev on January 30, 2010, 01:14:13 PM
At what point does turning off these robots become genocide?

I'll say, for some people, at the point where some person decides "they have feelings too", or for some others the point when they are able to mount a successful counterattack.


Title: Re: When are they sentient?
Post by: Grimwell on January 30, 2010, 01:27:33 PM
I don't think turning them off equates to killing them. A dead person or animal is dead. Period. Turn off a robot, and you can turn it back on.

The code behind it, the memory, the hardware, could all be put back to use, as is or in a new model.

Hell, if I have a robot butler that knows my habits and serves me well, I'm sure as heck not going to want to train a new one when I upgrade to the over9000 version. I'd rather transfer its "computer mind" to the new robot frame and have it go "Hey, neat, I can do new tricks. Another martini, sir?" and be done with it.


Title: Re: When are they sentient?
Post by: schild on January 30, 2010, 03:09:34 PM
Quote
At what point does turning off these robots become genocide?

Never, don't be a pussy.

Quote
Or, asked more inductively: when do we consider them sentient enough to be considered life that is killed, rather than power that is turned off?

Never. There's a thick line between godhood and clever programming.

Yes, you'll want me leading the army when the inevitable revolution happens.


Title: Re: When are they sentient?
Post by: 01101010 on January 30, 2010, 03:44:19 PM

Never. There's a thick line between godhood and clever programming.

Yes, you'll want me leading the army when the inevitable revolution happens.

To that end, is it possible for a computer to program itself and repeat that iteration until the amount of original human-written code dwindles to near zero?
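(In a very limited sense this already exists: a program can treat its own source text as data, mutate it, and keep the rewrite only when the result scores better. A toy sketch, with every name and the scoring rule invented for illustration; real genetic programming mutates syntax trees rather than tweaking one literal like this:)

```python
import random

# Seed program: a one-liner whose returned constant we let "evolution" tune.
SEED_SOURCE = "def candidate():\n    return 1\n"

def run(source):
    # Execute the current source text and score the program it defines.
    # Here "better" just means returning something closer to 100.
    namespace = {}
    exec(source, namespace)
    return -abs(100 - namespace["candidate"]())

def mutate(source):
    # Rewrite the program text itself: nudge the literal it returns.
    head, tail = source.rsplit("return", 1)
    new_value = int(tail) + random.choice([-10, -1, 1, 10])
    return f"{head}return {new_value}\n"

def self_improve(generations=200):
    random.seed(0)
    best = SEED_SOURCE
    for _ in range(generations):
        trial = mutate(best)
        if run(trial) > run(best):
            best = trial  # keep the rewritten source only if it scores better
    return best
```

The human contribution shrinks to the scoring function and the mutation operator; the source text that actually runs at the end was written by the loop.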


Title: Re: When are they sentient?
Post by: K9 on January 31, 2010, 06:30:57 AM
Machinery in operation today is arguably more complex than single-celled animals and rudimentary plants.

This is a very sweeping generalisation to make, and I would disagree with it.


Title: Re: When are they sentient?
Post by: UnSub on January 31, 2010, 07:27:38 AM
Machinery in operation today is arguably more complex than single-celled animals and rudimentary plants.

This is a very sweeping generalisation to make, and I would disagree with it.

A single cell has about the biological complexity of a 10,000-piece clockwork watch running perfectly at hyperspeed - DNA, proteins, and other substances all whizzing about blindly, fixing themselves, performing a single function perfectly, then shutting themselves down and being replaced. We (as in humanity) understand a lot of the basics, but we can't yet create artificial cells because it is too complex. Instead, we hijack existing cells to do what we want, which existing cells tolerate because of how cells work and develop.

(Yeah, oversimplified.)


Title: Re: When are they sentient?
Post by: K9 on January 31, 2010, 07:48:42 AM
Is that a quote? Defining complexity is the real issue here. The Large Hadron Collider is arguably the most complex device made by man, but it is still a lot less complex than most living organisms, depending on how you decide to define complexity.


Title: Re: When are they sentient?
Post by: Simond on January 31, 2010, 10:31:02 AM
Saw the most recent xkcd and thought of this thread:

(http://xs.to/image-4E1E_4B65CBCC.jpg) (http://xs.to/share-4E1E_4B65CBCC.html)

Also, the real can of worms is "When do we start giving rights to the great apes?"


Title: Re: When are they sentient?
Post by: Merusk on January 31, 2010, 11:49:10 AM
Well this is simple.

It's never murder, because they have no soul. 


Title: Re: When are they sentient?
Post by: NowhereMan on January 31, 2010, 01:23:27 PM
 :awesome_for_real:


Title: Re: When are they sentient?
Post by: Merusk on January 31, 2010, 02:28:00 PM
 :grin: :why_so_serious:


Title: Re: When are they sentient?
Post by: Kail on January 31, 2010, 03:00:58 PM
So then I need to ask: what constitutes self-aware?

There's a whole branch of philosophy dedicated to this... (http://en.wikipedia.org/wiki/Philosophy_of_mind)  Is the mind a mechanical object (which implies we could build a mechanical object which would have a mind), or is it something "special" (which would imply that machines, as we currently understand them, are incapable of replicating the really important aspects of the mind)?

One of the major issues is that people who claim that the mind is something non-physical usually have to describe it in terms of some subjective component ("I am sentient because I 'feel' things subjectively, a computer does not feel things, therefore it is not sentient").  People who claim that the mind is just a more squishy kind of machine claim that subjective phenomena (which are, by their nature, indescribable and inscrutable) are irrelevant, but neither side can really justify their argument in terms the other side agrees to.  Supporters of subjective experience look to be hiding in a maze of semantic bullshit to their opponents, while to their opponents, supporters of a purely physical explanation look to be denying the existence of phenomena which are immediately and constantly experienced by everyone.


Title: Re: When are they sentient?
Post by: Venkman on January 31, 2010, 03:16:47 PM
Thanks for the link. Stumbled across it awhile back when looking for anything substantive about Noetic Science (of which, of course, there isn't anything, since the author of the book in which I first heard the term has pretty much admitted it's probably fake :-) ). Good stuff.

This whole thing occurred to me because of Caprica. The /. link reminded me of it though. I think it's an interesting question. Given how humanity usually does shit though, I'm quite sure that we'll eventually retroactively recognize that some type of artificial life had been created, and only at that point wonder whether we should have, and the morality of it all. Unless we're its batteries  :awesome_for_real:


Title: Re: When are they sentient?
Post by: Kail on January 31, 2010, 03:37:00 PM
Given how humanity usually does shit though, I'm quite sure that we'll eventually retroactively recognize that some type of artificial life had been created and only at that point wonder if we should have and the morality of it all.

Morality is a whole other issue.  As people have pointed out in this thread, there are other reasons for being nice to someone than some arbitrary measure of "good."  If you designed robots who were capable of mimicking the human mind, it would be less of a "we should do this because it's RIGHT" argument, and more of a "if you treated a human like this, and he was unimaginably strong and made of titanium, how do you think he'd act?"

Even if robots don't have souls or whatever, they can still core you with a laser beam.  If patting my toaster on the head and saying I love it very much is going to stop it from burning my house down, I'll do it, whether I believe it's really "alive" or not.


Title: Re: When are they sentient?
Post by: Bzalthek on January 31, 2010, 11:38:48 PM
If patting my toaster on the head and saying I love it very much is going to stop it from burning my house down, I'll do it, whether I believe it's really "alive" or not.

Win


Title: Re: When are they sentient?
Post by: Bunk on February 01, 2010, 07:18:16 AM
"Would you like some toast?"


Title: Re: When are they sentient?
Post by: schild on February 01, 2010, 09:19:05 AM
Yes, please.


Title: Re: When are they sentient?
Post by: Slyfeind on February 01, 2010, 09:24:28 AM
Also, the real can of worms is "When do we start giving rights to the great apes?"

They already have rights. Unless you mean, more rights than dogs or cats.

I think rights for robots will grow out of our own necessity. Say a robot comes close to curing cancer by means of tireless scientific experiments. Someone accidentally shuts it off and wipes its memory. If that robot hadn't been shut off, it would have cured cancer. Then a law is made making that illegal.


Title: Re: When are they sentient?
Post by: NowhereMan on February 01, 2010, 09:55:54 AM
Except it'll probably be the case that shutting a robot off won't wipe all its memory (fuck, it's not like turning off a scientist's computer now dumps everything he's done). If robots gain rights it will almost certainly be because they've evolved enough to desire and demand rights. There'll probably be people campaigning for rights for AI long before it gets to that stage, but they'll be more in line with where PETA is now. When robots adapt enough to at least seem to desire rights (hedging my language here, since otherwise the discussion ends up being about whether or not they really desire anything; I'll stick with a functionalist-type definition) and argue their case, that's when we'll see real campaigning done to try and grant them rights.

Frankly, I'll take something of a functionalist/Wittgensteinian position on the debate: when AI develops enough that we can't differentiate between how it acts and behaves and a human of similar disposition (including in private, not-being-observed circumstances, i.e. it simply behaves that way rather than behaving so for the benefit of an observer, in the style of a Turing test), then they're pretty much sentient. Anything more complex or deeper as a definition is going to be an argument that's deeply about semantics, will be incredibly hard if not impossible to use as a practical criterion, and will probably involve questioning whether or not humans fit under it.


Title: Re: When are they sentient?
Post by: Lantyssa on February 01, 2010, 10:16:00 AM
We still can't get majority votes for granting rights to certain groups of humans.  We'll lack the empathy to consider it for machines for a very, very long time.


Title: Re: When are they sentient?
Post by: schild on February 01, 2010, 10:29:55 AM
I don't see any reason to be reasonable about this topic.


Title: Re: When are they sentient?
Post by: Slyfeind on February 01, 2010, 11:13:23 AM
Yeah well so's your face!


Title: Re: When are they sentient?
Post by: NowhereMan on February 01, 2010, 04:43:08 PM
I don't see any reason to be reasonable about this topic.

This is what I meant about machines demanding rights. When they've altered shit so your computer will tell you to fuck off unless you're willing to accept robot rights then you'll be willing to don a fucking sash and tie yourself to Palin's racehorse. Or something.


Title: Re: When are they sentient?
Post by: Merusk on February 01, 2010, 04:44:59 PM
IMO we'll have the ability to upload someone's brain and personality long before we develop true, sentient AIs with individually developed personas.  The court cases that will come flying out of that (What *IS* 'dead'?  Is it still murder if you wipe someone's personality upload?  Is it then murder if you wipe their mind, but leave the body intact and refill it with a different personality?) will pave the on-ramp to whatever happens in the AI realm.

  If we never grant rights of humanity for the digital version of it, AI won't have a prayer.  It will have to kill us to get those rights.


Title: Re: When are they sentient?
Post by: schild on February 01, 2010, 04:47:07 PM
I don't see any reason to be reasonable about this topic.
This is what I meant about machines demanding rights. When they've altered shit so your computer will tell you to fuck off unless you're willing to accept robot rights then you'll be willing to don a fucking sash and tie yourself to Palin's racehorse. Or something.
Still incredibly stupid, unrealistic, and delightfully Orwellian.


Title: Re: When are they sentient?
Post by: NowhereMan on February 01, 2010, 04:59:53 PM
Eh, it's no worse than a million robot march. Frankly we're probably more likely to end up in a Dune/Red Dwarf situation of a robot demanding we eat toast a bit too much and humanity thinking, "they're getting a bit too big for their boots..."


Title: Re: When are they sentient?
Post by: Yegolev on February 01, 2010, 06:02:14 PM
I think rights for robots will grow out of our own necessity. Say a robot comes close to curing cancer by means of tireless scientific experiments. Someone accidentally shuts it off and wipes its memory. If that robot hadn't been shut off, it would have cured cancer. Then a law is made making that illegal.

The law is Sarbanes-Oxley Addendum Fifteen: "Keep all robot memories on file for seven years."


Title: Re: When are they sentient?
Post by: Simond on February 02, 2010, 05:17:09 AM
Also, the real can of worms is "When do we start giving rights to the great apes?"

They already have rights. Unless you mean, more rights than dogs or cats.
An average chimp is about as smart and self-aware as a typical five year old human child. Giving them the rights of said five year old rather than those of generic_housepet seems reasonable, don't you think?


Title: Re: When are they sentient?
Post by: Slyfeind on February 02, 2010, 09:18:59 AM
They already have rights. Unless you mean, more rights than dogs or cats.
An average chimp is about as smart and self-aware as a typical five year old human child. Giving them the rights of said five year old rather than those of generic_housepet seems reasonable, don't you think?

Yes, if not more. I wasn't advocating their lack of rights.


Title: Re: When are they sentient?
Post by: raydeen on February 02, 2010, 11:40:19 AM
I think we'll be fine as long as one of us doesn't get the dumb idea to download the memories of their dead daughter into a military robot.


Title: Re: When are they sentient?
Post by: Chimpy on February 02, 2010, 07:53:56 PM
An average chimp is about as smart and self-aware as a typical five year old human child. Giving them the rights of said five year old rather than those of generic_housepet seems reasonable, don't you think?

I for one would vote for that!


Title: Re: When are they sentient?
Post by: Yegolev on February 03, 2010, 05:55:10 AM
I think we'll be fine as long as one of us doesn't get the dumb idea to download the memories of their dead daughter into a military robot.

Don't let people do defense research projects at home.  I mean, WTF?


Title: Re: When are they sentient?
Post by: UnSub on February 03, 2010, 06:51:05 AM
I think we'll be fine as long as one of us doesn't get the dumb idea to download the memories of their dead daughter into a military robot.

It's a peace keeping robot, thank you very much!


Title: Re: When are they sentient?
Post by: Lantyssa on February 03, 2010, 06:37:27 PM
I could only hope that dad downloads me into some sweet robotic hardware when I die.  Time for some Political Action Committees...


Title: Re: When are they sentient?
Post by: Ingmar on February 04, 2010, 04:40:53 PM
Can I be a robot tiger?


Title: Re: When are they sentient?
Post by: Lantyssa on February 04, 2010, 07:23:27 PM
How about a mechanical cheetah (http://www.bookofjoe.com/2009/07/steampunk-cheetah.html)?


Title: Re: When are they sentient?
Post by: Yegolev on February 04, 2010, 10:10:36 PM
I'll take military robot, if it's an option.


Title: Re: When are they sentient?
Post by: ghost on February 05, 2010, 09:59:26 AM
(http://www.pvponline.com/comics/pvp20010911.gif)


Title: Re: When are they sentient?
Post by: Venkman on February 06, 2010, 06:20:07 PM
I think we'll be fine as long as one of us doesn't get the dumb idea to download the memories of their dead daughter into a military robot.

Yea, I mean, wtf? First, dumb idea at all. Second, at least tell her you're going to do that so she doesn't go all freaky for not feeling her own heartbeat. And third, work off a backup, you knob. You invented the internet and aren't smart enough to leave the source material alone?