    Chery Posts: 3,666, Reputation: 698
    Gone, But Not Forgotten
     
    #21

    May 24, 2006, 06:28 AM
    Quote Originally Posted by talaniman
    If it is self-aware, how long before it figures out that you are not relevant and must be eliminated? Or that your resistance is futile? Think about that BEFORE you put the batteries in.:cool: :eek:
    Let's not forget the possibility of 'outside' influence - my PC has a few directories in it that I did not place there myself, and all those 'hidden' files - yuck!

    I don't know if this makes any sense, but our own bodies are kept alive by a certain amount of 'electricity' - this has been proven. Then, if we think of our 'energy' and/or 'spirit' - it leaves our bodies when the flesh no longer requires it. The 'belief' that this could be our 'soul' is common. OK, aren't AIs also 'fed' by electricity?

    Sometimes, when I see or hear of dangerous criminals, I certainly think that those 'batteries' could have been left out there too. - That's just 'wishful thinking' on my part, and even though my 'electrical gadgets' really tee me off, they are sometimes better company than some of 'my own kind'.


    Hey out there - what do you think? Be you human or machine, it all starts with 'initial input'.
    Nez Posts: 557, Reputation: 51
    Senior Member
     
    #22

    May 24, 2006, 06:50 AM
    Just think, if Windows had AI, would it create Linux as a superior brand?
    Sorry.. I just got home from work, and I'm not running on full batteries. :D
    Chery Posts: 3,666, Reputation: 698
    Gone, But Not Forgotten
     
    #23

    May 24, 2006, 07:02 AM
    Here we are, debating how far 'we' should go in allowing other things to get the upper hand - because that's what we are really doing.

    We have a brain that we utilize only 1/3 of (some more, i.e. da Vinci, Nostradamus). In order to deflect from that, we create machines and place an abundant amount (controlled, of course) of information into them to suit our needs. And, yes, we then place a 'fail-safe' in an easily accessible location, just in case something goes wrong. It's amazing that we are so cautious about our machines going haywire that we have the foresight to enable the choice of 'pulling the plug'. But, we can go on and on about this knowing full well that it's a 'never-ending' issue as we are, even at this very second, progressing on to the next generation...

    Looking at it from an AI's perspective - they could question why the humans only use 1/3 of the 'brain' yet wander out trying to improve things outside their own realm. They would wonder why there is no 'fail-safe' or even logic built into a brain that turns 'criminal' - or even speculate why something so wrong could have happened during the 'programming'.

    To make it short.. Isn't it ironic that some of us actually do wish, sometimes, that we were AIs - to avoid emotional pain, physical pain and hunger? For me, it would be great to sometimes not have the fear of being 'overloaded' and to be able to just 'pull the plug' or re-boot and start over. But, who would make sure that I re-boot without the unnecessary emotional clutter and 'outside' influence?

    More food for thought..

    We can, with our knowledge, turn off machines. But, our own bodies shut down uncontrolled by us, no matter what we do. Is it, then... power that we seek?
    "Fear of the unknown" prevails - even within.
    Chery Posts: 3,666, Reputation: 698
    Gone, But Not Forgotten
     
    #24

    May 24, 2006, 07:06 AM
    Quote Originally Posted by Nez
    Just think, if Windows had AI, would it create Linux as a superior brand?
    Sorry.. I just got home from work, and I'm not running on full batteries. :D
    We all 'run on empty' sometimes.

    We call this "recharging"?? OK!
    phillysteakandcheese Posts: 973, Reputation: 356
    Senior Member
     
    #25

    May 24, 2006, 10:08 AM
    Quote Originally Posted by valinors_sorrow
    What happens when the machine that can read grey is created?
    I don't think that can ever happen.

    Everything in a computer is ultimately 0 or 1. A "thinking" machine could assess the factors it is aware of, place a value on each, and ultimately calculate one choice over another. Multiply this over and over and you get a machine that looks like it might be "aware", but it is not sentient.

    I do agree that something can become more than just "the sum of its parts", but I don't think manufactured items (like electronic circuits) can ever achieve that.
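    The weighing process described above can be sketched in a few lines - a hypothetical toy, where every option name, factor and weight is invented for illustration - to show that the "choice" is just arithmetic on numbers (ultimately bits), with no awareness anywhere:

    ```python
    # Toy sketch of a machine "assessing factors and placing a value on them".
    # All option names, factors, and weights below are hypothetical.

    def choose(options):
        """Return the option with the highest weighted score.

        Each factor contributes weight * value to the total; the "decision"
        is nothing more than summing numbers and comparing the sums.
        """
        def score(option):
            return sum(weight * value for weight, value in option["factors"])
        return max(options, key=score)

    options = [
        {"name": "toast bread", "factors": [(0.9, 1.0), (0.2, 0.5)]},
        {"name": "stay idle",   "factors": [(0.1, 1.0)]},
    ]
    best = choose(options)  # picks "toast bread" by pure arithmetic
    ```

    Stack millions of such comparisons and the behaviour can start to look deliberate, but each layer is still only 0s and 1s being added and compared.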
    talaniman Posts: 54,327, Reputation: 10855
    Expert
     
    #26

    May 24, 2006, 03:31 PM
    So I don't have to fear my toaster trying to take over my home?
    Starman Posts: 1,308, Reputation: 135
    -
     
    #27

    May 24, 2006, 10:55 PM
    Quote Originally Posted by talaniman
    Sounds almost like they were almost human, with the arrogance and uncaring that we have displayed over the centuries on Earth. You would think that intelligent beings would be a little more compassionate than your example. So would you surmise that the technological race who failed to warn the other race of impending disaster had a soul or not?:cool: :eek:
    Since my religious concept of soul is not the one you are referring to, I will rephrase the question in this way, which is essentially the same--"Was the alien race which behaved that way created in God's image?" In short, could we consider them our spiritual brethren or intellectual kin based on their reasoning abilities? First, I don't believe that God would create thinking creatures deficient in compassion. So based on this I would say that they had to have fallen from grace and developed a deviant culture which warped their God-given consciences, just as there were societies on earth who suffered the same consequences after choosing to distance themselves from God.

    The further such creatures deviate from right thinking, and by consequence right conduct, the more warped that original image becomes, until there is absolutely no similarity with the divine whatsoever. I suppose that's what you would classify as soulless.


    BTW
    If indeed we ever succeed in creating an AI which demonstrates all the required signs of being self-aware and all the other human qualities of the mind, such as individuality, then we would have created it in our image and by extension in the image of God - albeit an imperfect image, as we have now. In short, we would be in an analogous position to it as God is to us--creator.

    The question then would be how to treat such a creation.
    educatedhorse_2005 Posts: 500, Reputation: 78
    Senior Member
     
    #28

    May 24, 2006, 11:10 PM
    None, no way, no how.
    talaniman Posts: 54,327, Reputation: 10855
    Expert
     
    #29

    May 25, 2006, 06:10 AM
    As God created us and gave us free choice, I guess you're saying we would have to do the same. But the fly in the ointment would have to be the fact that our AI would dwell with us, interact, and be able to judge us, whereas our Creator is not among us openly, and that is what changes the relationship between the AI and us. Essentially my toaster would have first-hand knowledge of all my weaknesses and foibles and would be in a position to judge me inferior and do something about it. So I suggest we design our AI with no arms, legs, eyes or ray guns.:cool: :eek:
    valinors_sorrow Posts: 2,927, Reputation: 653
    I regard all beings mostly by their consciousness and little else
     
    #30

    May 25, 2006, 08:07 AM
    I see Talaniman's points.

    We do things that trouble our creator, I have no doubt of that. However, the worst of that doesn't ever threaten our creator's existence, as far as I know, even if our creator is among us (which he/she/it may be!). It only threatens our existence, collectively and individually, which is kind of handy how that is set up, if you ask me.

    So I eventually worked my way to two thoughts:
    1. In order to avoid being hypocritical, wouldn't we need to offer similar rights to the AI?
    2. And relationship-wise... as we are to our creator, are they not to us? Are they really capable of threatening our existence when we are superior? Are we still so wrapped up in how we threaten each other that we won't see this aspect of it? Though they may threaten some of us, do they really threaten our collective existence? Mind you, I am not signing up to have a war with AIs, but won't they be hamstrung from starting one, since it is logical to only start a fight you can win?

    That is where I ended up in my mind before I posted my original thoughts.

    Shrugs and enjoys everyone's thoughts immensely...
    Starman Posts: 1,308, Reputation: 135
    -
     
    #31

    May 25, 2006, 12:03 PM
    The late sci-fi writer Isaac Asimov solved the problem by proposing that an AI be given prime directives which would prevent it from harming humans, either directly or by any omission of aid that could be rendered. Perhaps that would be analogous to our creator not making us equal in power.

    Neither is the freedom we were given unrestricted. It must be kept within acceptable parameters. Deviation from those parameters is called sin and sin, we are clearly warned, leads to death.

    Deuteronomy 30:19
    I call heaven and earth to witness against you this day, that I have set before thee life and death, the blessing and the curse: therefore choose life, that thou mayest live, thou and thy seed; KJV


    Similarly, we might include a fail-safe that would cause automatic shutdown of the AI whenever, if ever, it attempts to harm us.

    In short, we determine the AI's possibilities by making some eventualities impossible via restrictions on its power and its choices, just as we are restricted in ours.

    This would prevent the scenarios we so often see in sci-fi films where an AI runs amok, such as in Terminator, and in the short story "I Have No Mouth, and I Must Scream", where humankind is shown at the brink of extinction for not having restricted the power of its own creation.
    StuMegu Posts: 576, Reputation: 64
    Senior Member
     
    #32

    May 25, 2006, 01:27 PM
    I agree, Asimov's three laws, if unbreakable, would be adequate. If only there was a pill to provide the same effect in humans!
    orange Posts: 1,364, Reputation: 197
    Ultra Member
     
    #33

    May 25, 2006, 01:34 PM
    I don't have much to add, except that, since I am a bit of a Trekkie haha, I would say that if an AI was like Lt. Data from Star Trek, then I think the AI should have the same rights as a person. Data seemed a lot more "human" than a lot of humans on the show!

    Although, Data DID have an evil brother, Lore, who created all kinds of problems. So in a way I guess an android like Data should only have the rights of a human if he was not dangerous to society.
    talaniman Posts: 54,327, Reputation: 10855
    Expert
     
    #34

    May 25, 2006, 02:13 PM
    Orange - Being an avid Trekkie myself, I did see those episodes, and also the episode where Data argued for his right not to be taken apart and studied so he could be duplicated. I think he won on a technicality of being unique and one of a kind.:cool: ;)
    DrJ Posts: 1,328, Reputation: 339
    Ultra Member
     
    #35

    May 25, 2006, 03:10 PM
    Didn't "I, Robot" kind of expose a teeny, tiny loophole in the "three laws" theory?
    Starman Posts: 1,308, Reputation: 135
    -
     
    #36

    May 26, 2006, 12:28 AM
    Quote Originally Posted by DrJizzle
    Didn't "I, Robot" kind of expose a teeny, tiny loophole in the "three laws" theory?
    I'm interested in knowing what that loophole is.
    Can you please explain.
    StuMegu Posts: 576, Reputation: 64
    Senior Member
     
    #37

    May 26, 2006, 12:57 AM
    The loophole is sometimes referred to by Asimov's robots as the zeroth law of robotics: it is more important to save humanity than any single human. The needs of the many outweigh the needs of the individual! This meant that the robots in I, Robot (the film) decided to be the masters of humans in order to protect them from themselves.

    An interesting point, but it assumes the laws are breakable, because no matter what your reasons, if you hurt or kill a human, you have broken the first law.
    DrJ Posts: 1,328, Reputation: 339
    Ultra Member
     
    #38

    May 26, 2006, 10:17 AM
    I. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    II. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    III. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    Yeah, it's pretty much what StuMegu said... in an effort to obey this part of Law I - "through inaction, allow a human being to come to harm" - an AI may conclude that by not controlling the humans, it is, through inaction, allowing us to harm ourselves.

    In the movie, they even terminated problematic humans... however, as Stu pointed out, this is conflicting with the First Law.

    However, they could devise a way to contain humans without injuring them, then control our lives so that we would not harm each other or ourselves (through violence, destroying the Earth, wars, pollution, whatever).
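    The priority ordering in the three laws quoted above can be sketched as a simple filter - a hypothetical toy model (the flag names are invented for illustration, not Asimov's wording) - which also makes the loophole visible: if "letting humans run their own lives" gets classified as inaction that allows harm, the First Law itself forbids standing aside.

    ```python
    # Toy model of the Three Laws as a priority-ordered action filter.
    # The flags on each action are hypothetical, invented for illustration.

    def permitted(action):
        """Check an action against the laws in strict priority order."""
        # First Law: no injuring a human, and no inaction that allows harm.
        if action.get("injures_human") or action.get("inaction_allows_harm"):
            return False
        # Second Law: obey human orders unless that conflicts with the First Law.
        if action.get("disobeys_order"):
            return False
        # Third Law: self-preservation, subordinate to the first two laws.
        if action.get("self_destructive"):
            return False
        return True

    # The film's dilemma: "leave humans alone" can be read as inaction that
    # allows harm, so a literal-minded robot rejects non-interference.
    leave_humans_alone = {"inaction_allows_harm": True}
    ```

    As Stu pointed out, though, the way out of the loophole - seizing control by force - injures humans and so fails the very same First Law check.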
    talaniman Posts: 54,327, Reputation: 10855
    Expert
     
    #39

    May 26, 2006, 10:45 AM
    I just unplugged my toaster until you guys figure this out.
    Chery Posts: 3,666, Reputation: 698
    Gone, But Not Forgotten
     
    #40

    May 26, 2006, 10:49 AM
    Since my religious concept of soul is not the one you are referring to, I will rephrase the question in this way, which is essentially the same--"Was the alien race which behaved that way created in God's image?" In short, could we consider them our spiritual brethren or intellectual kin based on their reasoning abilities?
    THERE IS A DIFFERENCE BETWEEN ALIENS AND AIs.. It's totally inconceivable to believe that we are the only planet with life - no matter what degree.

    First, I don't believe that God would create thinking creatures deficient in compassion. So based on this I would say that they had to have fallen from grace and developed a deviant culture which warped their God-given consciences just as there were societies on earth who suffered the same consequences after choosing to distance themselves from God.

    The further such creatures deviate from right thinking, and by consequence right conduct, the more warped that original image becomes, until there is absolutely no similarity with the divine whatsoever. I suppose that's what you would classify as soulless.
    UNFORTUNATELY... You've just described many beings on this planet already - and they are human, not machines, and I'm also NOT talking about clones. History gives us one very (in)famous example - Hitler.

    I strongly feel that a serious lack of guidance in any culture can 'create' individuals without soul. Or, too much guidance that originates from a warped source.

    So, again, this subject can keep entire think-tanks busy to infinity.

