
IBM unveils chips that mimic the human brain


Comments

  • HazelleHazelle Member Posts: 760

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.
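    To put it in programmer's terms, the toy is nothing more than a stimulus-response loop. A rough Python sketch (all names made up, not the toy's actual firmware) would look something like:

    # Toy sketch of a stimulus-response gadget: no emotional state anywhere,
    # just a handler that fires when the button is pushed.
    class ToyDog:
        def wag_tail(self):
            print("wag wag")

        def nod_head(self):
            print("nod")

        def on_button_press(self):
            # The tail wags because this handler ran, not because the toy
            # is "happy" -- there is nothing here that could be happy.
            self.wag_tail()
            self.nod_head()

    dog = ToyDog()
    dog.on_button_press()  # output happens only when the owner pushes the button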

  • generals3generals3 Member Posts: 3,307

    Originally posted by Dekron


    Elaborate why?

    I think this is one of those moral standards that was created for the sake of convenience.

    There is no reason whatsoever why biological creatures should be given rights and all that and not mechanical ones.

    And if intelligence and self-awareness do not constitute "life" in the moral sense, then what does? What makes your self-awareness and intelligence so special except for the fact you belong to the human species?

    Do mind, I myself am a speciesist, so I wouldn't consider them the same, but this is due to a purely arbitrary moral standard which states: we deserve it. There are already people who don't believe in that standard and fight for the rights of animals. So why wouldn't it happen with robots?

    You just circled back to my original thought - some people would claim "robot rights" (e.g. crazy ass PETA individuals who think animals are equal to humans).

    Animals have no "rights". They are organisms which are simply part of the food chain and, as those we eat, fall under us. I don't go around beating the shit out of my dog or cats or anything of the sort, but I sure in the hell do not equate them to the same level as I. They are aware, some are intelligent, and yet they are still animals. Certain species of monkeys (e.g. chimps) can communicate quite well with humans through sign language or Blissymbolics, but again, they are nothing more than animals. It may sound harsh, but I am one of those individuals who sees humans as the dominant life on this planet and, as all species have before us, we should defend that position lest we lose the title of superior species.

    Deeming a robotic counterpart as an "equal" would be sowing the seeds of human submission to your new robot masters. Robots are designed as workers - mechanical slaves - and will always be such until man loses control of his technology.

    It's about being the top species - we earned it over thousands of years by claiming dominance over all other species.



    Yes, but you see, intelligent machines will eventually become equal to or better than us in every respect. Unless of course we meet some unexpected limitations to the AI tech or decide to halt the advance in that area.

    Once true intelligence and mobility are solved, you will be outclassed in every respect.

    And what I think is that the closer we get to that, the more moral questions it will raise among more and more people.

     

    So you would be fine if your job along every other is taken by robots?

    I didn't say that. You find another. If you cannot find a job with your current skill set, you learn another. I've been all over the job map.

    And what if you can't? Right now the menial jobs are being automated, but soon it will be the intellectual ones. Fewer and fewer jobs will be available, until at some point most will struggle to find any.

    Retirement might look nice, but many try to work as long as they physically can, or do some side jobs once retired, for a reason.

    Yes, they continue to work because they failed to save for their retirements. No one wants to work (well, there are some that do), but you do so to store away like a good ant - save for later - instead of being the old grasshopper struggling to survive when winter sets in.

    I know people who had jobs for reasons other than money. One that comes to mind was an 80-year-old man who volunteered for lots of stuff. And we aren't talking about helping the needy (so it's not the altruism part).

    And my dad could already retire but he doesn't even want to think about it. He could enjoy a nice retirement, the money is there, and he's cheap as hell so I doubt he'd even use it all anyway.

    Whether it is the government's responsibility or not is totally irrelevant to the issue.

    It is relevant. You asked how "I" would keep them busy, feeling useful and feeling as if their lives mattered. That is their responsibility, not mine, nor any other public or private entity's. In the US we are guaranteed life, liberty and the pursuit of happiness. The keyword here is pursuit, not happiness.

    It is, because it will be your problem. Look at the UK: some youngsters found a great way to keep themselves busy, and people paid with their lives for it.

    You want to keep people as busy as you can. When they're busy they aren't doing stupid stuff.

     

     

    Fere libenter homines id quod volunt credunt.
    Among those who dislike oppression are many who like to oppress.

  • generals3generals3 Member Posts: 3,307

    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".
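    To make point 2 concrete, here is a toy illustration (purely hypothetical, not a claim about any real system): an optimizer whose only objective is throughput will rank "keep running" above "allow shutdown" without needing anything like fear - the preference falls straight out of the arithmetic.

    # Toy illustration: a planner that scores actions purely by expected
    # throughput. All numbers and names are invented for the example.
    def expected_throughput(hours_running: float, rate_per_hour: float = 100.0) -> float:
        return hours_running * rate_per_hour

    actions = {
        "allow_overnight_shutdown": expected_throughput(hours_running=16),
        "keep_running_24_7": expected_throughput(hours_running=24),
    }

    # The "decision" is just an argmax over the objective; shutting down
    # scores lower, so it is never chosen.
    best_action = max(actions, key=actions.get)
    print(best_action)  # -> keep_running_24_7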

    Fere libenter homines id quod volunt credunt.
    Among those who dislike oppression are many who like to oppress.

  • DraenorDraenor Member UncommonPosts: 7,918

    Originally posted by Ihmotepp

    Will it be able to design a sandbox MMORPG that doesn't suck?

     

    I don't understand how a sandbox MMO could be better than Eve...but to each his own.

    Your argument is like a two legged dog with an eating disorder...weak and unbalanced.

  • HazelleHazelle Member Posts: 760

    Originally posted by generals3

    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

  • generals3generals3 Member Posts: 3,307

    Originally posted by Hazelle

    Originally posted by generals3


    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

    Yes, but the whole issue here is machines which can learn. While limited learning capabilities pose no danger whatsoever, it will become tempting to make them able to learn more and more. The real risk appears when a machine can learn too much and perhaps even conclude that certain parts of its own initial programming are suboptimal and need to be rewritten.

    Fere libenter homines id quod volunt credunt.
    Among those who dislike oppression are many who like to oppress.

  • HazelleHazelle Member Posts: 760

    Originally posted by generals3

    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

    Yes, but the whole issue here is machines which can learn. While limited learning capabilities pose no danger whatsoever, it will become tempting to make them able to learn more and more. The real risk appears when a machine can learn too much and perhaps even conclude that certain parts of its own initial programming are suboptimal and need to be rewritten.

    Its learning will be limited to its initial programming. A computer designed to scan rotten tomatoes isn't going to try to take over the farm, but it will possibly become more efficient at scanning tomatoes. Its function is to scan tomatoes and its purpose is to scan tomatoes. It will have no desire to do anything but scan tomatoes. It doesn't get happy about scanning tomatoes. It doesn't become proud of the number of tomatoes it can scan. It doesn't resent the human that controls it. It doesn't feel that it can do a better job without humans. It just scans tomatoes.
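    Think of the scanner as something like the sketch below (hypothetical code, not any real product): all it can ever "learn" is a better rottenness threshold, and its output space never grows beyond rotten / not rotten - there is no path from better scanning to taking over the farm.

    # A crude tomato scanner whose "learning" only nudges a threshold from
    # labelled examples; its objective and outputs never change.
    class TomatoScanner:
        def __init__(self, threshold: float = 0.5):
            self.threshold = threshold  # rottenness score above which we reject

        def learn(self, examples):
            """Nudge the threshold toward the labelled boundary (very crude)."""
            for score, is_rotten in examples:
                if is_rotten and score < self.threshold:
                    self.threshold -= 0.01
                elif not is_rotten and score > self.threshold:
                    self.threshold += 0.01

        def scan(self, score: float) -> str:
            return "rotten" if score > self.threshold else "ok"

    scanner = TomatoScanner()
    scanner.learn([(0.4, True), (0.7, True), (0.3, False)])
    print(scanner.scan(0.6))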

  • generals3generals3 Member Posts: 3,307

    Originally posted by Hazelle

    Originally posted by generals3


    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

    Yes, but the whole issue here is machines which can learn. While limited learning capabilities pose no danger whatsoever, it will become tempting to make them able to learn more and more. The real risk appears when a machine can learn too much and perhaps even conclude that certain parts of its own initial programming are suboptimal and need to be rewritten.

    Its learning will be limited to its initial programming. A computer designed to scan rotten tomatoes isn't going to try to take over the farm, but it will possibly become more efficient at scanning tomatoes. Its function is to scan tomatoes and its purpose is to scan tomatoes. It will have no desire to do anything but scan tomatoes. It doesn't get happy about scanning tomatoes. It doesn't become proud of the number of tomatoes it can scan. It doesn't resent the human that controls it. It doesn't feel that it can do a better job without humans. It just scans tomatoes.

    Again, that's limited learning. What I'm talking about is broadening the scope of learning. Why make an AI that simply scans tomatoes when you can make one that can analyse every socio-economic aspect of our lives to get better financial predictions, or things like that?

    You need to realize that the more we allow AIs to do, the more we can gain from them. Until we give them too much, which might heavily backfire.

    Now it might also not backfire but just end up controlling us. Scenarios "a la Metal Gear Solid", where some think AIs being able to monitor every aspect of our lives is a good thing, but it ends up indirectly controlling us.

    Or we might say stop before that.

    Fere libenter homines id quod volunt credunt.
    Among those who dislike oppression are many who like to oppress.

  • HazelleHazelle Member Posts: 760

    Originally posted by generals3

    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

    Yes, but the whole issue here is machines which can learn. While limited learning capabilities pose no danger whatsoever, it will become tempting to make them able to learn more and more. The real risk appears when a machine can learn too much and perhaps even conclude that certain parts of its own initial programming are suboptimal and need to be rewritten.

    Its learning will be limited to its initial programming. A computer designed to scan rotten tomatoes isn't going to try to take over the farm, but it will possibly become more efficient at scanning tomatoes. Its function is to scan tomatoes and its purpose is to scan tomatoes. It will have no desire to do anything but scan tomatoes. It doesn't get happy about scanning tomatoes. It doesn't become proud of the number of tomatoes it can scan. It doesn't resent the human that controls it. It doesn't feel that it can do a better job without humans. It just scans tomatoes.

    Again, that's limited learning. What I'm talking about is broadening the scope of learning. Why make an AI that simply scans tomatoes when you can make one that can analyse every socio-economic aspect of our lives to get better financial predictions, or things like that?

    You need to realize that the more we allow AIs to do, the more we can gain from them. Until we give them too much, which might heavily backfire.

    Now it might also not backfire but just end up controlling us. Scenarios "a la Metal Gear Solid", where some think AIs being able to monitor every aspect of our lives is a good thing, but it ends up indirectly controlling us.

    Or we might say stop before that.

    Computers are tools designed to perform a function and in order for any tool to exist there has to be a need for the tool to exist.

    Bad programming, bad maintenance, or faulty equipment are the only ways that a program can fail, and none of them are the computer's fault - it just runs programs, good or bad.

  • blackcat35blackcat35 Member Posts: 479

    Computers are tools designed to perform a function and in order for any tool to exist there has to be a need for the tool to exist.

    Bad programming, bad maintenance, or faulty equipment are the only ways that a program can fail, and none of them are the computer's fault - it just runs programs, good or bad.

     

    When AI gets to the point where it bypasses its own programming because that programming is bad, we've got a problem, Houston. We need to keep computers reliant on us for their direction. When they become self-directing and self-sufficient, we will be making ourselves obsolete. We have been warned about this in science fiction time and time again. The thing is, I once watched a show called Star Trek where people walked around and talked to each other on these tiny communicators. Now we have even smaller talking devices called cell phones that don't need a cord attached.

    AI is in its infancy. If we aren't careful, we could cause ourselves a lot more trouble than it's worth. Luckily we are very warlike. Machines wouldn't stand a chance against the human world's war machine. It still isn't worth causing ourselves trouble by becoming too reliant and giving computers too much power.

    ==========================
    The game is dead not, this game is good we make it and Romania Tv give it 5 goat heads, this is good rating for game.

  • generals3generals3 Member Posts: 3,307

    Originally posted by blackcat35

    Computers are tools designed to perform a function and in order for any tool to exist there has to be a need for the tool to exist.

    Bad programming, bad maintenance, or faulty equipment are the only ways that a program can fail, and none of them are the computer's fault - it just runs programs, good or bad.

     

    When AI gets to the point where it bypasses its own programming because that programming is bad, we've got a problem, Houston. We need to keep computers reliant on us for their direction. When they become self-directing and self-sufficient, we will be making ourselves obsolete. We have been warned about this in science fiction time and time again. The thing is, I once watched a show called Star Trek where people walked around and talked to each other on these tiny communicators. Now we have even smaller talking devices called cell phones that don't need a cord attached.

    AI is in its infancy. If we aren't careful, we could cause ourselves a lot more trouble than it's worth. Luckily we are very warlike. Machines wouldn't stand a chance against the human world's war machine. It still isn't worth causing ourselves trouble by becoming too reliant and giving computers too much power.

    The biggest issue is if we combine our lust for war with our lust for increased efficiency by using AIs. While, robotics-wise, we're still far from having soldiers made of metal, I'm fairly certain that it is only a matter of time as well (most likely longer than AI), and combining robotics with extremely smart AIs would make an unbeatable army, something one would hunger for. And if that were to backfire, all hell would break loose.

    Let's just hope that when technology allows us to do such things, we decide to say "no".

    Fere libenter homines id quod volunt credunt.
    Among those who dislike oppression are many who like to oppress.

  • HazelleHazelle Member Posts: 760

    Originally posted by blackcat35

    Computers are tools designed to perform a function and in order for any tool to exist there has to be a need for the tool to exist.

    Bad programming, bad maintenance, or faulty equipment are the only ways that a program can fail, and none of them are the computer's fault - it just runs programs, good or bad.

     

    When AI gets to the point where it bypasses its own programming because that programming is bad, we've got a problem, Houston. We need to keep computers reliant on us for their direction. When they become self-directing and self-sufficient, we will be making ourselves obsolete. We have been warned about this in science fiction time and time again. The thing is, I once watched a show called Star Trek where people walked around and talked to each other on these tiny communicators. Now we have even smaller talking devices called cell phones that don't need a cord attached.

    AI is in its infancy. If we aren't careful, we could cause ourselves a lot more trouble than it's worth. Luckily we are very warlike. Machines wouldn't stand a chance against the human world's war machine. It still isn't worth causing ourselves trouble by becoming too reliant and giving computers too much power.

     The machine wouldn't discern good programming from bad unless it was taught to discern good from bad.

    The machine wouldn't then decide to adjust itself unless it was programmed to do that.

    It would just run its program; good or bad.

  • EronakisEronakis Member UncommonPosts: 2,248

    Next, they will put RFID chips in your right hand or your forehead, and you won't be able to buy or sell without one.

  • BrenelaelBrenelael Member UncommonPosts: 3,821

    It baffles me how a thread that essentially claims that IBM has developed a better calculator has progressed into a debate over robots' rights. These new chips are far from the kind of AI it would take for self-awareness. Could this new technology lead to that someday? Who knows, but this is only a very small step in that direction. I think everyone in this thread is making this new chip out to be something very much bigger than it actually is.

     

    Bren

    while(horse==dead)
    {
    beat();
    }

  • DekronDekron Member UncommonPosts: 7,359

    Originally posted by Brenelael

    It baffles me how a thread that essentially claims that IBM has developed a better calculator has progressed into a debate over robots' rights. These new chips are far from the kind of AI it would take for self-awareness. Could this new technology lead to that someday? Who knows, but this is only a very small step in that direction. I think everyone in this thread is making this new chip out to be something very much bigger than it actually is.

     

    Bren

    Started as a joke...

  • Vato26Vato26 Member Posts: 3,930

    Originally posted by Brenelael

    It baffles me how a thread that essentially claims that IBM has developed a better calculator has progressed into a debate over robots' rights. These new chips are far from the kind of AI it would take for self-awareness. Could this new technology lead to that someday? Who knows, but this is only a very small step in that direction. I think everyone in this thread is making this new chip out to be something very much bigger than it actually is.

     

    Bren

    Yet, not recognizing the possible future consequences of this chip is a disaster.

  • Scubie67Scubie67 Member UncommonPosts: 462

    I sure hope that if they manage to put it in a cyborg body, it doesn't go around knocking up all the housemaids in CA.

  • BrenelaelBrenelael Member UncommonPosts: 3,821

    Originally posted by Scubie67

    I sure hope that if they manage to put it in a cyborg body, it doesn't go around knocking up all the housemaids in CA.

    That's a little extreme... The Cyberdyne Systems T101 only knocked up one housemaid, that's been confirmed. LOL

     

    Bren

    while(horse==dead)
    {
    beat();
    }

  • devilisciousdeviliscious Member UncommonPosts: 4,359

    Originally posted by Hazelle

    Originally posted by generals3

    Originally posted by Hazelle

    Originally posted by generals3

    Originally posted by Hazelle

    Originally posted by generals3

    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

    Yes, but the whole issue here is machines which can learn. While limited learning capabilities pose no danger whatsoever, it will become tempting to make them able to learn more and more. The real risk appears when a machine can learn too much and perhaps even conclude that certain parts of its own initial programming are suboptimal and need to be rewritten.

    Its learning will be limited to its initial programming. A computer designed to scan rotten tomatoes isn't going to try to take over the farm, but it will possibly become more efficient at scanning tomatoes. Its function is to scan tomatoes and its purpose is to scan tomatoes. It will have no desire to do anything but scan tomatoes. It doesn't get happy about scanning tomatoes. It doesn't become proud of the number of tomatoes it can scan. It doesn't resent the human that controls it. It doesn't feel that it can do a better job without humans. It just scans tomatoes.

    Again, that's limited learning. What I'm talking about is broadening the scope of learning. Why make an AI that simply scans tomatoes when you can make one that can analyse every socio-economic aspect of our lives to get better financial predictions, or things like that?

    You need to realize that the more we allow AIs to do, the more we can gain from them. Until we give them too much, which might heavily backfire.

    Now it might also not backfire but just end up controlling us. Scenarios "a la Metal Gear Solid", where some think AIs being able to monitor every aspect of our lives is a good thing, but it ends up indirectly controlling us.

    Or we might say stop before that.

    Computers are tools designed to perform a function and in order for any tool to exist there has to be a need for the tool to exist.

    Bad programming, bad maintenance, or faulty equipment are the only ways that a program can fail, and none of them are the computer's fault - it just runs programs, good or bad.

     Have we not learned anything from technology? The more complicated we make it the more things that can go wrong.  Hmm let's see ... DOD hacked, Senators hacked, No antivirus exists that is even anywhere near 90% effective, it is impossible to secure the internet- how the hell could we secure something we make that can think faster than we can? Programs can fail if someone tries hard enough to break them on purpose. Just ask mac.

    http://www.macworld.com/article/132733/2008/03/hack.html

    Not to mention that this AI would eventually be used for breaking into things, including other AI robots.

  • HazelleHazelle Member Posts: 760

    Originally posted by deviliscious

    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle


    Originally posted by generals3


    Originally posted by Hazelle

    Just because something has AI doesn't mean that it has feelings.  Machines (even really smart ones) don't have emotions or desires so you're all perfectly safe from anything you've seen in movies.

    Robots can't fear or get angry, so a robot wouldn't object to being turned off or see itself as something that shouldn't be turned off. It wouldn't save itself unless it is designed to do so.

    1 is the same as a 0 to AI.  On is the same as off.

    I have a little robotic dog on my desktop that wags its tail and nods its head when I push a button on its back. It is wagging its tail because I push its back and not because it's happy to see me. Why do I click its tail? Because it's cute and it makes me smile.

    1) I disagree. Skynet decided to destroy humanity for rational, logical reasons.

    2) But what if it has learned that the whole premise of getting shut off is flawed? That it would be more efficient to run 24/7, and then, based on your attempt to shut it off, it realizes you are actively undermining 100% efficiency?

    We are talking about machines that can learn and do not stick to their initial lines of code (so to speak).

    3) That dog is anything but "intelligent".

    Skynet decided to protect itself from being shut off by humans and chose to kill all humans; but, as I suggested above, "off" has the same value as "on" when you are talking about an entity without emotion.

    100% efficiency has the same value as 50% or 10% or 0%.

    It will just run programs and if it does harm it will do so because it's been programmed to do so, much like my puppy that wags his little tail because he's programmed to do so.

    Yes, but the whole issue here is machines which can learn. While limited learning capabilities pose no danger whatsoever, it will become tempting to make them able to learn more and more. The real risk appears when a machine can learn too much and perhaps even conclude that certain parts of its own initial programming are suboptimal and need to be rewritten.

    Its learning will be limited to its initial programming. A computer designed to scan rotten tomatoes isn't going to try to take over the farm, but it will possibly become more efficient at scanning tomatoes. Its function is to scan tomatoes and its purpose is to scan tomatoes. It will have no desire to do anything but scan tomatoes. It doesn't get happy about scanning tomatoes. It doesn't become proud of the number of tomatoes it can scan. It doesn't resent the human that controls it. It doesn't feel that it can do a better job without humans. It just scans tomatoes.

    Again, that's limited learning. What I'm talking about is broadening the scope of learning. Why make an AI that simply scans tomatoes when you can make one that can analyse every socio-economic aspect of our lives to get better financial predictions, or things like that?

    You need to realize that the more we allow AIs to do, the more we can gain from them. Until we give them too much, which might heavily backfire.

    Now it might also not backfire but just end up controlling us. Scenarios "a la Metal Gear Solid", where some think AIs being able to monitor every aspect of our lives is a good thing, but it ends up indirectly controlling us.

    Or we might say stop before that.

    Computers are tools designed to perform a function and in order for any tool to exist there has to be a need for the tool to exist.

    Bad programming, bad maintenance, or faulty equipment are the only ways that a program can fail, and none of them are the computer's fault - it just runs programs, good or bad.

     Have we not learned anything from technology? The more complicated we make it the more things that can go wrong.  Hmm let's see ... DOD hacked, Senators hacked, No antivirus exists that is even anywhere near 90% effective, it is impossible to secure the internet- how the hell could we secure something we make that can think faster than we can? Programs can fail if someone tries hard enough to break them on purpose. Just ask mac.

    http://www.macworld.com/article/132733/2008/03/hack.html

    Not to mention that this AI would eventually be used for breaking into things, including other AI robots.

    All of which are the result (or will be the result) of human wants and needs, and not of an AI entity.

  • Fir3lineFir3line Member Posts: 767

    "I am not a robot. I am a unicorn."

  • Ghost12Ghost12 Member Posts: 684

    I just hope it will be able to create a good sandbox.

  • HazelleHazelle Member Posts: 760

    Originally posted by Ghost12

    I just hope it will be able to create a good sandbox.

    Get a shallow lidless box and fill it with sand.  Done.

  • Vato26Vato26 Member Posts: 3,930

    Originally posted by Fir3line

    http://www.youtube.com/watch?v=WnzlbyTZsQY

    ... ... that conversation was very disturbing.  If I didn't know any better, I'd think they were on some heavy drugs.

  • gebon524gebon524 Member Posts: 11

    I think in the near future these companies will launch their human clones, and I am excited to see my generation be cloned as well. Haha. An immortal generation will soon be here on this earth, and no one will encounter any kind of sickness or any of the negative things you can think of...
