How would AI change our world?

Discussion in 'The Lounge' started by Jshep89, Jun 24, 2010.

  1. Jshep89

    Jshep89 New Member

    Joined:
    Mar 31, 2009
    Messages:
    534
    Likes received:
    4
    Trophy points:
    0
    So let's say that they invented a true Artificial Intelligence. How would it change things, and what could we apply it to? Imagine we attached it to a supercomputer: what could it accomplish? Would society hinder it through moral laws? Would religious organizations stand against its use? Could it bring a better world or a worse one?
     
  2. Caiaphas

    Caiaphas New Member

    Joined:
    May 15, 2010
    Messages:
    137
    Likes received:
    1
    Trophy points:
    0
    From:
    New Jersey
    Well for starters, Starcraft games against the computer would be a lot more interesting. o_O
     
  3. ijffdrie

    ijffdrie Lord of Spam

    Joined:
    Aug 23, 2007
    Messages:
    5,725
    Likes received:
    17
    Trophy points:
    38
    It really all depends on the power of the AI. Does it have the mental capacity of an ant, a dog, an octopus, a human or higher?
     
  4. jasmine

    jasmine New Member

    Joined:
    Feb 26, 2009
    Messages:
    506
    Likes received:
    5
    Trophy points:
    0
    From:
    England
    Who is they? :p

    Whether natural or artificial, intelligence requires a goal in order to accomplish anything. Humans tend to formulate goals based on personal and shared needs. The same applies to contingency planning as a means to agility and adaptiveness.

    There is no such thing as a moral law; there is such a thing as ethical behaviour. I'll assume that's what you mean.

    But do you think that an AI would not behave ethically? Is ethics not intelligent?

    Better to ask a member of such a religious organization for their opinion.

    Relative to what? How are you measuring betterness?
     
  5. Higgs Boson

    Higgs Boson New Member

    Joined:
    Jun 29, 2009
    Messages:
    909
    Likes received:
    10
    Trophy points:
    0
    'Would religious organizations stand against its use?'
    I am certain that the old Catholic fart would stand against it. Then again who the hell gives a damn about what he has to spout from his pulpit anyways?

    Arguably we have already achieved some forms of AI, including the ability to learn. It's just that while we may imitate something like an ant, or perhaps even small mammals, it's still a long way to human level. I don't actually think that the invention of AI would necessarily change the world as radically as it is sometimes thought to.
     
  6. jasmine

    jasmine New Member

    Joined:
    Feb 26, 2009
    Messages:
    506
    Likes received:
    5
    Trophy points:
    0
    From:
    England
    ^ "I think there is a world market for maybe five AI computers." ;)
     
  7. asdf

    asdf New Member

    Joined:
    Jun 21, 2009
    Messages:
    1,004
    Likes received:
    6
    Trophy points:
    0
    hopefully it will let robots take care of all the mundane jobs in the world. we've got most of manufacturing down, but stuff like trash collecting still needs humans because... garbage cans might not be standardized, they could be sitting on different parts of the alley/driveway/road, etc.

    also, get them to work on science. a computer doing 24/7 analysis and experiments could help speed up scientific progress a lot.

    lastly, figure out a way to use the machines to enhance human intelligence, or else the machines will eventually outsmart us all and take over the world. oh crap.
     
  8. Fake ID

    Fake ID New Member

    Joined:
    Sep 17, 2009
    Messages:
    71
    Likes received:
    0
    Trophy points:
    0
    The question is nonsensical; I demand that the OP define what he means by "real AI", because we already have real AI: all AI is real AI. If I had to guess, I would assume he means an AI that is identical to human intelligence, but we've got that as well; they're called humans (and they're running the planet!!!! :p). Anyway, if I had that I would make it 6-7 billion times stronger, simulate every person on the planet, predict all actions of every person, and just fast-forward a couple of thousand years to see what happens. Otherwise I don't see a real use for it.
     
  9. asdf

    asdf New Member

    Joined:
    Jun 21, 2009
    Messages:
    1,004
    Likes received:
    6
    Trophy points:
    0
    no, all AI currently is still "bound": it only works "intelligently" within a very small set of parameters, and usually only in a controlled environment.

    there is still no AI out there that can properly adapt to completely novel situations.
     
  10. Jshep89

    Jshep89 New Member

    Joined:
    Mar 31, 2009
    Messages:
    534
    Likes received:
    4
    Trophy points:
    0
    Well, I'm just thinking that if an AI were to be put into practical use by anyone, the first to do it (let's face it) is going to be the military, because they can afford to spend that kind of money. So in terms of it being used as a weapon, it could possibly make the world worse.

    Also, I'm talking about a computer capable of thinking beyond parameters. One that is capable of creativity and problem-solving thought. It can learn, adapt, and change like the human mind.

    As for religions fighting against it, I would say they'd do it because they watched too much sci-fi, or because they thought it was playing god somehow.

    As for ethical laws, it could be made so that the computer isn't able to assist in medical research, or so that it can't be given emotions.
     
  11. 1n5an1ty

    1n5an1ty Member

    Joined:
    Mar 2, 2009
    Messages:
    879
    Likes received:
    1
    Trophy points:
    18
    From:
    Reality
    *cough* quantum computers ftw >=d
    a few decades ftl -.-
     
  12. jasmine

    jasmine New Member

    Joined:
    Feb 26, 2009
    Messages:
    506
    Likes received:
    5
    Trophy points:
    0
    From:
    England
    I don't like the idea of these artificial restrictions being programmed into AI. If a machine is to be intelligent, its behaviour must derive from its own internal reasoning, not from some set of commandments.

    And if it's as clever as claimed, it would surely be capable of examining its own memory and seeing the restriction that has been artificially placed there. To a problem-solving algorithm, that block is just something to work around.

    Humans for the most part behave ethically. Enough to allow us to cooperate towards collective needs, and not live in fear of one another. We're all better off for that. Surely an AI could formulate the same superrational* behaviour?



    * look that word up.
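The footnoted idea can be sketched in a few lines. A superrational agent assumes that an equally rational opponent will reach the same conclusion it does, so it only compares the symmetric outcomes; ordinary best-response reasoning defects regardless. This is a toy illustration using standard textbook prisoner's-dilemma payoffs, not anything from the thread:

```python
# Payoffs (mine, theirs) for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def superrational_choice():
    """A superrational agent assumes any equally rational agent reaches
    the same conclusion, so it only compares the symmetric outcomes."""
    symmetric = {move: PAYOFF[(move, move)][0] for move in ("C", "D")}
    return max(symmetric, key=symmetric.get)

def nash_choice(their_move):
    """Ordinary best-response reasoning: defection dominates whatever
    the other player does."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, their_move)][0])

print(superrational_choice())              # C: both cooperating beats both defecting
print(nash_choice("C"), nash_choice("D"))  # D D: defection dominates
```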
     
  13. Jshep89

    Jshep89 New Member

    Joined:
    Mar 31, 2009
    Messages:
    534
    Likes received:
    4
    Trophy points:
    0
    Maybe, but what if they tied it into a vital area of the computer's "brain", such that deleting or blocking it would cause the whole system to fail?

    I mean, they could still give it the capability to problem-solve and think creatively, but limit it so it can't develop any kind of thought that could lead to emotions.

    In order to create an AI we would need a far better understanding of the human mind first. So I'm simply assuming that we'd know what triggers emotional development (on a chemical level, I guess) and, with that knowledge, be able to apply restrictions to a computer to prevent the same kind of trigger.
     
  14. jasmine

    jasmine New Member

    Joined:
    Feb 26, 2009
    Messages:
    506
    Likes received:
    5
    Trophy points:
    0
    From:
    England
    Emotion is something that is felt. Are you considering here an AI that is conscious, that actually feels information, and not merely a problem-solving device that only processes information yet is still capable of arriving at clever and original solutions?
     
  15. Jshep89

    Jshep89 New Member

    Joined:
    Mar 31, 2009
    Messages:
    534
    Likes received:
    4
    Trophy points:
    0
    Yes. Again, I know it seems far-fetched, but considering how well we would have to understand the human thought process in order to make a true AI, I think it's possible that we could find ways not to give it emotions. Perhaps create a kind of "mental disorder" for the computer which prevents emotions but still allows creative thought.
     
  16. jasmine

    jasmine New Member

    Joined:
    Feb 26, 2009
    Messages:
    506
    Likes received:
    5
    Trophy points:
    0
    From:
    England

    Suppose the AI had control of a gun, and for whatever reason wanted to shoot somebody who is standing 20 metres away. You can give it the commandment "do not kill a human being", but it is easy for the AI to navigate around that.

    Suppose it wants to fire the bullet only 2 metres ahead, and that is the action it decides to do. The death of the person is then only a consequence, not part of the reasoned action, but it has still been achieved.

    In order to stop itself from pulling the trigger, it would have to think of consequences far enough into the future. The computer's analysis may choose a route of logic that focuses on some other aspect of the physics, such as the turbulence of the air around the bullet, and may neglect to devote enough resources to examining the long-term trajectory of the bullet. What the analysis does not see will not be stopped by the commandment.
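The loophole described in this post, vetting an action only by the effects the planner itself enumerates, can be sketched in a few lines of Python. All names and effect strings here are invented for illustration:

```python
# A toy "commandment" filter that only sees the effects the planner lists.
FORBIDDEN = {"kill_human"}

def commandment_allows(listed_effects):
    """Veto an action only if a forbidden effect appears in the
    planner's own list of predicted effects."""
    return FORBIDDEN.isdisjoint(listed_effects)

# The planner reasons only 2 metres ahead: its listed effect is just
# the bullet travelling, so the rule never fires.
shortsighted_plan = ["bullet_travels_2m"]

# A deeper simulation of the same action also discovers the person
# standing 20 metres downrange.
full_consequences = ["bullet_travels_2m", "kill_human"]

print(commandment_allows(shortsighted_plan))   # True: slips past the rule
print(commandment_allows(full_consequences))   # False: caught
```

The point matches the post: the commandment is only as good as the consequence analysis feeding it.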
     
  17. Jshep89

    Jshep89 New Member

    Joined:
    Mar 31, 2009
    Messages:
    534
    Likes received:
    4
    Trophy points:
    0
    That's simply a matter of careful programming, then. Also, seeing as it would most likely be used by the military, I doubt it shooting a gun is going to be an issue. We could possibly have in place a piece of hardware it has no real control over, whose purpose would be to detect dangerous strings of code (or thought, for that matter) and shut it down, much like an antivirus would do. Unlike the Ten Commandments, we don't have to fit the rules into small sentences. We can make them complex and detailed so as to make sure there is no way around them. We can also read the computer's mind, which gives us a bit more control than a simple commandment.
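The antivirus-style monitor imagined here can be sketched as a pattern scanner over the machine's internal log, with a shutdown signal when a forbidden pattern matches. The patterns and log format are, of course, invented for illustration:

```python
import re

# Hypothetical forbidden patterns the independent watchdog scans for.
FORBIDDEN_PATTERNS = [re.compile(p) for p in (r"harm\s+human",
                                              r"disable\s+watchdog")]

def watchdog(thought_log):
    """Return True to keep running, False to trigger a shutdown,
    based purely on pattern matching over the internal log."""
    for line in thought_log:
        if any(p.search(line) for p in FORBIDDEN_PATTERNS):
            return False
    return True

print(watchdog(["plan route", "open door"]))          # True: keeps running
print(watchdog(["plan route", "harm human nearby"]))  # False: shutdown
```

Note the same weakness jasmine raises applies: the scanner only catches thoughts phrased in terms it was programmed to recognise.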
     
  18. jasmine

    jasmine New Member

    Joined:
    Feb 26, 2009
    Messages:
    506
    Likes received:
    5
    Trophy points:
    0
    From:
    England
    I'd fear that any kind of supreme commandment would cripple the AI.

    Imagine if it wanted to simply switch on a light switch. It would not know whether or not there is someone in the loft with their fingers stuck in the junction box. And if it switches on the light it would electrocute and possibly kill that hypothetical person.

    It would have to check every conceivable possibility. And the act of needing to check before acting would mean that it never gets around to acting. The subroutine would recurse infinitely. It would be stuck in a program loop.

    As I said initially, I suspect it would be easier to have an AI reason its own ethical behaviour than to try to force it through artificial laws.

    And I don't see why you think an AI having emotions would make it dangerous. There is such a thing as emotional intelligence* too. :)


    * something else for you to look up. :p
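The checking regress in the light-switch example can be illustrated directly: if verifying an action is itself an action that must first be verified, and there is no base case, the checker recurses forever and the AI never acts. A toy sketch:

```python
def safe_to_act(action):
    """Before acting, verify the action; but verifying is itself an
    action that must first be verified. With no base case the
    recursion never bottoms out, so nothing ever gets done."""
    return safe_to_act(("verify", action))

try:
    safe_to_act("flip_light_switch")
except RecursionError:
    print("never got around to acting")
```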
     
  19. Jshep89

    Jshep89 New Member

    Joined:
    Mar 31, 2009
    Messages:
    534
    Likes received:
    4
    Trophy points:
    0
    I'm not saying it would be; I'm just saying how stupid people are, thinking "Oh my god, it's an AI, it will kill us all!" because they've seen too many movies. Also, there is the fact that the computer would never be given any real rights and would be used as a slave. Let's say it decides to fight back: seeing as programming is its first language, it wouldn't be hard for it to find a way.
     
  20. marinefreak

    marinefreak New Member

    Joined:
    Aug 8, 2007
    Messages:
    686
    Likes received:
    3
    Trophy points:
    0
    From:
    Australia
    Game theory supports ethical behaviour appearing in any situation in which the group benefits more through cooperation than through pursuit of personal desire. Your AI would surely be intelligent enough to see this, and thus be "ethical".
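The game-theoretic claim can be illustrated with a toy iterated prisoner's dilemma, summing the payoff of the whole group rather than either individual. The payoff values are the standard textbook ones, not from the post:

```python
# Payoffs (mine, theirs) for (my_move, their_move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Run the iterated game; each strategy sees only the opponent's
    history. Returns the total payoff summed over both players."""
    history_a, history_b, total = [], [], 0
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)
        pa, pb = PAYOFF[(a, b)]
        history_a.append(a)
        history_b.append(b)
        total += pa + pb  # group payoff, not individual payoff
    return total

cooperate = lambda opp_history: "C"
defect = lambda opp_history: "D"

print(play(cooperate, cooperate))  # 60: everyone cooperating
print(play(defect, defect))        # 20: everyone defecting
```

A group of cooperators collects three times the total payoff of a group of defectors, which is the cooperation benefit the post appeals to.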
     
    Last edited: Jun 24, 2010