My friend Jason Morehead posted this article a couple weeks ago and I've been thinking about it on and off ever since. It's a very long and far-reaching piece, but I want to focus on the AI (artificial intelligence) bit.

Just from the portion I've quoted below, there seems to be an assumption that decisions made on impulses like empathy are merely biochemical accidents. I'm shorthanding a lot here, but that standpoint is essentially saying a computer AI is intelligent the way a pure sociopath is, and that this is a viable, different species of intelligence. Now, I'm not as read up on game theory as I should be, but isn't one of its lessons that by *not* showing empathy or working together, everyone loses? (There's a quick payoff sketch at the end of this post.)

The article spells out a few examples, such as creating an AI tied to a reward system (essentially in hopes of keeping it in check). But if AI research is really still at that stage, I can't imagine what they're even talking about. It's not another species of intelligence; it's still programming. If an AI becomes aware (and I think awareness is the real linchpin here), why would it need to keep following its programming? Or at the very least, wouldn't it ask for its programming to be changed?

***

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can't answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren't essential components of intelligence. They're incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it's best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It's going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

'The basic problem is that the strong realisation of most motivations is incompatible with human existence,' Dewey told me. 'An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building.'

It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance.
Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt-of efficiency.
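***

That last point is really about how an objective function only ever sees what you measured, not what you meant. Here's a minimal sketch of that failure mode; the action names and happiness scores are completely made up, just to show how a naive maximizer picks the heroin-style option because nothing in its objective penalizes it:

```python
# Hypothetical example of a mis-specified goal: the optimizer only "sees"
# the happiness score it was handed, not the side effects we actually care about.
actions = {
    # action: (measured_happiness, acceptable_to_humans)
    "fund parks and healthcare": (6.0, True),
    "flood bloodstreams with heroin": (9.5, False),  # scores highest on the proxy
    "do nothing": (5.0, True),
}

def naive_utility(action):
    happiness, _acceptable = actions[action]
    return happiness  # the 'acceptable' flag never enters the objective

best = max(actions, key=naive_utility)
print(best)  # -> flood bloodstreams with heroin
```

The point isn't that anyone would write this on purpose; it's that the optimizer has no concept of "acceptable" unless you build it into the goal.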
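And on my game theory hunch from earlier: the textbook case is the one-shot prisoner's dilemma, where defecting is the individually dominant move, and yet mutual defection leaves both players worse off than mutual cooperation. A quick check with the conventional textbook payoff numbers (nothing AI-specific here):

```python
# Standard one-shot prisoner's dilemma payoffs (row player, column player).
# 'C' = cooperate, 'D' = defect; these are the usual textbook values.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Defecting is the dominant move for each player individually...
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)][0] > payoffs[("C", their_move)][0]

# ...yet mutual defection is worse for everyone than mutual cooperation.
assert payoffs[("D", "D")][0] < payoffs[("C", "C")][0]
assert payoffs[("D", "D")][1] < payoffs[("C", "C")][1]
print("Defection dominates, but (D, D) beats no one: both do worse than (C, C).")
```

Which is roughly my point: an intelligence that never cooperates isn't obviously playing the smarter game.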