But What Would the End of Humanity Mean for Me?

Preeminent scientists are warning about serious threats to human life in the not-distant future, including climate change and superintelligent computers. Most people don't care.

James Hamblin | May 9, 2014, 2:55 PM ET | Photo: Mick Tsikas/Reuters

Sometimes Stephen Hawking writes an article that both mentions Johnny Depp and strongly warns that computers are an imminent threat to humanity, and not many people really care. That is the day there is too much on the Internet. (Did the computers not want us to see it?)

Hawking, along with MIT physics professor Max Tegmark, Nobel laureate Frank Wilczek, and Berkeley computer science professor Stuart Russell, ran a terrifying op-ed a couple weeks ago in The Huffington Post under the staid headline "Transcending Complacency on Superintelligent Machines." It was loosely tied to the Depp sci-fi thriller Transcendence, so that's what's happening there. "It's tempting to dismiss the notion of highly intelligent machines as mere science fiction," they write. "But this would be a mistake, and potentially our worst mistake in history."

And then, probably because it somehow didn't get much attention, the exact piece ran again last week in The Independent, which went a little further with the headline: "Transcendence Looks at the Implications of Artificial Intelligence - but Are We Taking A.I. Seriously Enough?"

Ah, splendid. Provocative, engaging, not sensational. But really, what these preeminent scientists go on to say is not not sensational. "An explosive transition is possible," they continue, warning of a time when particles can be arranged in ways that perform more advanced computations than the human brain. "As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a singularity."

Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a singularity?

"Experts are surely doing everything possible to ensure the best outcome, right?" they go on. "Wrong. If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'Okay, call us when you get here - we'll leave the lights on'? Probably not. But this is more or less what is happening with A.I."

More or less? Why would the aliens need our lights? If they told us they're coming, they're probably friendly, right? Right, you guys?

And then the op-ed ends with a plug for the organizations that these scientists founded: "Little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute."

So is this one of those times when writers are a little sensational in order to call attention to serious issues they really think are underappreciated? Or should we really be worried right now?

In a lecture he gave recently at Oxford, Tegmark named five "cosmocalypse" scenarios that will end humanity, but they are all 10 billion to 100 billion years from now. They are dense and theoretical, extremely difficult to conceptualize. The Big Chill involves dark energy. Death Bubbles involve space freezing and expanding outward at the speed of light, eliminating everything in its path. There's also the Big Snap, the Big Crunch, and the Big Rip. But Max Tegmark isn't really worried about those scenarios.
He's not even worried about the nearer-term threats, like the prospect that in about a billion years the sun will be so hot that it boils off the oceans. By that point we'll have the technology to prevent it, probably. In four billion years, the sun is supposed to swallow Earth. Physicists are already discussing a method to deflect asteroids from the outer solar system so that they come close to Earth and gradually tug it outward, away from the sun, allowing Earth to very slowly escape its fiery embrace.

Tegmark is more worried about much more immediate threats, which he calls existential risks. That's a term borrowed from philosopher Nick Bostrom, director of Oxford University's Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos. Their consensus is that the Milky Way galaxy could be colonized in less than a million years, if our interstellar probes can self-replicate using raw materials harvested from alien planets, and we don't kill ourselves with carbon emissions first.

"I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn't gone mainstream yet," Bostrom told Ross Andersen recently in an amazing profile in Aeon. Bostrom, along with Hawking, is an advisor to the recently established Centre for the Study of Existential Risk at Cambridge University, and to Tegmark's new analogous group in Cambridge, Massachusetts, the Future of Life Institute, which has a launch event later this month.

Existential risks, as Tegmark describes them, are things that are "not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization." The single existential risk that Tegmark worries about most is unfriendly artificial intelligence: once computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it's very difficult to predict what will happen.

Tegmark told Lex Berko at Motherboard earlier this year, "I would guess there's about a 60 percent chance that I'm not going to die of old age, but from some kind of human-caused calamity. Which would suggest that I should spend a significant portion of my time actually worrying about this. We should in society, too."
