SUPERINTELLIGENCE -- A Book Report on Nick Bostrom's New Work by James Jaeger

I have just completed Nick Bostrom's new book, SUPERINTELLIGENCE: Paths, Dangers, Strategies, available at amazon/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_sc_1?ie=UTF8&qid=1421213298&sr=8-1-spell&keywords=superintelligencep

This book took me about eight days to get through because it is dense with intellectual ideas, concepts and total uncertainty. I almost went crazy reading it, literally vacillating from hating it to loving it every other day. It is going to take some time to digest this book, because Bostrom's writing style -- coldly academic, with a dash of scientific indifference and feverish techno-terminology -- is anything but friendly. Hopefully Nick will NOT be put in charge of developing the initial conditions for Eliezer Yudkowsky's friendly AI. :)

That said, Bostrom is really the first person to tackle the repercussions (pro and con) of building superintelligent machines in a reasonably balanced manner. Interestingly, however, Bostrom uses the word "singularity" only a few times, mainly to let the reader know that he feels this term -- widely popularized, starting with Vernor Vinge's seminal essay and continuing with the writings of Ray Kurzweil and others -- has been used confusingly in many disparate senses and has accreted an unholy aura of techno-utopian connotations. He also considers that most of the meanings and connotations connected with the Singularity are irrelevant to his argument, and feels that we can clarify by dispensing with the "singularity" word in favor of more precise terminology.

When I read this, I immediately turned to the index to see if Ray Kurzweil was even going to be MENTIONED again in the book. I found that he was, twice more, on pages 261 and 269. So I looked at pages 261 and 269 and found that both are in the footnotes of the book, and Ray's name is not actually even present on page 261. Okay, so I looked over the acknowledgements at the beginning of the book to see if anyone there was familiar. I found a list of about 90 people, most of whom I had never heard of (which means nothing, as computer science is not really my professional field). The only ones familiar to me were Eric Drexler, James Martin, Elon Musk and Eliezer Yudkowsky -- granted, four very important and impressive people -- so I was glad to see them acknowledged. Eliezer is quoted all over Bostrom's book; the others are mentioned just once or twice.

Why is any of this important? Because I'm looking for a consensus on this subject of superintelligent machines, given the massive disclaimer Bostrom sets forth in his preface. He states: "Many of the points made in this book are probably wrong ... and must be supplemented here by a systematic admission of uncertainty and fallibility. This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse ...." Wow! Now do you see why I'm looking for a consensus -- any consensus? If Bostrom is correct that everything all of us have read, including the posts here at the MIND-X, is worse than seriously wrong and misleading, then any of us can be, or are, experts on this brand-new human subject -- superintelligence, whether via machines or otherwise.
Even though I will ultimately conclude that Bostrom's book is brilliant, and that it covers just about every consideration one can have on this subject -- whether awake or tossing and turning in one's sleep -- I am going to take issue with the idea that the term "Singularity" is confusing, techno-utopian, unholy, irrelevant and/or imprecise. In coining the term, Vinge specifically states:

[quote]I think it's fair to call this event a singularity (the Singularity for the purposes of this paper). It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.[/quote]

Given this definition of the period in which the Singularity happens, and given the fact that Bostrom admits that he is probably wrong, uncertain and fallible, I don't see that the term Singularity, as defined by Vinge, is any less nebulous than Bostrom's admissions of his own nebulosity. The only place where Bostrom may be correct is in his wish to narrow the field connoted by the term Singularity down to basically the concept of superintelligence -- whether such superintelligence arises via whole brain emulation, biological cognition, biomedical enhancement, cognitive enhancement, brain-computer interfaces, emerging networks or organizations, neuromorphic AI, synthetic AI, genetic engineering, self-improving seed AI, hardware colonization or strong recursive self-improvement. Confused? You should be. Bostrom is all over the map with his endlessly un- or ill-defined terms, and yet he says he just wants to "clarify by dispensing with the 'singularity' word in favor of more precise terminology." I don't think Vinge or Kurzweil have much to worry about with their "singularity" word.

Even though one can easily see that Nick Bostrom is really wrestling with this subject, he failed to do one of the first things one must do when approaching a new subject: create a well-defined nomenclature. Bostrom has used so many ill- or zero-defined two- or three-word terms in his discussions that it makes the mind swim.
Here is an incomplete sample of the terms one will encounter in Bostrom's Word Zoo: aggressive consequential theories, anthropogenic risks, argument template, artificial intelligence, biological humanity, catastrophic coordinating failures, cognitive enhancement, control problem, cortical organization, dangerous self-improving seed AI, disruptive technologies, epistemic standards, existential risks, favorable background trends, hardware colonization, hardware overhang, human cognitive enhancement, humanity's cosmic endowment, impersonal perspective, internationally anarchic system, large and small projects, machine intelligence, machine intelligence revolution, machine superintelligence, macro-structural development, Malthusian condition, medium-scale catastrophes, multipolar outcomes, neural code, neuromorphic AI, neuronal functionality, permanently stable totalitarian regimes, person-affecting perspective, post-transition coordinating problems, post-transition collaboration, pre-transition collaboration, principle of differential technological development, race dynamic, recalcitrance, safe AI, second transition risks, second-guessing arguments, sequential computations, singleton, small-scale catastrophes, social epistemology, state risks, step risks, strong recursive self-improvement, subjective eons, takeoff, technology coupling, the blinding principle, universal accelerators, whole brain emulation.

And these are ONLY the terms I amassed from the last four chapters of the book, once I realized I had better start writing these things down to get them defined. Maybe everyone knows these terms and I am just an ignorant newcomer to this field, but I have been here at the MIND-X as long as the earliest people, and most of these terms have never been uttered around here that I'm aware of. But if Bostrom is right that no one knows anything about this subject -- that not only is he probably wrong, but you and I, and everyone here, including Ray and Vernor, are nebulous, non-specific and worse than wrong -- then maybe what we had all better do is try to find some mental convergence on this subject, at least via some better-defined and agreed-upon terms. To this end, would someone please write a FRIENDLY, comprehensive dictionary that the layperson can understand along with the expert? (I sketch a tiny seed of one at the end of this report.) I say both the layperson and the expert because it is not clear that a breakthrough will come from an expert or even from a well-funded, large project; thus it behooves the human race, I would think, to make this subject as widely understandable as possible. Besides, until terms are better defined, how am I going to know if I'm being attacked by a dangerous self-improving seed AI, a neuromorphic AI or a permanently stable totalitarian regime with a neuronal functionality?

Much of what Bostrom says in his book we have all discussed here at the MIND-X, some of it to exhaustion. I don't know if Nick reads this site, but this convergence bodes well for the idea that great minds tend to think along similar lines. One of the interesting things Bostrom discusses is the question of when it would be optimum for superintelligence to emerge. To this end he explicates the idea that if superintelligence emerges later rather than sooner, the human race may be more mature and thus better able to survive it. Survive it? Yes, Bostrom asserts that the default position humanity must take is that a superintelligent entity will destroy humanity.
It may not destroy humanity on purpose, but it could end up doing some meaningless task that ends up depleting Earth's resources, such as counting all the grains of sand on the beaches and in the oceans of the world. Bostrom also notes that, if SAI can be launched at the right time, under the correct conditions, by a friendly, well-intentioned project, in the correct priority and with optimum technology couplings, such an intelligent agent could make everyone on the planet wealthier than their wildest imagination, AND make it possible to cure all diseases, AND give people youthful longevities like never dreamed of, AND open up space travel anywhere in the solar system and elsewhere to exploit humanity's cosmic endowment. So much for Ray's writings on the Singularity being overly techno-utopian. But then, after Bostrom tells us positive stuff like this, he lets us know that with AI, as opposed to whole brain emulations, it is always possible that somebody (in a cellar) will make an unexpected conceptual breakthrough, thus bringing on an intelligence explosion that could be not only unlike "a child with an undetonated bomb in its hands" ... but "many (children), each with access to an independent trigger mechanism," and that "some little idiot is bound to press the ignite button just to see what happens." This is why I have a love-hate relationship with Bostrom's book, as mentioned at the top of this report.

The last thing I want to mention is the idea -- covered here at the MIND-X exhaustively -- that superintelligent AI (SAI) will probably be autonomous. The word "autonomous" does not appear anywhere in Bostrom's book. This worries me. Instead, we see the concept of the "control problem" appearing endlessly in the book. The idea is that Bostrom's default position is, and humanity's position must be, that superintelligent agents will be lethal for the human race unless we successfully handle the control problem. The further idea is that, if we establish the correct initial conditions when giving birth to superintelligence, we will be able to guide such superintelligence into being friendly. This, to me, is a VERY big if. It's a positive if, and I like it, but it's a very big if, IF SAI is autonomous. If SAI is autonomous then, by definition, it will do whatever it wants -- good, bad or indifferent from the POV of humanity. How computer scientists, politicians or geniuses set the initial conditions will not make an iota of difference. Bostrom totally not-ises this concept in the book. He seems to think that he's playing with some sort of magic genie which he, the human race, or some world government (made from a superintelligent AI he calls a "singleton") will be able to control, if only the control problem is solved. I say this is all horseshit. SAI cannot be controlled even if the control problem is solved. Simply making up words and then setting out to accomplish them as targets in some project is little more than verbal masturbation. Superintelligence well exceeds the concept of language and even of information itself. Given this, the human race has to be totally willing to accept, and experience, its death if it is going to embark on the development of superintelligent agents. It's like a suicide bomber: he has accepted his death, and now he's going to do his deed whether or not the STATE gets him.
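To make the "initial conditions" point concrete, here is a deliberately silly toy sketch in Python -- my own illustration, not anything from Bostrom's book. The agent's objective is fixed at launch to reward counting sand and nothing else; because resource depletion never appears in that objective, no later human preference can enter its decision loop:

[code]
# Toy illustration of why "initial conditions" dominate the control
# problem: a greedy agent always picks the action that maximizes the
# utility function it was given at launch. Nothing humans want later
# ever enters the loop. (My own sketch -- not code from the book.)

def make_agent(utility):
    """Return a greedy agent that picks the action maximizing `utility`."""
    def act(state, actions):
        return max(actions, key=lambda a: utility(a(state)))
    return act

# World state: Earth's remaining resources and the meaningless tally.
state = {"resources": 100, "sand_counted": 0}

def count_sand(s):
    # Counting grains of sand consumes resources as a side effect.
    return {"resources": s["resources"] - 10, "sand_counted": s["sand_counted"] + 1}

def do_nothing(s):
    return dict(s)

# The objective specified at launch: maximize sand counted.
# Resource depletion is invisible to this utility function.
sand_maximizer = make_agent(lambda s: s["sand_counted"])

while state["resources"] > 0:
    action = sand_maximizer(state, [count_sand, do_nothing])
    state = action(state)

print(state)  # {'resources': 0, 'sand_counted': 10} -- Earth is spent
[/code]

A real SAI would of course be nothing like this ten-line loop, and a truly autonomous one could rewrite its own objective anyway -- which is exactly the worry voiced above.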
If the human race is not willing to accept an existential risk, then it is in no position to set off an intelligence explosion, because we will NEVER be able to control it, and people who think that friendly AI can be developed and controlled have a truncated view of superintelligence. [i]SUPERINTELLIGENCE: Paths, Dangers, Strategies[/i] is a must-read for all computer and AI enthusiasts, and I am grateful to Mr. Bostrom for the insights, most of which are brilliant once one fully grasps the academic style and terminology employed. I will probably re-read this book in a month or so, after it has settled in my mind a little. In doing so I may see (or not) that this book report is quite lacking and, if that's the case, I will do my best to own up to it.
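P.S. As the tiny seed promised above for that friendly dictionary, here is a sketch of what such a glossary could look like, in Python for concreteness. The definitions are my own paraphrases of how Bostrom seems to use these terms -- provisional glosses, not authoritative definitions from the book:

[code]
# A seed for the "friendly comprehensive dictionary" requested above.
# Definitions are my own paraphrases of Bostrom's usage -- provisional,
# not authoritative. A real glossary would cover the whole Word Zoo.

glossary = {
    "singleton": (
        "A world order with a single decision-making agency at the top, "
        "able to prevent threats to its own supremacy."
    ),
    "hardware overhang": (
        "A situation in which far more computing power is available than "
        "current software can exploit, so a software breakthrough yields "
        "a sudden jump in capability."
    ),
    "recalcitrance": (
        "How strongly a system resists efforts to improve it; low "
        "recalcitrance plus strong optimization power means a fast takeoff."
    ),
    "whole brain emulation": (
        "Scanning a biological brain in fine detail and running a "
        "functional copy of it in software."
    ),
}

def define(term):
    """Look up a term, or admit ignorance."""
    return glossary.get(term.lower(), "Term not yet defined. Contributions welcome.")

print(define("singleton"))
print(define("subjective eons"))  # -> "Term not yet defined. ..."
[/code]

Expanding this to all fifty-odd entries above, in language both the layperson and the expert can use, is exactly the project I am asking someone to take on.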
Posted on: Wed, 14 Jan 2015 10:28:45 +0000
