A bit of a long post, but I saw this in another group in response to the question of whether artificial intelligence could be conscious. The answer depends on one's philosophical position on the ontological status of mind; an Eliminative Materialist (only matter/energy is real) would say no, because consciousness is not real. To them consciousness is just matter's dream of itself, which of course raises the question of who or what is dreaming? This position is, to me, patently absurd, since consciousness is the most self-evident and undoubtable thing to us.

The other two categories of position are Emergentism (matter is primary but somehow mind emerges from certain combinations of matter) and Panpsychism (the totality of existence is mind (has sense) and matter is what mind/sense appears as from the outside). Emergentists typically argue that matter is primary and mostly insensate, except in certain special instances where somehow a certain combination of insensate matter gives rise to sensation. This is usually explained by recourse to the non-explanation of emergence, which is really just a scientific term for "we don't know wtf is going on here", citing examples such as how the liquidity of water emerges from H2O molecules or weather emerges from the interaction of heat, air particles, pressure etc. The problem with all such analogies, however, is that they confuse the emergence of novel and unprecedented qualities (as is the case in the claimed emergence of mind from matter) with the emergence of tweaks of behaviour which do not create new qualities but simply modulate already existent ones. The difference in *all* other cited cases of emergence is one of degree and not of kind. For example, weather is simply an altered, complex, holistic movement of its constituents, which would not otherwise move in this way. For sure there is a phase transition where a very different kind of behaviour emerges. But in the case of the supposed emergence of mind from matter, there is no such phase transition... you cannot even graph the emergence of sense from insensate matter, because there are absolutely *no* commonalities between the two modes or ways of conceiving of existence. I take it from this (very compressed version of the) argument that emergence is a logically untenable position, so that leaves us with panpsychism.

From this view the question of whether AIs will be conscious is moot, because everything is, to varying degrees, conscious. The question is more to what degree AIs will be conscious. And that question is analogous to asking to what degree humans are conscious; the spectrum of human consciousness varies a great deal. Some are practically unconscious and some are so hyper-conscious that an hour of experience to them has a depth and complexity that many would not experience in a lifetime. With AIs, this will also be the case. Narrow AIs, which currently exist, such as Google's search algorithms or IBM's Watson, are minimally conscious. In fact we don't yet know how to categorise levels of consciousness in any meaningful sense, because there has not yet been an empirically validated theory of consciousness and its intensity. However I think that the Integrated Information Theory of consciousness by Giulio Tononi seems to be going in the right direction, in that aspects of experience/physical process (which in the panpsychist view are simply two views of the same process) have to be integrated together in order for higher levels of consciousness to be expressed.
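To give a flavour of what "integration" might mean quantitatively, here is a toy sketch. It is emphatically not Tononi's phi, which involves minimum-information partitions over a system's cause-effect structure; it just computes total correlation (sum of the parts' entropies minus the whole's entropy) for a pair of binary variables, as a crude stand-in for "the whole carries structure the parts alone don't". Everything in it is my own illustrative choice.

```python
# Toy stand-in for "integration": total correlation of a small joint distribution.
# NOT IIT's phi; just the whole-vs-sum-of-parts flavour of the idea.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability table."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy, for an n-dimensional joint table."""
    h_joint = entropy(joint.ravel())
    h_marginals = 0.0
    for axis in range(joint.ndim):
        other = tuple(a for a in range(joint.ndim) if a != axis)
        h_marginals += entropy(joint.sum(axis=other))
    return h_marginals - h_joint

# Two independent coins: the "system" is nothing over and above its parts.
independent = np.full((2, 2), 0.25)

# Two perfectly correlated coins: the parts are maximally bound together.
correlated = np.array([[0.5, 0.0], [0.0, 0.5]])

print(total_correlation(independent))  # ~0.0 bits
print(total_correlation(correlated))   # ~1.0 bit
```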
On this view, the reason our nervous systems are highly conscious is that they contain so many feedback processes, structured in such a way that information is integrated across large areas of the brain. This manifests in the highly conscious individual as sensitivity to novel information: the system can reconfigure itself across large parts of its network to coherently adapt to incoming information. This holistic, widespread adaptability of the brain or processor to incoming information, and of the brain/processor to itself, is something current AIs lack, and therefore they lack unified streams of consciousness.

Artificial general intelligences, the kind we associate with human-level or above intelligence and flexibility of mind (basically the ability to take what is learned in one situation and apply it to a different situation, to abstract general principles), will I believe require a computational architecture which mirrors somewhat the integrated, holistic and tightly feedbacked processes of the brain. Otherwise what we have is a merely complicated machine, like a calculator, with inputs and outputs but no significant change in the machinery. The adaptation of the machinery to the demands of its environment, seen in the human brain as neuroplasticity and in the body as a whole as epigenetic modification, is the basis for learning and development, and the optimisation of this adaptation is correlated with the efficiency of the structure of these feedback loops. So if this kind of holistic, adaptive, integrated AI is made, then yes, it will display a significant level of consciousness. Whether it will display self-consciousness is a relatively trivial question which relates to whether or not a self-modelling process is embedded within a significantly conscious architecture.
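As a purely illustrative sketch of the "change in the machinery" point (the sizes, update rule and decay factor below are invented toy choices, not a claim about how such an AI would actually be built), compare a tiny recurrent network whose weights drift with its own history against a fixed input-output map:

```python
# Toy contrast: a recurrent net with Hebbian-style plasticity (the machinery
# changes with experience) vs. a fixed mapping (a "calculator" that never does).
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # number of units (arbitrary toy size)
W = rng.normal(scale=0.1, size=(n, n))  # recurrent (feedback) weights
state = np.zeros(n)
learning_rate = 0.01

def plastic_step(x):
    """One update: the state feeds back into itself, and the weights adapt."""
    global state, W
    state = np.tanh(x + W @ state)                # feedback: output depends on own history
    W += learning_rate * np.outer(state, state)   # Hebbian-style weight change
    W *= 0.999                                    # mild decay to keep weights bounded
    return state

# A fixed machine, by contrast, maps input to output with no change to itself.
fixed_W = rng.normal(scale=0.1, size=(n, n))
def fixed_step(x):
    return np.tanh(fixed_W @ x)

for _ in range(100):
    x = rng.normal(size=n)
    plastic_step(x)   # this machinery now reflects what it has processed
    fixed_step(x)     # this machinery is exactly as it was before
```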
Posted on: Sun, 21 Dec 2014 16:17:10 +0000
