How to Make a Bad Analogy: The Legacy of Systems Theory

Jonathan Ratcliffe and Chris Heggie-Brown

“We are compiling a vocabulary and a syntax that is able to describe in a single language all kinds of phenomenon [sic] that have escaped a common language until now. It is a new universal metaphor. It has more juice in it than previous metaphors: Freud’s dream state, Darwin’s variety, Marx’s progress or the Age of Aquarius….In fact the computational metaphor may eclipse mathematics as a form of universal notation.”

An analogy is not an equals sign and should never be treated as such. Whether we are talking about the parables of Jesus or that famous science-class “lie for children” in which a rainbow is erroneously explained using light reflecting through a glass of water, an analogy is little more than a teaching method on the road towards a higher understanding of things. This is an essay on the abuse of analogies: about how shifting the boundaries of what counts as a valid or an invalid analogy, and of the very purpose of generating analogies at all, has led to some of the most ridiculous simplifications and poorest methodological decisions in the twentieth-century academy. The culprit is the complex of ideas known as “Systems Thinking”: the notion that everything can be understood as a system of self-regulating parts, by analogical appeal to, and dependency upon, computing.

“Systems Thinking” at its most basic is the analysis of phenomena purely in terms of their interrelations and dynamics: how the parts work together as a mutually reinforcing and self-regulating system of interactions. Key to this are studies of how the target “system” in question, which could be anything, is affected by “input”, how it processes this as “throughput”, and how it then passes such influences onwards as “output”.
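The input/throughput/output vocabulary can be made concrete in a few lines. The sketch below is our own illustration, not any canonical systems model: a toy “system” that processes each input and applies a corrective “feedback” nudge towards a set point, its “equilibrium”. All names and parameters here are hypothetical.

```python
def run_system(state, set_point, gain, inputs):
    """Process each input, apply corrective feedback, and record the state."""
    history = []
    for signal in inputs:          # "input"
        state += signal            # "throughput": the input perturbs the state
        error = set_point - state  # "feedback": compare against equilibrium
        state += gain * error      # corrective action towards the set point
        history.append(state)      # "output"
    return history

trace = run_system(state=0.0, set_point=10.0, gain=0.5,
                   inputs=[4.0, 4.0, 4.0, 0.0, 0.0])
print(trace)  # the state overshoots, then settles back towards the set point
```

The point of spelling this out is how little it contains: nothing in the loop says anything about what the “system” actually is, which is precisely why the vocabulary transfers so freely.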
Within the “system”, mutually reinforcing and correcting actions, or “feedback”, take place as new “information” acts upon the system, as it attempts to maintain “equilibrium” (balance) and “equifinality” (preservation of the same aim). As one may see, much of this language has a thorough affinity with computing jargon, and this will be one of the major criticisms underlying this essay. “Systems Thinking” imagines that everything from the natural world in the form of ecosystems, the equilibrium of chemical concentrations in human biology, and the habits of human societies down to family units, and even the entire cosmos, can be understood as operating like an electronic computing system maintaining itself through reactions to information.

The roots of such thinking, as will be shown, precede the birth of the computer, beginning in earnest with Von Bertalanffy’s pre-war work in microbiology, but during the 1950s it crystallised under the computer’s emerging numinous influence. The most important irony one should be aware of in approaching this issue is that many thinkers who took up such new systems-based approaches during the 1950s and 60s did so in order to break free of what they saw as the dualistic, mechanistic Cartesian-Newtonian cosmos. Systems Thinking was to be a holistic and organic solution to what appeared to many as the overly simple and reductive approaches inherent in Behaviourism, Newtonian physics and what remained of “essentialist” Judaeo-Christian approaches to social and natural environments. Of all its adherents from then to now, perhaps only Koestler ever noticed, with “uneasy suspicion”, that by swapping a clockwork universe for the computer one is simply swapping one mechanistic cosmic understanding for another.
However, at this very moment of realisation, Koestler in his Ghost in the Machine changes topic, displaying very clearly an almost blind faith in this new movement, in spite of the blatantly obvious problems involved. It is this blind faith, and this lack of desire to recognise that what we are dealing with is a technological cultus applied analogically, with little basis beyond twentieth-century cultural zeitgeist, that will inform our analysis of this topic in the following paragraphs. The difference between the natural and the machine is indeed deliberately and utterly blurred in Systems Thinking, as is the line between any two things to which the reductive systems model can be applied. Von Bertalanffy famously said of how feedback generates habits in living creatures: “Organisms are not machines, but they can to a certain extent become machines.” Cybernetics, according to Ashby, was to take into account “all possible machines…some of them have been made by Man, some of them by Nature”. Howard Odum, one of the fathers of ecology, famously took a circuit board into a forest and wrote the names of animals on each of the circuits. This is in fact the origin of the seemingly innocent “web of life” found in every school biology textbook. The issue, then, is not that isomorphism, the sharing of forms and rules across different phenomena at different scales, is inherently wrong, but that modern thinkers have been preoccupied with prioritising computational technology in their creation of the “systems” vocabulary and in its application to every other field. At its simplest we should pose an analogical question of our own: which came first, Mustang cars or mustang horses?
The systems theorist is the sort of person who irrationally assumes the former, ironically “…depicting the natural as though it were artificial, forcing nature with all its diversity, uncertainty and historical contingency into the mould of a manmade object.” In this sense we may compare such machine cosmologies with the technical myths of traditional peoples, such as Prometheus or Ptah making humanity on the potter’s wheel. Such myths may certainly tell us a lot about how different peoples have elevated their sacred instruments to the cosmological level, but they are certainly not in any way meta-cultural and coldly “objective”, as scientists might imagine of Systems Thinking.

The Great Deception

The role of analogy in relation to “Systems Thinking” is obviously a very important one. The original series of rules formulated by Ludwig Von Bertalanffy for GST (General Systems Theory) were taken to be easily transferable between computers, human societies, cosmic evolution and the human brain – all seen to operate in the same isomorphic manner. Von Bertalanffy in his early postulations jumped incessantly from one field to another, though he himself was merely a biologist, in search of those already leaning in his direction and in hope of expanding his Systems Thinking ever outwards into all fields from history to economics. In order to understand this “general theory of everything”, or more accurately “general theory of operation”, we must be very clear on one thing: Von Bertalanffy attempted to legitimise his new all-encompassing methodological model by giving the word analogy a pejorative meaning – an analogy is simply a comparison that is not scientific. For the scientific kind Von Bertalanffy borrows a different word: homology, which is to him a scientific isomorphism, the sharing of valid rules and patterns across multiple targets of study. But how do we tell an analogy from a homology?
In biology, whence he borrowed the terminology, “analogous” indicates the illusion of relatedness between two creatures that merely look alike; “homologous” means that they share the same ancestry. All Bertalanffy is doing, seemingly, is legitimising his own beliefs through a self-substantiating circular argument: analogy comes to mean illegitimate comparison, homology the opposite, and that is all. But surely there must be some kind of verification or repeatability, even a falsification process, for dealing with all this? Indeed, when scientists use the term “theory”, they usually like to maintain that the label indicates a model backed by a great deal of evidence and, rather like the law, substantiated “beyond reasonable doubt.” The answer for Bertalanffy’s delineation, however, and that of his successors, is clearly no. Von Bertalanffy never explains how one is to tell an analogy from a homology, and this remarkably blatant piece of sleight of hand has had more influence on the twentieth century and the present than many would at first imagine. In fact this crass and forceful use of homology and analogy as a key principle of Systems Theory perhaps has more in common with Ezra Pound’s reading of Joyce’s Ulysses, in which the transference of the same thing does not lead to its being received in exactly the same way, but warps it each time, through the simple fact that complete transference across different scales cannot occur. One cannot simply transfer principles by force, without investigating the differences between a cell, the human brain, a forest and a computer system.
However, from Cybernetics to Ecology to Chaotics and the computer modelling of climate change and the spread of epidemics, this has been done and continues to be done with little conscious awareness of the actual issue at hand – the assumption that everything works in the same basic systemic way, no matter what the scale or the difference of the contents of the “system” involved. In order to understand this seeming free lunch of the transferability of hypostatic principles in Systems Thinking, a little more about the concept of isomorphism and the historical and cultural contexts out of which such thinking arose is required. Some Systems Thinkers such as Bateson like to see their first true ancestor in Alfred Russel Wallace, a contemporary of Darwin who, unlike the latter with his preoccupation with micro-evolutionary natural selection, was far more interested in the relations between beings and their environment. Wallace compared organisms with the governor of the steam engine – the device which regulates the engine’s speed by feedback: “…in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.” It is from this term that Cybernetics later emerged, reappropriating the Greek kubernetes (a ship’s pilot), made use of by Plato in the Alcibiades as part of an analogy for the intellect controlling the ship of the body. This new interest of Wallace’s in defining entities purely through their interactions is in many ways exemplary of the fact that the nineteenth century is the century in which what is often pejoratively tagged “essentialism” died in the mainstream of academia. What we mean by “essentialism” is the idea that a thing has an inherent qualitative nature – one of the key elements of western traditional thinking.
The pejorating of “essentialism” that still takes place today usually involves demonising two very influential western thinkers, Plato and Aristotle, who are lumped together without due analysis of their differences. For Aristotle, an item or animal possesses a teleological essence that makes it what it is: a dog is always a dog, and because of this has specific characteristics. For Plato, on the other hand, a dog partakes in ideal doggishness, of which it is itself a reflection, and one can imagine that an entity may partake in many such forms, some very transitory in their habitation of matter. Darwinian and Lamarckian evolution are often touted as the major instigators of the death of such thinking about permanent natures, and of the move towards treating species and perceived “laws of nature” as nothing more than statistical averages – a matter on which Von Bertalanffy and Dawkins both agree – but the causes are far wider than this. The true causes are overall preoccupations with change and alteration across the board and, most importantly, with interaction rather than solid “essential” nature. We may also see during the nineteenth century, in Marx, Hegel and the Darwinian philosophers Spencer and Teilhard de Chardin, a greater interest in dynamics and synthesis rather than in things in their formulations of history, and in the theology of Whitehead the notion that a thing can only be understood like a pebble thrown into a pond – from the ripples, or interactions, it generates. We may see the movement towards the study of culture through collectivity and interrelation in the works of Radcliffe-Brown and other integral figures in Anthropology, who are often cited by network theorists such as Talcott Parsons as their progenitors.
On the other hand we may also see the fallout of Kant’s supposed “Copernican Revolution” of a century earlier, in which poorly argued belief in a world-in-itself out there somewhere, understood only in small parts by different individuals, ferments into Phenomenology and Existentialism, in which changing views of things temporarily define them through instrumental complexes and cultural constructs. Becoming triumphed over being, and new mythologies flocked to fill the void. Prior to the rise of Systems Thinking in the 1950s and 60s, however, analogous reasoning was not common in the sciences, and in some ways it still should not be, from an internally consistent scientific perspective based in empirical observation, the quantitative measurement of specific entities and statistical analysis. Once again we return to Von Bertalanffy’s sleight of hand and his lack of explanation of the difference between bad analogies and the ones he found legitimate. Traditionally, all apparent isomorphism is called a logos – the divine rules on which everything operates, the reason and the intention of a creator. We are not the first to suggest that Systems Thinking is an attempt to create a new logos – some have avidly proselytised the idea – and in the absence of any creator generally accepted by most intellectuals, or of deliberate cosmic “intention”, what we have here is the elevation of secondary causes to the status of a deified primary cause. This is the deification not of divine proportion or reason, but of dynamics and interactions that are self-creating and self-substantiating.
Darwin may have begun the removal of the idea of an inherent “purposiveness” in nature, replacing it with the self-directing principle of evolution, but it was not until cyberneticians such as Wiener, who characterised all living things and machines simply as devices at war with entropy, that any kind of substantial philosophy was offered to support such improbable notions – and, as one might imagine, it was offered only through deferred appeals to futuristic self-directing and self-repairing machines. The very idea of divine intention and proportion running through creation, this logos, is of course central to the Western Tradition and to many other pre-modern cultural spheres. Perhaps the most important and influential figure in relation to such concepts in the west, however, remains Plato. In his Republic, the isomorphism between the internal justice of the three principles composing a human being (appetitive, courageous and intellectual) and the macrocosmic dynamic of the same principles in a human society, through the division of labour, is an integral basis for synchronising man with the world. Plato is also perhaps the first thinker to use the term “organ” for the portions of the body – each with its own specific nature working harmoniously – prefiguring the “organicism” of the Systems Thinkers, who emphasised the dynamic of parts but removed any concept of inherent qualities in them: each part comes into being merely through its relation to the others and develops greater complexity. Even Aristotle’s systemic social model in the Nicomachean Ethics, in which everyone produces “goods” through material or service to benefit society and its overall good and end, happiness – “towards which all things aim” – has a firm basis in the idea that each part is inherently purposive. No part has come into being simply by interacting with the others.
We find similar qualitative analogous principles between the portions of the human body and the signs of the zodiac in the Western Tradition, and in divination generally – the flight of birds, the casting of wands, and the portions of the haruspical liver with their equivalences, once again, to portions of the human body, the cosmos and the state. We may find similar belief structures all over the world amongst pre-modern peoples, and most often the analogous workings inherent in multiple things are taken to represent not merely analogy but anagogy – a leading back through a symbol towards the intention of the creator or the basic principles from which the cosmos sprang. To Dante an allegorical reading of a text was nothing more than “the truth concealed behind a beautiful lie” as one worked one’s way towards anagogical communion with God. Analogous principles – Necessity and Reason, Yin and Yang, the three guṇas – inherent on multiple levels are a proof of both order and intention in the universe: the logos, as detailed above. As Northrop Frye aptly said: “When we pass into anagogy, nature becomes, not the container, but the thing contained.” However, in spite of some cheap and shallow New Age arguments to the effect that this mirroring of dynamics on different levels in Systems Thinking is a rediscovery of pre-modern thinking, there is one very important difference. Plato and the pre-modern macrocosms and microcosms on the one hand, and Systems Thinking on the other, part ways in that the former are based on the idea that the things in question share qualities – that they share a nature: an orange and the sun; horns and masculine virility; the cunning fox and the thief. In Systems Thinking, by contrast, there is no specific nature inherent in the systemic portions involved.
In order for Systems Theory to appear and dominate, the concept of inherent natures in things had to perish, and the concept of a mutable “environment” had to rise to replace it. The price of this loss of qualitative essentialist thinking for forming isomorphic analogies is that science must fall back on its quantitative bases in order to substantiate its analogous statements: we must find quantifiable systems structures inherent in many different systems of different scales. However, as will be shown, this is an utter impossibility. To understand scientifically and quantifiably the dynamics of an ecosystem or a human society, one must empirically measure everything within the given “system”. What we mean is that one cannot understand a system of interrelations without knowing about all the parts in detail, and from a scientific perspective the data of “specifics” (the measurement of individual phenomena), rather than qualitative generalities, is all-important. When we say that everything must be measured, we are not being deliberately obtuse, reducing scientific efforts to a reductio ad absurdum for our own pleasure. The poor thinking at the root of the Systems Thinking analogy, which was only ever a qualitative blanket statement with no set rules to verify it, has done this to itself, and the lack of realisation of this is stultifying. The fact is that there have been a few, but not many, attempts to actually measure entire ecosystems and human cultures empirically in order to construct systems approaches and predict change over time, such as the US IBP Grassland Biome project, which for the millions of dollars allocated to it produced little but a great many people with doctorates and an excess of unworkable data, and dissuaded further interest in any “big ecology” for the immediate future.
The more data that is added, the more ridiculous the task becomes, because empirical “specifics” are forever changing and going in and out of existence. Making generalisations about all apparent isomorphism is in fact much easier, and this has allowed the Systems Thinking approach to survive so long, ludicrous as it is. It is for this reason that every schoolchild will tell you that there is a balance in nature, that overpopulation in nature “corrects itself”, and that we all belong to one colossal system in which each part depends upon all the others for its health. This is modernist social myth, awkwardly plastered over the world, and nothing more.

If the absurd contradictions inherent in Systems Thinking as a gross and ill-thought-out analogical method, and its impossibility in scientific practice, are not bad enough, we must also deal with the issue of the borders of the “system.” When Systems Thinking was formulated, its original grounds were microbial cells and computer systems in laboratories. Some systems were taken to be “open”, meaning that they were reliant upon information and dynamics external to the system; others were taken to be “closed”, or self-sufficient but entropic and unable to react with the greater world at large. The more one thinks about it, the more obvious it seems that there is no such thing as a “closed” system – not only because of the obvious presence of the scientist viewing and manipulating the system by engaging with it, but also because of the simple fact that, from a Systems Thinking perspective, everything influences everything else and thereby defines any given part. There would be no system without a world beyond it to create it. Thus once again we degenerate into reductio ad infinitum et absurdum if we take any of this seriously: we would need to know the entire nature of the cosmos, empirically and quantifiably, in order to understand a single thing in it.
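The schoolbook claim above that overpopulation “corrects itself” is, when formalised at all, usually a gloss on the logistic growth equation dN/dt = rN(1 − N/K), where K is the carrying capacity. A minimal sketch, with parameters of our own choosing and no empirical standing:

```python
def logistic_step(n, r, k, dt=0.1):
    """One Euler step of dN/dt = r*N*(1 - N/K): growth reverses once N exceeds K."""
    return n + dt * r * n * (1.0 - n / k)

# Start above the carrying capacity K; the model "corrects" the overshoot.
n = 150.0
for _ in range(200):
    n = logistic_step(n, r=0.8, k=100.0)
print(round(n, 2))  # the population has decayed back towards K = 100
```

Note that the “correction” here is an artefact of the chosen equation and parameters: the model self-corrects by construction, which is precisely the circularity this essay complains of.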
The demand to measure everything may sound ridiculous, since induction – moving from a specific to a more general statement – is the basis of the scientific method. “This swan is white, ergo all swans are white” is indeed a somewhat haphazard way to construct knowledge, but we must admit in the defence of science that it is fairly good at reformulating its principles when an exception appears. Systems Thinking, however, remains absolutely holistic and abstract; in this sense its scientific authenticity is a contradiction in terms. For this reason ecologists and the like once again simply generalise about what lies beyond the system, or ignore it, which contradicts everything they supposedly stand for. One attempt, albeit an utterly absurd one, to deal with these factors is that of the Chilean biologists Maturana and Varela, whose ideas have also been applied to Sociology. In order to preserve themselves from misrepresenting their “systems” through subjectivity, they simply decided to call all biological “systems” closed and self-referential to the point of solipsism. Like many of the concerns regarding objectivity and science that came to the fore in the 1980s with the growing awareness of post-modernism, this was little more than a token gesture, a coy game of hide-and-seek with oneself that quickly slid into being conveniently forgotten. The realisation that one could not simply count every blade of grass in an ecosystem, or know what everyone in a city would be doing on the same day, has meant a couple of very important things for the development and self-preservation of Systems Thinking over the past forty years. The first is simply that, as computers formed much of the basis of the Systems Thinking approach to begin with, using them to hold the vast quantities of data required to supposedly represent an entire system was a fundamentally obvious step, and it began in earnest during the 1970s.
The next was to use them to produce models of increasing complexity as usable copies of entire societies, or even of the whole planet earth. This is seen as utterly fitting, because the ways of understanding these other systems had been modelled on computers all along. Key to this is belief in the computer’s unrelenting progress and in doctrines such as Moore’s Law – that computing power will continue to double every two years in perpetuity. It is this incredible faith in the computer to solve the problems that man cannot, and in its ability to generate representations of the world, that informs so much of the social mythology that has allowed Systems Thinking to continue with little complaint. Indeed, the bestselling 1972 book The Limits to Growth, with its dependence on Forrester’s computer models, may have been derided by experts as reductive, uninformed and unscientifically pitched towards secular apocalypse, but that did nothing to prevent figures such as Herbert Gruhl from using it as a key hypostasis for the Degrowth Movement to shrink economies and populations, and from influencing modern Green Parties’ key beliefs. The failure of predictive climate-change models to show anything of value, and the move of the problem beyond earth to the rest of the solar system, is simply the latest in a long series of sleights of hand. Man-made climate change most likely exists, but the writers of this article thoroughly believe that no one possesses the scientific tools to measure it, because of the assumptions inherent in Systems Thinking. As a nebulous issue it is itself so ingrained with Systems Thinking assumptions about wholes, parts, feedback, equilibrium and the magic of computer modelling that the idea itself should be drastically rethought.
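Moore’s observation is conventionally stated as transistor counts doubling roughly every two years. Even stated correctly, the compounding involved helps explain the faith described above; the arithmetic below is purely illustrative:

```python
def growth_factor(years, doubling_period=2.0):
    """Capacity multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32.0 (five doublings in a decade)
print(growth_factor(40))  # 1048576.0 (over a million-fold in forty years)
```

A promise of a million-fold improvement within a working lifetime is exactly the sort of deferred appeal that makes “the computers will always be twice as good tomorrow” so hard to argue against.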
The writers feel quite sure, however, that no such rethinking will take place at present; it is far more likely that yet another child-paradigm will emerge to defer the issue, just as global cooling, global warming and other models – at least in popular myth – have replaced one another in the past without letting go of the holist Systems Thinking approach. The crux of the problem is that people, both expert and layman, give Systems Thinkers an infinite number of second chances, because the computers will always be twice as good tomorrow, and because Systems Thinking has become so embedded in so many fields that it has become a hydra. World Systems Theory, with its focus on central human states defining peripheral ones, has simply been updated in the past twenty years by the “greened” PC holism of ecological world-systems thinkers. The idea of a balance in nature may have been abandoned as romantic by some, but ecological dynamics are still seen to be present. In Social Work, Network Theory and Business Management, Systems Theory may once have seemed to make clients victims of their environment, but by centring clients as unique individual social products through the post-modern mythos, its practitioners assume that people are thereby re-empowered as active participants. Everyone, it seems, criticises the surface elements of older manifestations of Systems Thinking, but refuses to notice the “bad analogy” forming its most basic assumptions. Thus, in the post-modern order a new magical progress trajectory is formed, wherein the computer is preordained eventually to represent the entire world accurately, as soon as humanity manages to rid itself of its politically incorrect and humanist assumptions – an approach both horribly cloying and, from what has been said regarding the inability to let go of the systemic model, unworkable anyway.
With regard to technique, Systems Thinking has grown over the past thirty years to preserve itself through the introduction of elements from Chaos Theory, as what once passed for the expected measurable order of “specifics” amongst human and animal populations, the weather and the orbits of planets became less and less realistic from a systems perspective. The mathematics, the computers and the systems generated become bigger and more complex, yet no one questions the idea that the concept of the “system” itself can be transferred between phenomena at will. In the end there is only one way to preserve this absurd cosmological model from eventual downfall. This is – and we say this without reservation – the integration of everything in the world, from men to animals to cars and trees, into an electronic system. This is necessitated because the systemic rules for measuring the world are at their very basis flawed, and were never transferable outside the laboratory and nebulous theorisation in the first place. Whether this is done by technocrats to control the environment, or in some seemingly innocent attempt to improve quality of life, is unimportant. This desperation to preserve the model is comparable to the collapse of the Ptolemaic cosmos in the seventeenth century, with its frantic recourse to adding more and more crystalline spheres and epicycles in the face of the obvious irregularity of planetary orbits. However, the situation we may face when Systems Thinking attempts to undo its collapse may be a little more serious than the cosmological arguments of the Baroque era. The “system” is a key shibboleth in the lives of billions of people on this planet through the worship of the most ubiquitous form of electronic systems thinking – the internet.
The internet’s emergence – from a government tool, to a commercially available novelty, to something permeating all life as “the New Aesthetic” in the developed world over the past seven or so years, through the growth of social media and its “eruption into the physical world” – is truly staggering. It is mythologised as something omnipresent, the very glue of civilisation. The internet is no longer a separate tribe, a dual identity or a dual citizenship; it is viewed as belonging to the world as much as any other aspect of it, like air or grass or speaking English. Some might say that it has changed human ontology. As Heidegger supposed, his conception of being – that of a lived presence removed from all abstraction – would in the future dominate for good or evil, and the integration of everything into a massive System renders all things equally distant and ever-present – or so the popular mythology seems to have it. This new futurism is steeped utterly in Systems Thinking and in all the contradictions and difficulties already described. We hope that we are not alone when we knit our brows and desperately try to find the point in recent notions of all the components of a car, from the wheels up, being connected as “smart” devices to one another through a colossal “Internet of Things” (IoT); of all the paving stones and roadways being integrated and ready to fix themselves at will as part of “smart cities”; and of every surface being part of a colossal system able to act as a computer. Everything from energy companies to university degrees will be reliant upon a system of online begging in the form of crowdfunding. Large companies and central governments, the evils that they are, will collapse or become redundant, and everyone will be recognised as a unique contributor within networks of other equal and unique contributors.
There is naivety, which may to some extent be forgiven, but then there is simply having no understanding of where one’s value judgements and assumptions have come from, which is utterly concerning. Complete electronic integration of man and his “environment” will most likely be attempted during the lifetimes of the writers of this article. As you read this, please consider these last ideas not merely as part of the renewal of futurist kitsch taking place at the moment, with the new growth of faith in electronic networks as a political tool, or as childish ephemera. Instead we ask you to consider them as the result of the absurd history of Systems Thinking and of the desperate need to validate a century of computer cultism and unquestioned acceptance of the “rotten analogy” at the very base of this entire mythic paradigm.
Posted on: Mon, 01 Sep 2014 07:18:43 +0000
