Keeping it real, 28.11.2013: Placebos 2



          

Last week we looked at what a placebo is, and why it is important. We concluded that the effect is real, and is utilised in many ways by many practitioners, but generally steps are taken to ensure that the placebo effect is incidental to the genuine therapeutic effect. In medications, this is done by controlled studies. Let's look at these studies and their implications for other practice.

To test how effective a pill is, it must first be given to someone with the targeted health problem. The placebo effect, or expectancy, will generally ensure that there is a result in a number of subjects. To ensure that the effect is due to the substance in the pill, a control is used: that is, a neutral pill with no active ingredient. To ensure that the placebo effect does not influence the subjects' reactions, they are not told which pill is the active one and which the neutral one. The real pill should then produce a greater effect than the control, regardless of the expectancies of the subjects.

However, the person administering the pill, whether doctor or researcher, also has expectancies, and these have been shown to affect the outcome, somehow influencing the patient, whether consciously or not. Therefore the therapist must also be unaware which of the pills is the active one and which the placebo. The pills must be randomly assigned, with an identifying protocol, by an independent body or process. This is your basic randomised controlled test: double blind, with randomly assigned subjects and therapists. This model is the basis for most testing, whether it's for medicine or other treatments: acupuncture for pain or smoking, cognitive behavioural therapy for depression, flu shots, and so on. In the process, adverse effects are also tested for; again, the double-blind, randomly assigned controlled structure should give a good idea of whether these are due to the active ingredient or to something else going on, such as the natural progress of the problem.

However, rarely will all of the active recipients show an improvement, and neither will all of the control group show no improvement. A good result will merely show that more of the subjects given the active ingredient improve than of those given the placebo. So is this a result? Not necessarily. There is always a chance that the result is pure coincidence: that those given the active ingredient (the experimental group) just got better, regardless of the pill. This certainly happens. So you need a minimum number of people in both groups to reduce this chance effect. Even better, you replicate the test with a different cohort of subjects; getting a similar result indicates a much higher certainty that the result is valid.

While this is the gold standard, some treatment tests are much more complicated. If you are testing a treatment for depression, or schizophrenia, for example, then what are you measuring? That is, how do you objectively define improvement? This becomes a complex issue in terms of what constitutes the problem: often there is a syndrome, with more than one factor involved, and it may be necessary to monitor various symptoms of the problem to get a valid measure of improvement. This makes the result a lot more complicated to obtain.
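To make that structure a bit more concrete, here is a minimal sketch in Python of how a double-blind allocation might be set up and then compared. It is an illustration only, not any real trial protocol: the function names (blind_allocation, unblind_and_compare), the PILL-XXXX coded labels and the idea of a simple improved/not-improved flag per subject are all assumptions invented for this example.

```python
# Minimal sketch of a double-blind, randomly assigned controlled test.
# Labels, group sizes and outcomes are invented purely for illustration.
import random

def blind_allocation(subject_ids, seed=None):
    """An independent process splits subjects between the active pill and
    the placebo, hiding the groups behind coded labels. Neither subjects
    nor therapists see the code until all outcomes are recorded."""
    rng = random.Random(seed)
    groups = (["active", "placebo"] * ((len(subject_ids) + 1) // 2))[:len(subject_ids)]
    rng.shuffle(groups)
    code = {f"PILL-{i:04d}": group for i, group in enumerate(groups)}
    allocation = dict(zip(subject_ids, code))   # subject -> coded label only
    return allocation, code                     # the code stays with the independent body

def unblind_and_compare(outcomes, allocation, code):
    """After the trial, reveal the code and compare improvement rates."""
    improved = {"active": 0, "placebo": 0}
    total = {"active": 0, "placebo": 0}
    for subject, got_better in outcomes.items():   # outcomes: subject -> True/False
        group = code[allocation[subject]]
        improved[group] += int(got_better)
        total[group] += 1
    return {g: improved[g] / total[g] for g in total if total[g]}
```

The point of keeping the two functions separate is the blinding: everyone running the trial only ever handles the coded labels, and the comparison at the end, once the code is revealed, is what the rest of this post is about.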
You need more subjects (people in the test), for example, and keeping track of improvements and side effects, and measuring them against each other, becomes a highly complex process, one that was not possible or practical before computers came along. It also needs another science to drive it: statistical research methods. This is an arcane branch of mathematics that ideally defines how you do your randomised controlled test, although the analysis can be cobbled together to fit any design post hoc. Depending on the variables measured before the test and the outcomes, the statistics will tell you how many subjects you need, depending on the level of certainty you are setting for the outcome (there is no such thing as absolute certainty in science; no matter how many times you observe something, the next outcome may be different: see Derren Brown's excellent show on racecourse punting for that, occasionally repeated on SBS). Generally, an acceptable error rate is one in twenty; however, statistical theory is moving towards error ranges rather than rates, which we won't get into here. (For the curious, there is a rough sketch of this kind of sample-size reasoning at the end of the post.)

Perhaps this is why people give up on the science of testing and rely on their emotions or gut instinct, often saying that statistics lie, or that you can't trust the scientists because they will tell you what they are paid to say. There is some justification in this, to an extent. In fact, the statistics don't lie; it's just that you have to understand what they say, and people are generally not good at interpreting them (this includes some people in research). And researchers don't often tell you what they are paid to say, although I have seen some dodgy papers that have been influential in their fields, with highly respected authors involved. And there is no doubt that people and companies with a vested interest will muddy the statistics to make their point or product look good. Generally, however, the research is independent and reliable insofar as it goes. Most of the problems come from extrapolating from the data, which is inevitable given the ethical minefield of experimenting on human subjects; it's much easier to get an ethics clearance when you are using white mice.

Stay with us, there is a point coming up... Next week: how does this affect us in daily life, and why should everyone have basic statistics and critical thinking taught at school?
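PS, for the statistically inclined: here is the promised rough sketch of the "how many subjects do I need?" question, done by brute-force simulation rather than formal theory. Everything in it is an assumption chosen purely for illustration: the made-up improvement rates (30% on the placebo, 50% on the active pill), the function names, and the use of the one-in-twenty error rate as the bar to clear.

```python
# Rough sketch: how many subjects per group before a real difference
# reliably clears the "one in twenty" (p < 0.05) error rate?
# The assumed improvement rates are invented for illustration only.
import random

def permutation_p_value(improved_a, n_a, improved_b, n_b, rng, reps=1000):
    """How often does shuffling the group labels alone produce a difference
    in improvement rates at least as large as the one observed?"""
    observed = improved_a / n_a - improved_b / n_b
    pool = [1] * (improved_a + improved_b) + [0] * (n_a + n_b - improved_a - improved_b)
    at_least_as_big = 0
    for _ in range(reps):
        rng.shuffle(pool)
        diff = sum(pool[:n_a]) / n_a - sum(pool[n_a:]) / n_b
        if abs(diff) >= abs(observed):
            at_least_as_big += 1
    return at_least_as_big / reps

def chance_of_a_clear_result(n_per_group, p_active=0.5, p_placebo=0.3,
                             trials=200, seed=1):
    """Simulate many trials of this size and report the fraction that reach
    p < 0.05 when the active pill really does work better than the placebo."""
    rng = random.Random(seed)
    clear = 0
    for _ in range(trials):
        a = sum(rng.random() < p_active for _ in range(n_per_group))
        b = sum(rng.random() < p_placebo for _ in range(n_per_group))
        if permutation_p_value(a, n_per_group, b, n_per_group, rng) < 0.05:
            clear += 1
    return clear / trials

# With these made-up rates, 20 subjects per group clears the bar less than
# half the time, while around 50 per group does so most of the time.
print(chance_of_a_clear_result(20), chance_of_a_clear_result(50))
```

The statistics proper do this with formulas rather than brute force, but the logic is the same: decide on the error rate you can live with first, then work out how many subjects that demands before the trial starts.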
