Wonderful article in this week's New England Journal on risk perception and medical decisions!

MEDICINE AND SOCIETY
Invisible Risks, Emotional Choices — Mammography and Medical Decision Making
Lisa Rosenbaum, M.D.
N Engl J Med 371:1549-1552 | October 16, 2014

A child's risk of getting cancer from asbestos insulation in a school building is about one third the chance of being struck by lightning. Nevertheless, in 1993, frightened New York City parents agitated for asbestos removal from schools. As often occurs, public fear trumped expert risk assessment; the parents' demands were met, the victory was celebrated, but then the celebration crashed. It turned out that removing the asbestos would mean closing the schools for weeks, disrupting parents' lives. "As the costs of the removal came on-screen," writes behavioral economist Cass Sunstein, "parents thought much more like experts, and the risks of asbestos seemed tolerable: statistically small, and on balance worth incurring."1 It is partly because our perceptions of risk are so influenced by our changeable emotions that we turn to experts to perform cost–benefit analyses. From environmental regulations to nuclear energy, such expert assessments inform policies meant to improve public health and welfare. We would not ask airline passengers to create standards for aviation safety or car owners to optimize fuel-emission standards, and in medicine, too, we still depend on expert-generated guidelines. Increasingly, however, in this era of patient-centered care and shared decision making, those guidelines emphasize the role that patient preference should play in the weighing of risk and benefit for any given evidence-based recommendation. This approach, with virtue on its side, is driven by the aspiration that we can, with the proper tools, empower patients to think like experts. But can we?
Many medical decisions involve considerable uncertainty and complex tradeoffs, but none seem to highlight the tension between emotions and risk assessment more than mammography screening. Although the U.S. Preventive Services Task Force (USPSTF) recommended in 2009 that women under 50 years of age not undergo routine mammography screening, and that those between 50 and 75 years of age be screened less frequently, screening rates have apparently held steady or perhaps even increased. There are many possible reasons for this trend: physicians' habits, conflicting guidelines, medicolegal concerns, radiologists' preference for the status quo, and the mandating of screening coverage for women of all ages in the Affordable Care Act. But I suspect that the trends also reflect both the powerful role that emotions play in reinforcing women's commitment to screening and the challenge of communicating the potential harms of mammography. Consider a discussion with a 45-year-old woman with no family history of breast cancer about the most likely harm of screening: a false positive result. Maybe you say, "For someone like you, there is around a 50% chance that if you have regular screening over the next 10 years, you will have a false positive result. That could lead to repeat testing, potentially including a biopsy, and lots of worry and anxiety."2 But though doctors striving to reduce unnecessary testing tend to emphasize the psychological stress involved, this possibility does not seem to loom large for women facing this decision. In one survey, for instance, although more than 99% of women were aware that mammography can produce false positive results, only 38% believed that this possibility ought to be considered in decisions about having the test.
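The "around a 50% chance over 10 years" figure falls out of compounding a modest per-screen false positive rate across repeated screens. A minimal sketch of that arithmetic (the 7% per-screen rate is an illustrative assumption chosen to roughly reproduce the decade-level figure, not a number from the article):

```python
# Chance of at least one false positive across repeated, independent screens:
# P(>=1 false positive) = 1 - P(no false positive on any screen).
# The 7% per-screen rate is an illustrative assumption.
def cumulative_false_positive(per_screen_rate: float, n_screens: int) -> float:
    """Probability of at least one false positive over n_screens screens."""
    return 1.0 - (1.0 - per_screen_rate) ** n_screens

if __name__ == "__main__":
    risk = cumulative_false_positive(0.07, 10)
    print(f"10 annual screens at 7% each: {risk:.1%}")  # roughly one in two
```

Even a per-screen rate that sounds small accumulates quickly, which is why the 10-year framing lands so differently from the per-test framing.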
In addition, among respondents who had had a false positive result, more than 90% still believed that mammography could not harm a woman who ended up not having breast cancer.3 In another study assessing attitudes about various types of cancer screening, respondents who had had a false positive result described the experience as “very scary” or the “scariest time of my life,” but 98% of them were glad to have had the initial test.4 Perhaps these results reflect the likelihood that, when facing tough tradeoffs, we anticipate and try to avoid regret, rather than anxiety. Despite the demonstrable harms on the population level, cancer screening rarely begets regret for the individual. As Ransohoff and colleagues have written about the persistence of prostate-cancer screening, “the screening process is one without negative feedback. A negative test provides reassurance. A positive one is accompanied by gratitude that disease was caught early. And a false positive test, regardless of the distress it may cause, is nevertheless followed by relief that no cancer was ultimately found.”5 So women who have had false positive mammograms may spend the rest of their lives worrying that they are at heightened risk for breast cancer. But they are not left with regret about having had the test in the first place. What about the risk of overdiagnosis — being diagnosed with and treated for a tumor that would never have become clinically significant? The potential toxic effects of treatments, ranging from chemotherapy and radiation to lumpectomy and mastectomy, make overdiagnosis the greatest potential harm of mammography screening. Though overdiagnosis has been notoriously difficult to quantify, a recent analysis of data on mammography screening over the past 30 years suggests that of all breast cancers diagnosed, 22 to 31% are overdiagnosed.6 Nevertheless, there are few risks of this magnitude that are more “off-screen” than overdiagnosis. 
The first challenge in conveying this risk to women is that many are simply unaware that overdiagnosis occurs. One survey showed that only 7% of women believed that there could be tumors that grow so slowly that an affected woman would need no treatment; another study showed that women found the concept confusing even after a brief educational intervention. After being educated, women thought the information should be considered in decision making, but most believed it would not affect their own intent to be screened.3,7 This disconnect between awareness and intent speaks to the fundamental challenge of conveying the potential harms of mammography screening. That is: we do not think risk; we feel it. As research on risk perception has shown, we are often guided by intuition and affect.8 For example, when our general impressions of a technology are positive, we tend to assume that its benefits are high and its risks are low. We estimate our personal risks of disease not on the basis of algorithms and risk calculators, but rather according to how similar we are, in ways we can observe, to people we know who have the disease. And when we fear something, we are far more sensitive to the mere possibility of its occurrence than its actual probability. That may be why overdiagnosis does not resonate emotionally. We do not see women walking around with “an overdiagnosis.” Instead, we see breast-cancer survivors. We do not hear people complaining about having endured radiation, chemotherapy, and a lumpectomy. What we hear instead is, “Thank goodness I had a mammogram and caught it early.” Our relatives do not eye us critically when we get a mammogram that reveals a nascent tumor. But people shake their heads and say, “I wish she had taken better care of herself,” when we are diagnosed after not having been screened. Thus, we can be educated about overdiagnosis. 
We can refine our estimates about its likelihood and incorporate them into our recommendations, as the USPSTF did in 2009. But it is hard to summon fear of a risk that remains invisible. So how do we balance the goal of engaging women in decision making with the reality that emotions play a powerful role in shaping our understanding of benefit and risk? Some experts emphasize the need to address sources of misperception that inform beliefs far outside clinical encounters. Researchers at Dartmouth, for example, have described the misleading nature of various screening-advocacy campaigns. One advertisement by the Komen Foundation, for instance, features a photo of a beautiful young woman, with a caption reading, "The 5-year survival rate for breast cancer when caught early is 98%. When it's not? 23%."9 Though 5-year survival rates, because of lead-time bias and overdiagnosis, do not actually tell you whether the test saves lives, the visceral appeal of "catching something early" easily eclipses the difficult mental calculations one must undertake to figure out why early detection does not necessarily mean living longer. The problem is that once impressions have formed, whatever their source, educational efforts to address misperceptions often fail and can even backfire. In a recent randomized trial evaluating approaches to vaccine education, for example, researchers found that, among parents least likely to vaccinate their children, exposure to information emphasizing that there is no link between vaccines and autism mitigated misperceptions but nevertheless further reduced their intention to vaccinate.10 Indeed, the fact that sound scientific information that challenges beliefs can simply intensify those beliefs has been recognized by cognitive psychologists for decades.
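The lead-time bias behind those 5-year survival figures can be made concrete with a toy calculation: moving the diagnosis date earlier inflates measured survival even when the date of death does not change at all. A sketch under invented numbers (all ages are hypothetical, not drawn from the article or its references):

```python
# Toy illustration of lead-time bias. Screening advances the diagnosis date;
# for a tumor whose course treatment does not alter, the death date is unchanged,
# yet the patient flips from "non-survivor" to "5-year survivor" in the statistics.
# All ages here are hypothetical.
def five_year_survivor(age_at_diagnosis: int, age_at_death: int) -> bool:
    """Counted as a 5-year survivor if death occurs 5 or more years after diagnosis."""
    return age_at_death - age_at_diagnosis >= 5

AGE_AT_DEATH = 70  # unchanged by when the tumor happens to be found

without_screening = five_year_survivor(age_at_diagnosis=67, age_at_death=AGE_AT_DEATH)
with_screening = five_year_survivor(age_at_diagnosis=62, age_at_death=AGE_AT_DEATH)

print(without_screening)  # False: 3 years from diagnosis to death
print(with_screening)     # True: 8 years from diagnosis to death, same lifespan
```

The same life, measured two ways: survival-from-diagnosis improves, mortality does not. That is why a higher 5-year survival rate among screened patients cannot, by itself, show that the test saves lives.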
What was more disappointing in this study was that more creative attempts to engage parents emotionally, such as using images or narratives of children dying of measles, not only failed to increase vaccination intent but also cemented some parents' conviction that there is a link between vaccines and autism. If there is tension between belief and sound medical information regarding vaccines, for which the benefits so clearly outweigh the risks, the tension is only heightened for decisions with more complex tradeoffs. The vaccine study thus raises two key challenges for the profession. The first is empirical. As the locus of decision making shifts toward the patient, this study reminds us how little we know about how beliefs inform interpretation of medical evidence — or about how to negotiate those beliefs in pursuit of better health. Closing this empirical gap is daunting. Not only does each person have his or her own belief system, but the particular beliefs that are relevant for a decision regarding, say, elective percutaneous coronary intervention or palliative chemotherapy may be quite different from those relevant to childhood vaccination or mammography screening. Moreover, even though it is more practical and financially feasible to conduct a study that looks at how interventions affect knowledge and intent, what we really need are long-term studies of how new approaches to sharing information affect downstream behaviors and outcomes. Which brings us to the second challenge, more ethical than empirical: How do we balance the need to honor preferences and values with the imperative to translate our evidence base into better population health? Our current default, particularly since medical recommendations are increasingly debated publicly, is to emphasize that decisions are "personal." After the 2009 guidelines were published, the Obama administration and many physician leaders were all over the news reminding us of the importance of personal preferences.
But even as more data accrue, including a recent review suggesting that the harms of mammography are greater than we once thought and the benefits fewer,11 the message we hear is not "Let's do fewer mammograms." Rather, it is "Let's honor patients' preferences." Though we certainly need to be sensitive to patients' values, it is often hard to distinguish values from an emotional understanding of risk. Consider the decision to initiate statin therapy for primary prevention of cardiovascular disease. One patient, an avid tennis player, may recognize the potential for improved cardiovascular health but feel that the prospect of myalgias simply outweighs any potential benefit. That is a preference. Another patient hates drug companies and therefore believes that statins must lack cardiovascular benefit and be highly likely to cause myalgias and liver disease. That is an emotional understanding of risk. Both patients arrive at the same choice, but should we really celebrate them as equally informed decisions? The tangled nature of emotions and values is particularly relevant to mammography screening, as evidenced in qualitative research done since the 2009 guidelines were released. One study explored the beliefs and attitudes of an ethnically diverse sample of women in their 40s. Though many were unaware of the guidelines, the researchers found that educating them about the new recommendations strengthened rather than diminished their commitment to screening. Women also expressed fears that the guidelines were an attempt by insurers to save money and keep them from getting the care they needed. Many women, expressing their abiding conviction that mammograms save lives, said they would have "no use" for a decision aid and viewed the weighing of benefits and harms as "irrelevant." In fact, many said they wanted to be screened more than once a year and beginning before the age of 40 years.
Finally, many believed that it was unjust that laywomen had been left out of the guideline-development process and the weighing of potential benefits and harms that it entailed.12 Such responses echo a broader debate among leading scholars of risk perception about whom we should rely on to evaluate risk. Some, such as Sunstein,1 recognizing our general difficulties in thinking about probabilities, argue that this task ought to be left to experts who can create policies to maximize public welfare. But the psychologist Paul Slovic has argued that the very concept of risk is subjective. Whereas experts tend to conceive of risk as "synonymous with expected annual mortality," Slovic reminds us that riskiness means more to people than mortality rates.13 Undoubtedly, the recognition of the affective nature of risk perception is critical to the physician's role in helping patients live longer, higher-quality lives. But even if we can, in some general way, address misleading statistics that drive inflated perceptions of the benefits of mammography, what do we do about the 38-year-old woman who insists on annual screening because she just lost her best friend to breast cancer? Or the 43-year-old with fibrocystic breasts who last year had a false positive mammogram and is now convinced her risk is even higher? Is there some hierarchy of emotional reasoning dictating that certain causes of heightened fears are more acceptable than others? Or because we know it is often impossible to tease out sources of belief, much less rank them, is a better approach the more paternalistic one: definitive guidelines on which physicians base their recommendations, with less emphasis on the role that patient preference ought to play? One of the hallmarks of heuristic reasoning, as emphasized by Daniel Kahneman,14 is that faced with a hard question, we answer an easier one instead. In some sense, then, as a profession, we have fallen into a collective heuristic trap.
Rather than confront these thorny ethical questions head on, we have answered an easier question: Should we respect patients' values and preferences? The right answer will always be yes. The much harder question is how to balance that respect with our professional responsibility to use our expertise to translate clinical science into better population health. Defaulting to patient preference in the face of uncertainty has become the moral high ground. But it is as much our job to figure out how to best help our patients lead healthier lives as it is to honor their preferences. No matter how well we can define the tradeoffs of a medical decision, the threshold at which we decide that benefits outweigh harms is as subjective as individual patients' perceptions of those tradeoffs. But this recognition does not stop us from making rigorous attempts to quantify the tradeoffs, any more than it should stop us from trying to better understand how our patients' feelings and beliefs inform their understanding of those numbers, consequent behaviors, and health outcomes. As Slovic has emphasized, experts' efforts to communicate risk will fail in the absence of a structured two-way process. "Each side, expert and public, has something valid to contribute," he notes. "Each side must respect the insights and intelligence of the other."13

Disclosure forms provided by the author are available with the full text of this article at NEJM.org.

References
1. Sunstein C. Laws of fear: beyond the precautionary principle. Cambridge, United Kingdom: Cambridge University Press, 2005.
2. Welch HG, Passow HJ. Quantifying the benefits and harms of screening mammography. JAMA Intern Med 2014;174:448-454.
3. Schwartz LM, Woloshin S, Sox HC, Fischhoff B, Welch HG. US women's attitudes to false positive mammography results and detection of ductal carcinoma in situ: cross sectional survey. BMJ 2000;320:1635-1640.
4. Schwartz LM, Woloshin S, Fowler FJ, Welch HG. Enthusiasm for cancer screening in the United States. JAMA 2004;291:71-78.
5. Ransohoff DF, McNaughton Collins M, Fowler FJ. Why is prostate cancer screening so common when the evidence is so uncertain? A system without negative feedback. Am J Med 2002;113:663-667.
6. Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med 2012;367:1998-2005.
7. Waller J, Douglas E, Whitaker KL, Wardle J. Women's responses to information about overdiagnosis in the UK breast cancer screening programme: a qualitative study. BMJ Open 2013;3:e002703.
8. Slovic P, Finucane M, Peters E, MacGregor DC. The affect heuristic. In: Gilovich T, Griffin D, Kahneman D, eds. Heuristics and biases: the psychology of intuitive judgment. New York: Cambridge University Press, 2002:297-420 (faculty.psy.ohio-state.edu/peters/lab/pubs/publications/2002_Slovic_etal_Affect_Heuristic.pdf).
9. Woloshin S, Schwartz LM. How a charity oversells mammography. BMJ 2012;345:e5132.
10. Nyhan B, Reifler J, Richey S, Freed GL. Effective messages in vaccine promotion: a randomized trial. Pediatrics 2014;133:e835-e842.
11. Pace LE, Keating NL. A systematic assessment of benefits and risks to guide breast cancer screening decisions. JAMA 2014;311:1327-1335.
12. Allen JD, Bluethmann SM, Sheets M, et al. Women's responses to changes in U.S. preventive task force's mammography screening guidelines: results of focus groups with ethnically diverse women. BMC Public Health 2013;13:1169.
13. Slovic P. Perception of risk. Science 1987;236:280-285.
14. Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux, 2011.

Source Information
Dr. Rosenbaum is a national correspondent for the Journal.
Posted on: Sat, 18 Oct 2014 13:16:44 +0000
