"What someone is thinking can be seen from the outside" - TopicsExpress



          

"What someone is thinking can be seen from the outside"

Apologies that the link is in English, but please do follow it. The idea that we might tell what a person is thinking by evaluating which parts of the brain are activated has become quite realistic. The quality of life of people with disabilities and the elderly will probably improve considerably over the next ten years.

nature/news/brain-decoding-reading-minds-1.13989

┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏┏

Brain decoding: Reading minds

By scanning blobs of brain activity, scientists may be able to decode people's thoughts, their dreams and even their intentions.

Kerri Smith, 23 October 2013

Cracking the code: see how scientists decode vision, dreamscapes and hidden mental states from brain activity.

Jack Gallant perches on the edge of a swivel chair in his lab at the University of California, Berkeley, fixated on the screen of a computer that is trying to decode someone's thoughts.

On the left-hand side of the screen is a reel of film clips that Gallant showed to a study participant during a brain scan. On the right side of the screen, the computer program uses only the details of that scan to guess what the participant was watching at the time.

Anne Hathaway's face appears in a clip from the film Bride Wars, engaged in heated conversation with Kate Hudson. The algorithm confidently labels them with the words "woman" and "talk", in large type. Another clip appears, an underwater scene from a wildlife documentary. The program struggles, and eventually offers "whale" and "swim" in a small, tentative font.

"This is a manatee, but it doesn't know what that is," says Gallant, talking about the program as one might a recalcitrant student. They had trained the program, he explains, by showing it patterns of brain activity elicited by a range of images and film clips. His program had encountered large aquatic mammals before, but never a manatee.

Groups around the world are using techniques like these to try to decode brain scans and decipher what people are seeing, hearing and feeling, as well as what they remember or even dream about.

Media reports have suggested that such techniques bring mind-reading "from the realms of fantasy to fact", and "could influence the way we do just about everything". The Economist in London even cautioned its readers to "be afraid", and speculated on how long it will be until scientists promise telepathy through brain scans.

Although companies are starting to pursue brain decoding for a few applications, such as market research and lie detection, scientists are far more interested in using this process to learn about the brain itself. Gallant's group and others are trying to find out what underlies those different brain patterns and want to work out the codes and algorithms the brain uses to make sense of the world around it. They hope that these techniques can tell them about the basic principles governing brain organization and how it encodes memories, behaviour and emotion (see 'Decoding for dummies').

Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. "I don't do vision because it's the most interesting part of the brain," says Gallant. "I do it because it's the easiest part of the brain. It's the part of the brain I have a hope of solving before I'm dead." But in theory, he says, "you can do basically anything with this".
Beyond blobology

Brain decoding took off about a decade ago1, when neuroscientists realized that there was a lot of untapped information in the brain scans they were producing using functional magnetic resonance imaging (fMRI). That technique measures brain activity by identifying areas that are being fed oxygenated blood, which light up as coloured blobs in the scans. To analyse activity patterns, the brain is segmented into little boxes called voxels (the three-dimensional equivalent of pixels), and researchers typically look to see which voxels respond most strongly to a stimulus, such as seeing a face. By discarding data from the voxels that respond weakly, they conclude which areas are processing faces.

Decoding techniques interrogate more of the information in the brain scan. Rather than asking which brain regions respond most strongly to faces, they use both strong and weak responses to identify more subtle patterns of activity. Early studies of this sort proved, for example, that objects are encoded not just by one small very active area, but by a much more distributed array.

These recordings are fed into a "pattern classifier", a computer algorithm that learns the patterns associated with each picture or concept. Once the program has seen enough samples, it can start to deduce what the person is looking at or thinking about. This goes beyond mapping blobs in the brain. Further attention to these patterns can take researchers from asking simple "where in the brain" questions to testing hypotheses about the nature of psychological processes: asking questions, for example, about the strength and distribution of memories that have been wrangled over for years. Russell Poldrack, an fMRI specialist at the University of Texas at Austin, says that decoding allows researchers to test existing theories from psychology that predict how people's brains perform tasks. "There are lots of ways that go beyond blobology," he says.

In early studies1, 2, scientists were able to show that they could get enough information from these patterns to tell what category of object someone was looking at: scissors, bottles and shoes, for example. "We were quite surprised it worked as well as it did," says Jim Haxby at Dartmouth College in New Hampshire, who led the first decoding study in 2001.

Soon after, two other teams independently used it to confirm fundamental principles of human brain organization. It was known from studies using electrodes implanted into monkey and cat brains that many visual areas react strongly to the orientation of edges, combining them to build pictures of the world. In the human brain, these edge-loving regions are too small to be seen with conventional fMRI techniques. But by applying decoding methods to fMRI data, John-Dylan Haynes and Geraint Rees, both at the time at University College London, and Yukiyasu Kamitani at ATR Computational Neuroscience Laboratories in Kyoto, Japan, with Frank Tong, now at Vanderbilt University in Nashville, Tennessee, demonstrated in 2005 that pictures of edges also triggered very specific patterns of activity in humans3, 4. The researchers showed volunteers lines in various orientations, and the different voxel mosaics told the team which orientation the person was looking at.
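The article does not describe any group's pipeline in detail, but a minimal sketch of the kind of voxel pattern classification it refers to might look like the following Python fragment. It assumes the scans have already been reduced to a matrix with one row of voxel values per trial and a category label for each trial; the data here are synthetic stand-ins, and the choice of a logistic-regression classifier is purely illustrative rather than what Haxby, Gallant or the other groups actually used.

    # Minimal sketch of fMRI pattern classification on synthetic data.
    # Each trial is summarized as one row of voxel responses; a linear
    # classifier learns which distributed pattern goes with which category,
    # and cross-validation tests it on trials it has never seen.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 500
    categories = np.array(["face", "scissors", "bottle", "shoe"])
    labels = rng.choice(categories, size=n_trials)

    # Fake voxel responses: a weak, distributed category-specific signal plus
    # noise, mimicking information spread across many mildly responsive voxels.
    patterns = {c: rng.normal(0.0, 0.3, n_voxels) for c in categories}
    X = np.vstack([patterns[c] + rng.normal(0.0, 1.0, n_voxels) for c in labels])

    clf = LogisticRegression(max_iter=2000)
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / len(categories):.2f})")

The point is only that the decoder uses every voxel, strong or weak, rather than just the few most active ones.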
ILLUSTRATION BY PETER QUINNELL; PHOTO: KEVORK DJANSEZIAN/GETTY

Edges became complex pictures in 2008, when Gallant's team developed a decoder that could identify which of 120 pictures a subject was viewing, a much bigger challenge than inferring what general category an image belongs to, or deciphering edges. They then went a step further, developing a decoder that could produce primitive-looking movies of what the participant was viewing based on brain activity5.

From around 2006, researchers have been developing decoders for various tasks: for visual imagery, in which participants imagine a scene; for working memory, where they hold a fact or figure in mind; and for intention, often tested as the decision whether to add or subtract two numbers. The last is a harder problem than decoding the visual system, says Haynes, now at the Bernstein Centre for Computational Neuroscience in Berlin: "There are so many different intentions; how do we categorize them?" Pictures can be grouped by colour or content, but the rules that govern intentions are not as easy to establish.

Gallant's lab has preliminary indications of just how difficult it will be. Using a first-person, combat-themed video game called Counterstrike, the researchers tried to see if they could decode an intention to go left or right, chase an enemy or fire a gun. They could just about decode an intention to move around, but everything else in the fMRI data was swamped by the signal from participants' emotions when they were being fired at or killed in the game. These signals, especially death, says Gallant, overrode any fine-grained information about intention.

The same is true for dreams. Kamitani and his team published their attempts at dream decoding in Science earlier this year6. They let participants fall asleep in the scanner and then woke them periodically, asking them to recall what they had seen. The team tried first to reconstruct the actual visual information in dreams, but eventually resorted to word categories. Their program was able to predict with 60% accuracy what categories of objects, such as cars, text, men or women, featured in people's dreams.

The subjective nature of dreaming makes it a challenge to extract further information, says Kamitani. "When I think of my dream contents, I have the feeling I'm seeing something," he says. But dreams may engage more than just the brain's visual realm, and involve areas for which it's harder to build reliable models.
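The article gives no implementation details for the 120-picture result, but the identification idea can be sketched as follows: predict a voxel pattern for every candidate picture, then pick the candidate whose prediction best matches the measured pattern. In this toy Python version the "predicted" patterns are random stand-ins rather than the output of a fitted encoding model, so it only illustrates the matching step, not Gallant's actual method.

    # Toy sketch of stimulus identification: given a measured voxel pattern,
    # decide which of 120 candidate pictures it most likely came from by
    # correlating it against a predicted pattern for each candidate.
    import numpy as np

    rng = np.random.default_rng(1)
    n_candidates, n_voxels = 120, 500

    # Stand-in for an encoding model's output: one predicted voxel pattern
    # per candidate picture (in a real study these come from a fitted model).
    predicted = rng.normal(size=(n_candidates, n_voxels))

    # Simulate a measurement: the response to candidate 42, plus scanner noise.
    true_index = 42
    measured = predicted[true_index] + rng.normal(scale=2.0, size=n_voxels)

    def correlation(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    scores = [correlation(measured, p) for p in predicted]
    print("identified picture:", int(np.argmax(scores)), "| true picture:", true_index)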
Reverse engineering

Decoding relies on the fact that correlations can be established between brain activity and the outside world. And simply identifying these correlations is sufficient if all you want to do, for example, is use a signal from the brain to command a robotic hand (see Nature 497, 176-178; 2013). But Gallant and others want to do more: they want to work back to find out how the brain organizes and stores information in the first place, to crack the complex codes the brain uses.

That won't be easy, says Gallant. Each brain area takes information from a network of others and combines it, possibly changing the way it is represented. Neuroscientists must work out post hoc what kind of transformations take place at which points. Unlike other engineering projects, the brain was not put together using principles that necessarily make sense to human minds and mathematical models. "We're not designing the brain; the brain is given to us and we have to figure out how it works," says Gallant. "We don't really have any math for modelling these kinds of systems." Even if there were enough data available about the contents of each brain area, there probably would not be a ready set of equations to describe them, their relationships, and the ways they change over time.

Computational neuroscientist Nikolaus Kriegeskorte at the MRC Cognition and Brain Sciences Unit in Cambridge, UK, says that even understanding how visual information is encoded is tricky, despite the visual system being the best-understood part of the brain (see Nature 502, 156-158; 2013). "Vision is one of the hard problems of artificial intelligence. We thought it would be easier than playing chess or proving theorems," he says. But there's a lot to get to grips with: how bunches of neurons represent something like a face; how that information moves between areas in the visual system; and how the neural code representing a face changes as it does so. Building a model from the bottom up, neuron by neuron, is too complicated; "there's not enough resources or time to do it this way", says Kriegeskorte. So his team is comparing existing models of vision to brain data, to see what fits best.
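The article does not say how Kriegeskorte's team scores the fit between a model and brain data. One common way to frame such a comparison, sketched here as an assumption rather than as his actual method, is to ask whether a model and the brain treat the same stimuli as similar or different: compute a stimulus-by-stimulus dissimilarity matrix from the brain responses and from each candidate model's features, then see which model's matrix correlates best with the brain's. The data below are synthetic, with the "brain" deliberately built to resemble model A.

    # Sketch of comparing candidate vision models against brain data by asking
    # which model treats the stimuli most like the brain does (synthetic data).
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n_stimuli, n_voxels, n_features = 50, 200, 100

    # Stand-ins for two candidate models' feature vectors for the 50 stimuli.
    model_a = rng.normal(size=(n_stimuli, n_features))
    model_b = rng.normal(size=(n_stimuli, n_features))

    # Synthetic brain responses constructed to share structure with model A.
    brain = np.hstack([model_a, rng.normal(size=(n_stimuli, n_voxels))])

    def dissimilarity_matrix(responses):
        # Pairwise correlation distances between stimulus response patterns.
        return pdist(responses, metric="correlation")

    brain_rdm = dissimilarity_matrix(brain)
    for name, model in [("model A", model_a), ("model B", model_b)]:
        rho, _ = spearmanr(dissimilarity_matrix(model), brain_rdm)
        print(f"{name}: rank correlation with brain dissimilarities = {rho:.2f}")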
Real world

Devising a decoding model that can generalize across brains, and even for the same brain across time, is a complex problem. Decoders are generally built on individual brains, unless they're computing something relatively simple such as a binary choice: whether someone was looking at picture A or B. But several groups are now working on building one-size-fits-all models. "Everyone's brain is a little bit different," says Haxby, who is leading one such effort. At the moment, he says, "you just can't line up these patterns of activity well enough".

Standardization is likely to be necessary for many of the talked-about applications of brain decoding, those that would involve reading someone's hidden or unconscious thoughts. And although such applications are not yet possible, companies are taking notice. Haynes says that he was recently approached by a representative from the car company Daimler asking whether one could decode the hidden consumer preferences of test subjects for market research. In principle it could work, he says, but the current methods cannot work out which of, say, 30 different products someone likes best. Marketers, he says, should stick to what they know for now. "I'm pretty sure that with traditional market research techniques you're going to be much better off."

Companies looking to serve law enforcement have also taken notice. No Lie MRI in San Diego, California, for example, is using techniques related to decoding to claim that it can use a brain scan to distinguish a lie from a truth. Law scholar Hank Greely at Stanford University in California has written in the Oxford Handbook of Neuroethics (Oxford University Press, 2011) that the legal system could benefit from better ways of detecting lies, checking the reliability of memories, or even revealing the biases of jurors and judges. Some ethicists have argued that privacy laws should protect a person's inner thoughts and desires as private, but Julian Savulescu, a neuroethicist at the University of Oxford, UK, sees no problem in principle with deploying decoding technologies. "People have a fear of it, but if it's used in the right way it's enormously liberating." Brain data, he says, are no different from other types of evidence. "I don't see why we should privilege people's thoughts over their words," he says.

Haynes has been working on a study in which participants tour several virtual-reality houses, and then have their brains scanned while they tour another selection. Preliminary results suggest that the team can identify which houses their subjects had been to before. The implication is that such a technique might reveal whether a suspect had visited the scene of a crime before. The results are not yet published, and Haynes is quick to point out the limitations of using such a technique in law enforcement. What if a person has been in the building, but doesn't remember? Or what if they visited a week before the crime took place? Suspects may even be able to fool the scanner. "You don't know how people react with countermeasures," he says.

Other scientists also dismiss the implication that buried memories could be reliably uncovered through decoding. Apart from anything else, you need a 15-tonne, US$3-million fMRI machine and a person willing to lie very still inside it and actively think secret thoughts. Even then, says Gallant, "just because the information is in someone's head doesn't mean it's accurate". Right now, psychologists have more reliable, cheaper ways of getting at people's thoughts. "At the moment, the best way to find out what someone is going to do," says Haynes, "is to ask them."

Journal name: Nature
Volume: 502, Pages: 428-430
Date published: 24 October 2013
DOI: 10.1038/502428a

┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛┛
Posted on: Thu, 31 Oct 2013 00:10:14 +0000
