Read at: http://www.atelier.net/en/trends/articles/will-computers-soon-be-able-reconstruct-human-thought_423451
A group of Dutch researchers has successfully met the challenge of teaching a computer to recognise the letters a person is reading. So could a computer one day go a step further and actually decipher human thought?
Magnetic resonance imaging (MRI) has up to now been used in cognition research primarily to determine which brain areas are active while test subjects perform a specific task. But might it be possible to use this technology to partly reconstruct a person's thoughts? A research group at the Donders Institute for Brain, Cognition and Behaviour at Radboud University in Nijmegen, the Netherlands, has successfully combined data from an MRI scanner with a mathematical model to determine which letter of the alphabet a test subject is looking at.
Prior knowledge improves performance
The researchers ‘taught’ the scanner what the letters of the alphabet look like. “This improved its recognition of the letters enormously,” stressed Donders Institute Assistant Professor Marcel van Gerven. The researchers then taught a model how small volumes of 2x2x2 mm from the brain scans – known as voxels – correspond to individual pixels. By combining all the information about the pixels from the voxels, it became possible to reconstruct the image being viewed by the subject. “Our approach is similar to how we believe the brain itself combines prior knowledge with sensory information,” explained van Gerven, continuing: “And this is exactly what we’re looking for: models that show what is happening in the brain in a realistic fashion.” In the future the team’s research will focus on applying the models to working human memory or to subjective experiences such as dreams and visualisations. The reconstructions will serve to indicate how far the models created by the academic researchers correspond to reality.
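The general idea described here – a linear voxel-to-pixel model whose output is combined with prior knowledge of letter shapes – can be illustrated with a small toy sketch. Everything below is an assumption for illustration only: the dimensions, the random linear forward model, the crude T/L templates, and the prior strength `lam` are all invented, not the Donders group's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 4x4 "letter" image (16 pixels) measured by 30 voxels.
n_pixels, n_voxels = 16, 30

# Hypothetical linear forward model: each voxel responds linearly to the
# pixels, plus noise (a stand-in for the fMRI measurement).
W = rng.normal(size=(n_voxels, n_pixels))

def scan(image, noise=0.1):
    return W @ image + noise * rng.normal(size=n_voxels)

# Crude letter templates standing in for the learned letter shapes.
T = np.array([1,1,1,1, 0,1,1,0, 0,1,1,0, 0,1,1,0], float)
L = np.array([1,0,0,0, 1,0,0,0, 1,0,0,0, 1,1,1,1], float)

# MAP-style reconstruction: balance the voxel evidence (W x ~ y) against
# a prior pulling the image toward the average letter shape.
prior_mean = (T + L) / 2
lam = 1.0  # prior strength (assumed)

def reconstruct(y):
    A = W.T @ W + lam * np.eye(n_pixels)
    b = W.T @ y + lam * prior_mean
    return np.linalg.solve(A, b)

x_hat = reconstruct(scan(T))
# The reconstruction should sit closer to the true letter T than to L.
print(np.sum((x_hat - T) ** 2) < np.sum((x_hat - L) ** 2))
```

The `lam` term plays the role of the "prior knowledge" in the article: with more measurement noise or fewer voxels, the prior contributes more to the reconstruction, which is one plausible reading of why teaching the model the letter shapes "improved its recognition enormously."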
Applying the research to more detailed, complex images
For the moment the result obtained by the letter reconstruction is not a clear image, but a somewhat fuzzy, speckled pattern. “In our further research we will be working with a more powerful MRI scanner,” reveals Sanne Schoenmakers, a PhD student at the University who is working on a thesis about decoding thoughts, explaining: “Due to the higher resolution of the scanner we hope to be able to link the model to more detailed images.” Specifically, the system, which currently links images of letters to 1,200 voxels, should then be able to link images of faces to 15,000 voxels.