The first mind-reading machine?
Imagine a machine that, when you switch it on, starts to record what you are thinking. Would you believe me if I told you that one already exists?
Well, Microsoft's research division is working on just that: reading your mind. And they are making great progress!
The purpose of their new technology is not to find out your deepest thoughts - for now ;) - but to solve a problem in web search, specifically searching for images on the web.
For example, to search for an image of a car, search engines must rely on "tags": short snippets of text associated with the image. Computers can't (yet) tell an image of a car from an image of a plane on their own, so text must be used.
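To make this concrete, here is a tiny sketch of tag-based search in Python; the file names and tags are made up for illustration:

```python
# Tag-based image search: images are found by matching a query word
# against hand-written text tags, never by analyzing the pixels.
images = [
    {"file": "img001.jpg", "tags": ["car", "red", "street"]},
    {"file": "img002.jpg", "tags": ["plane", "sky"]},
]

def search(query, images):
    """Return the files whose tag list contains the query word."""
    return [img["file"] for img in images if query in img["tags"]]

print(search("car", images))  # -> ['img001.jpg']
```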
That means people must tag images by hand, and with millions of images to process, this becomes an enormous task.
To solve this problem, the research team is developing a way to tag images automatically by reading people's brain scans while they look at images. The viewers did not even have to think about tagging the image; they merely had to observe it passively.
The technique uses an electroencephalograph (EEG): a cap with electrodes placed at standard locations on the scalp, each measuring brain activity in its local area.
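In code, a single EEG recording is essentially a grid of numbers: one voltage reading per electrode per time step. The channel count and sampling rate below are assumptions for illustration, not the study's actual setup:

```python
import numpy as np

# Hypothetical recording: 32 electrodes sampled at 256 Hz for one
# second gives a (channels, samples) array of voltage readings.
n_channels, sample_rate, duration_s = 32, 256, 1.0
eeg = np.zeros((n_channels, int(sample_rate * duration_s)))
# eeg[i, t] is the voltage measured by electrode i at time step t.
```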
The brain reacts differently when a person views different kinds of stimuli. For example, the following diagram shows the average brain response when the user views a picture of a face and a picture of a non-face (i.e. anything else). Red areas show high activity and blue areas show low activity. Each line on the graph corresponds to the activity levels recorded by a single electrode.
As can be seen, the brain's response is not static but varies over time. However, the graphs show that this changing response is predictable based on the kind of stimulus shown.
The researchers used a machine learning algorithm called Regularized Linear Discriminant Analysis (RLDA) to develop an image tagging system. They recruited several test users and presented each of them with a series of images, taking an EEG reading of the user's brain upon presentation of each image. Users were not required to think about tagging the image or about what kind of image it was. As is common in psychological experiments, they were given a distractor task: a task that ensures they are paying attention to the images but does not specifically relate to the experiment. In this case, they were asked to remember the images so they could identify them later in a post-experiment test.

The RLDA algorithm could then take the associated pairs - image and EEG scan - as input and learn to recognize which kinds of EEGs were associated with which kinds of images. This learning is often termed building a learned model, which represents everything the artificial intelligence knows.
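As a rough sketch of what that training step might look like, here is scikit-learn's shrinkage-based linear discriminant analysis, a common regularized-LDA implementation and a plausible stand-in for the RLDA the researchers used; the data is synthetic and merely stands in for the real EEG/label pairs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-ins for the experiment's data: each row is one
# flattened (and, for brevity, downsampled) EEG recording, and each
# label says what kind of image was shown (1 = face, 0 = non-face).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))    # 200 recordings, 512 features each
y = rng.integers(0, 2, size=200)   # 200 image labels

# Shrinkage-regularized LDA: the "lsqr" solver with automatic
# shrinkage is scikit-learn's regularized variant of LDA.
model = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
model.fit(X, y)  # the resulting "learned model"
```

The regularization (shrinkage) matters here because an EEG recording typically has far more features than there are training examples, which makes plain LDA's covariance estimate unstable.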
The system can then be used to automatically tag images: a user wearing an EEG cap is shown an image whose tags are unknown, and the system uses the learned model to predict from the EEG what the appropriate tag would be (e.g. this is a face, or this is not a face).
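Continuing the sketch above, tagging a new image then amounts to a single prediction call:

```python
# Record an EEG while the user views an untagged image, then ask
# the learned model for the tag (reusing rng and model from above).
new_eeg = rng.normal(size=(1, 512))  # one unseen recording
tag = model.predict(new_eeg)[0]
print("face" if tag == 1 else "non-face")
```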
The users only had to view each image for a short period for the system to work, which means that many images could be labeled quickly by presenting them in rapid succession. The researchers experimented with different durations and found no difference between allowing the subject to view the image for 500 ms, 750 ms, or a full second.
Results
The following graph shows how accurate the system was at assigning the correct tag. The vertical axis shows “classification accuracy,” the percentage of trials in which the system assigned the correct tag. The different lines show the accuracy on different kinds of tagging tasks, e.g. tagging faces versus inanimate objects, faces versus animals, etc. The horizontal axis shows “number of presentations,” how many times a user was shown each image (EEG readings are noisy, so multiple readings for the same image made the system more reliable).
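As a sketch of that averaging idea (my assumption about the mechanism, not necessarily the paper's exact procedure), reusing the model from the earlier sketch:

```python
import numpy as np

# Single EEG readings are noisy, so one simple remedy is to average
# the recordings from repeated presentations of the same image
# before classifying it.
def classify_with_repeats(model, repeated_eegs):
    """repeated_eegs: (n_presentations, n_features) array for one image."""
    averaged = np.mean(repeated_eegs, axis=0, keepdims=True)
    return model.predict(averaged)[0]
```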
Conclusion
So we are not quite there yet, but these are important first steps toward a real mind-reading machine.
It is interesting to note how different images have different effects on the mind, and how a brief exposure is enough for the brain to register the image and react in a consistent manner.
Will we one day be able to build a machine that can record all our thoughts accurately? Maybe even replay them later, like TiVo! That would open up great possibilities in many areas, but it could also become a breach of our privacy.
This reminds me of the movie "Batman Forever". The Riddler: "What's on all our minds? Brain waves...." Bruce Wayne: "Manipulating brain waves, that just raises too many questions."
Sincerely,
Dino Ruales