It sounds like something out of a science fiction movie, but four Japanese scientists from Kyoto University recently released a study titled “Deep image reconstruction from human brain activity.”
Similar studies have been done before, but only with basic, one-dimensional shapes. In this study, participants looked at or imagined images such as a cheetah, an owl, and stained glass. Two different kinds of AI were used: deep neural networks that try to simulate the way the brain learns, and a deep generative network. The generative network produced pixel values that were optimized to match multiple layers of human brain activity.

The neural network works from the output of an fMRI machine, which detects changes in blood flow as a proxy for electrical activity in the brain.
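At a high level, the reconstruction works by starting from a random image and repeatedly adjusting its pixels until the features a network extracts from it match the features decoded from the fMRI signal. The toy sketch below illustrates that optimization loop; a fixed random linear map stands in for the deep feature extractor, and all names, sizes, and numbers are illustrative assumptions, not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_features = 64, 32
# Stand-in "feature extractor": a fixed random linear map (the real study
# uses a deep neural network here).
W = rng.normal(size=(n_features, n_pixels))

true_image = rng.normal(size=n_pixels)   # the image the subject saw
# Pretend these feature values were decoded from brain activity.
decoded_features = W @ true_image

# Start from noise and nudge the pixels so the extracted features move
# toward the decoded ones (gradient descent on the squared error).
image = rng.normal(size=n_pixels)
initial_loss = float(np.sum((W @ image - decoded_features) ** 2))

lr = 2e-3
for _ in range(500):
    residual = W @ image - decoded_features
    image -= lr * (2 * W.T @ residual)   # gradient of the squared error

final_loss = float(np.sum((W @ image - decoded_features) ** 2))
```

After the loop, `final_loss` is far smaller than `initial_loss`: the optimized image now "looks like" the decoded brain activity as seen through the feature extractor, which is the core idea behind the blurry-but-recognizable reconstructions the study reports.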

What resulted was, to these scientists, a success. Visual information in the brain can be decoded to show not just images, but our perception of them. Some reconstructions resembled the owl participants looked at; others resembled the cheetah they were asked to imagine. The AI's visualization of something like a cheetah may be blurry, but details like the expression of the eyes reflected the human thought behind the image.

What could this mean?

In his book "1984," George Orwell may have made the first and most memorable observation that in the future, our thoughts would be neither private nor our own.

In "1984," the protagonist, Winston, has to be very careful in terms of what he thinks, as there are “thought police” in Big Brother's employ. The picture he paints of people being persecuted for “thought crime” is nothing less than terrifying.

As of now, to our knowledge, this is as far as AI can read our thoughts. However, with a plethora of research being done and “chipping” possibly becoming mainstream in the near future, you have to wonder how far away we could be from Orwell's world of thought police.

Other movies and TV shows have touched on the horrific side of what could happen if thoughts and memories could be replayed, even sold or altered. These scientists may have good intentions to learn more about human thought and evolve AI, but sadly, the world we live in would not likely use this for good. Being able to communicate with someone in a coma would be comforting for the families of unconscious loved ones, but would it help them regain their consciousness?

This study puts forth an ethical question that many have tackled but none have answered satisfactorily: how far is it safe to go with our creation of artificial intelligence? Major figures in science and technology like Elon Musk and Stephen Hawking already think we've gone too far. Do you think up-and-coming technology could infringe upon human rights?