IEEE Digital Privacy Podcast: Episode 16

 

When Machines Listen to Your Thoughts: The Rise of Mental Privacy Risks
A Conversation with Dr. Gurvirender Tejay

Hofstra University

Listen to Episode 16 (MP3, 16 MB)

 

Part of the IEEE Digital Privacy Podcast Series

 

Episode Transcript:

Gurvirender Tejay: Who owns your thoughts? Ideally, it should be you. But imagine thinking a private thought and a computer can figure out what it is. Not just the idea, but full sentences without you saying a word. It sounds like science fiction, but it’s real. Researchers in the US and Japan are using AI and brain imaging to decode thoughts. The last truly private place, your own mind, where your thoughts have always been yours, could soon be accessible to machines.

Nick Napp: We heard those words from Gurvirender Tejay, co-director of the Cybersecurity Innovation and Research Center at Hofstra University, joining us from Hempstead, New York. And today Tejay is gonna help us understand how AI, artificial intelligence, and neurotechnology connect.

We’re also going to discuss how AI is being used to decode human thoughts from brain activity and the remarkable potential that this holds for medicine. But as we understand the opportunities, we’re also going to look at some of the challenges and what could go wrong when deeply sensitive neural data is created, stored and shared. I’m Nicholas Napp, Chair of the IEEE Digital Privacy Framework and Foundation Subgroup and your host for this episode of the IEEE Digital Privacy Podcast. Let’s dive in.

Dr. Tejay, welcome. To begin with, how does decoding the human brain with AI work? Are we talking chips, implants, scanning, something else? How does it actually function?

Gurvirender Tejay: So, when most people hear brain decoding, they immediately imagine brain chips or implants in our brains.

But the most significant breakthroughs we are seeing right now are non-invasive, which basically means no surgery, no implants, no chips in our heads. These are neuroscience tools that simply scan the brain from outside, and they're able to pick up our thoughts. We are talking about tools like fMRI machines, functional magnetic resonance imaging machines, the classic scanners where we lie on a bed and are slid into a big machine. These tools record brain activity from outside the body. As we look at certain images or pictures put in front of us, our brain lights up, and the scanners capture that activity across different brain areas. AI models then take those recordings and learn patterns that map activity to particular images, emotions, or thoughts. In the past, we could only link a scan back to a single image, emotion, or thought. But now, with these AI models combined with brain imaging, we are able to decipher full sentences. So that's the scary part.

Nick Napp: Wow, that’s come a long way. Okay. So just for the sake of clarity, at the moment with today’s technology, we’re talking about a large machine to do this, like a functional MRI machine, not something that someone can casually walk past you with a smartphone-sized device and read your mind, right? We are not there yet.

Gurvirender Tejay: No, we're not there yet. Right now, just to decipher one full sentence, it takes at least four to eight hours, if not ten, of brain scanning using these heavy, immobile machines, with the subject viewing hundreds to thousands of video clips of ten to twenty seconds each. So we're not there yet. But if you look at the typical technology trajectory, it starts bulky and expensive, and soon enough these devices become easier to use, lightweight, and inexpensive as well. Twenty years ago, our desktop machines were very bulky; nowadays we have more power in the palm of our hands through our smartphones. So yes, they're bulky, we are not there yet, but the way technology progresses, it's soon gonna be upon us.

Nick Napp: Okay, so my paranoia is in check for now, but what are the main benefits of decoding human thoughts? Why is this information valuable?

Gurvirender Tejay: So, from a medical point of view it's phenomenal, especially for folks who have problems with language or communication. Think about folks who have paralysis. This technology can be extremely useful there; it allows them to articulate their thoughts, and it may be far faster than the slower assistive technologies they currently use to interact with machines. In that sense, it's pretty phenomenal. It could also enable medical breakthroughs in understanding certain diseases. And by understanding more about certain psychiatric conditions, medical researchers may be able to intervene a little earlier, before a disorder sets in. On the commercial side there could be some benefits as well, but currently I think the medical field is where it's most beneficial.

Nick Napp: So you touched on a number of different medical uses. I can see immediate and fairly scary applications in intelligence and law enforcement. What other fields could use this technology?

Gurvirender Tejay: You already touched on defense and law enforcement. That's the very lucrative and very scary application: the whole idea of extracting information directly from the brain. You can imagine what kind of applications could be developed there, and it can certainly draw a lot of investment. Beyond that, in the commercial world, think about marketing and advertising, where these brain-derived thoughts could be exploited. It could also help with personalization of certain applications, which is the positive piece, along with gaming. And one can think about applications in the education field as well. But I think the most attractive ones, and potentially the most dangerous ones, are going to come from how defense and law enforcement use this technology once it becomes really usable and more functional, once we no longer need these heavy, bulky machines. That's when those applications become critical.

Nick Napp: Once somebody’s private thoughts are decoded, and this is very much a literal billion dollar question, who owns that data? And what are the risks and the opportunities for misuse that we should really be looking out for?

Gurvirender Tejay: So that's one of the most important and unsettled questions. If we look across different countries, there's no clear legal framework, no clear definition, of who actually owns brain-derived data once it's generated by these devices. That's the major problem. Because as you know, once it's decoded, it becomes digital data, and the moment it becomes digital data we can play with it: we can store it, copy it, and analyze it. So the ownership of our neural data is really a touchy point. Now, what kind of risks or misuse should we be watching for?

First, commercial misuse: companies and corporations misusing our neural data for their own benefit, for monetization. Then government or law enforcement misuse, which we touched on a little bit; think about interrogation or surveillance scenarios, which raise serious human rights issues overall. And the challenge with brain data is that, unlike passwords, it can't be reset. We can't change our brain patterns or unthink what we thought, right? So we don't have much control there, and once it's captured, it can have serious consequences in interrogation or surveillance situations. But if you think about the biggest risk overall, it's losing agency over our thoughts. Our mind is the last private space we have; these are our personal, internal thoughts. What if we lose ownership, if we lose agency, over that? That's the biggest risk ultimately.

Nick Napp: Yeah, that makes sense. So tell us about your work with your various peers within IEEE to protect digital and cognitive privacy in this space.

Gurvirender Tejay: So through our IEEE Digital Privacy Initiative, I've been working with researchers and practitioners across the globe, and we have been working on putting guardrails in place to help protect us from these kinds of situations.

And one of our major efforts has been to establish the IEEE Digital Privacy Model. This model is pretty unique in that it approaches privacy risk from the user's perspective. The model has two layers: it starts with the expectations of the individual, and the second layer brings into account the different environmental forces in play. In the case of the brain data we're talking about, which is deeply behavioral, the expectations around inference, identity, and behavior are really unique. But when we look at the influences from the technology, policy, regulation, or societal point of view, we're not there yet, so we need certain guardrails. Through this initiative, my peers and colleagues and I have been working collectively to build them. I'm also chairing the global joint task force on a digital privacy graduate curriculum, and the intention behind that was to create a blueprint of the courses and knowledge areas we should be teaching to develop future privacy professionals who can actually design privacy systems to safeguard against the kinds of situations we have been talking about today.

Nick Napp: So, based on the acceleration you've seen in the field, because obviously AI tools are developing very, very rapidly, would you care to prognosticate about how far away we are from this becoming a mainstream technology? What do you think? Are we a couple of years away? Is it still decades? What's your best guess?

Gurvirender Tejay: So it depends on how you look at it. In terms of deciphering our thoughts as full sentences, it's already been accomplished. Now, can we bring people into a room and decipher their thoughts? In that particular scenario, we are a few years away, potentially three to four. And can we casually put on these devices as gadgets, not as medical devices, and decipher human thoughts? There are already headsets on the market that can detect and inform us about our emotional states and attention levels; that's already there. But for this technology to be ready as a wearable gadget that deciphers our thoughts, that might be a few additional years away. I'll put that at five to six years, tentatively and cautiously. But we all get surprised, and prediction is a very dangerous game when it comes to technology.

Absolutely, we can be surprised very quickly. So let's hope it's still a few years away, giving us a little more time to work on our guardrails and talk about these neural rights, so we have protections in place.

Nick Napp: So what advice would you give to everyday folks to just help them protect their mental privacy as we go down this road?

Gurvirender Tejay: So I would say start with general awareness. Ask some basic questions: what data is being collected, where is it going to be stored, who has access to it? I would always start with that basic awareness whenever we are dealing with any kind of neural gadget or device. Also limit what data you allow to be collected; do not consent to everything. Ask questions about the policies on what kind of data is going to be stored.

Be cautious with the neurotechnology itself. When you're about to allow any device to collect your neural data, ask for the policies. Are they transparent? Do you understand the privacy policy? If it's not explicit, do not engage with that particular neurotechnology. And again, let's approach these devices more as medical devices than as fun gadgets; that can help us be a little bit careful when we deal with these technologies. At the same time, pay attention to the business model. The business model is important. If we are getting one of these devices for free, the company is going to make money somewhere, and most likely it's based on your data; they're going to try to monetize it. So be careful with the business model, and pick a company, gadget, or device whose business model aligns with your privacy position. If you're comfortable with how they're doing business and it aligns with your privacy concerns, go for it. And if you do treat these gadgets as devices for fun, still be careful. Ultimately, it's our neural data. So awareness and transparency are the two big takeaways.

Nick Napp: Yep. Any concluding thoughts? I mean, there's a wealth of things to poke at in this. Obviously it's very much a classic Pandora's box in terms of great benefits and great potential dangers. But do you have any concluding thoughts you'd like to leave the audience with?

Gurvirender Tejay: So for the first time in human history, technology can reach into our minds and actually read our thoughts. AI combined with neurotechnology can decode them, and that is potentially very dangerous. It could also be advantageous, enabling a lot of medical breakthroughs, and it could be helpful that way. But at the same time, it's open to abuse. As individuals, as humans, we have always considered our mind, our brain, as our private space. If this space is not properly protected, it could be misused by corporations, governments, or employers. So if you look at it, there's a growing need for neural rights, and foundations, societies, and researchers across the world have started calling for them.

The concept of neuro-consent is also generating some support, so advocating for these legal and ethical protections becomes important. And at the end of the day, protecting mental privacy is not only about technology; it's also about our dignity and our autonomy as human beings. Ultimately, it's about safeguarding what we consider fundamental to being human. So it becomes critically important, not only for policymakers and technologists but for society as a whole, to be careful when it comes to neural data.

Nick Napp: Yeah, that makes a lot of sense. Well, Dr. Tejay, thank you for sharing your expertise and helping us understand what’s coming next in this interesting intersection of neural technology and AI. These questions about brain data and neural privacy really aren’t futuristic anymore. They’re here and they are very important.

You’ve been listening to the IEEE Digital Privacy Podcast. You can find more episodes and conversations like this at digitalprivacy.ieee.org.

From me and everyone on the program, thank you for listening. Stay safe, stay aware, and stay private.