The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability for tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.


Listen on Spotify | Listen on Apple Podcasts | Subscribe via RSS

(You can also find this episode on the Internet Archive and on YouTube.) 

Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established The Neurorights Foundation, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 

In this episode you’ll learn about:

  • How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation
  • Why neurotechnology regulation needs to be grounded in international human rights
  • Navigating the complex differences between medical and consumer privacy laws
  • The risk that information collected by devices now on the market could be decoded into actual words within just a few years
  • Balancing beneficial innovation with the protection of people’s mental privacy 

Rafael Yuste is a professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, and director of the NeuroTechnology Center at Columbia University. He led the group of researchers that first proposed the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative launched in 2013 by the Obama Administration. 

Jared Genser is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and he is outside general counsel to The Neurorights Foundation, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology.  

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

RAFAEL YUSTE: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And the heart of what makes us human is our mind. So this is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate, the essence of our humanity.
10 years ago we had a breakthrough with studying the mouse’s visual cortex in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse. To make the mouse see things that it was not looking at.
Essentially we introduced, into the brain of the mouse, images. Like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into its brain by activating groups of neurons.
So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?

CINDY COHN: That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?
And today we have two guests who are at the forefront of brain science -- and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.

JASON KELLEY: Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN Initiative launched by the Obama administration, which was a large-scale research project akin to the Human Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Center at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.

CINDY COHN: But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.

JASON KELLEY: We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no less than five Nobel Peace Prize Laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.

CINDY COHN: We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.

RAFAEL YUSTE: I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.
So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award, the Eliasson Global Leadership Prize, from a Swedish foundation, the Tällberg Foundation. In my case, because of the work I did on the BRAIN Initiative, and Jared got this award for his human rights work.
And, you know, this is one good thing about getting an award, or let me put it differently, at least getting an award led to something positive in this case: someone on the award committee said, wait a minute, you guys should be talking to each other, and they put us in touch. He was like a matchmaker.

CINDY COHN: I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.

JARED GENSER: Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.
I definitely had no idea how quickly emerging neurotechnologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.
And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think that this is what's made us such a strong team, is that our experiences and our knowledge and expertise are highly complementary.
Um, you know, Rafa and his colleagues, uh, at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.
And there were five areas of concern that they had identified, which include mental privacy, mental agency, mental identity, concerns about discrimination in the development and application of neurotechnologies, and fair use of mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.
Um, but to other extents might need to be further interpreted from existing international treaties. And it was quite clear that when one thinks about emerging neurotechnologies and what they might be able to do, a whole dramatic amount of work needs to be done before these things proliferate in such an extraordinary sense around the world.

JASON KELLEY: So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?

RAFAEL YUSTE: So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. And his name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world at decoding brain activity from human patients. So he had been working with a patient who was paralyzed because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the outside world. She was in a wheelchair, and they implanted a few electrodes, an electrode array, into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI.
And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.
And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed, and Eddie and his team built an avatar of the person in the computer with her face and gave that avatar her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.
So Eddie called me up and explained to me what they'd done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her essentially.
It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. It was such an impressive result. And this was implantable neurotechnology, so it requires a neurosurgeon to go in and put in these electrodes. So of course, in a hospital setting, this is all under control and super regulated.
But since then, there's been fast development, partly spurred by all these investments into neurotechnology, uh, private and public, all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.
And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on. You can actually buy these things on Amazon. They coupled it to a generative AI algorithm, again like Eddie Chang – in fact, I think they were inspired by Eddie Chang's work – and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside. But from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks, no? And they essentially decode – they don't decode absolutely every word that the person is thinking, but enough words that the message comes out loud and clear. So the decoding of inner speech is doable with non-invasive technology. And it's not only that study from Australia; since then, you know, all these teams in the world, uh, we help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy, and then another study in China. Anyway, it is now becoming very common practice to use generative AI to decode speech.
And then on the stimulation side there is also something that raises a lot of concerns ethically. In 2022 a lab at Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers who were older in age. This was the control group for a study on Alzheimer's patients. And they reported, in a very good paper, that they could increase both short-term and long-term memory by 30%.
So this is the first serious case that I know of where, again, this is not science fiction; this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic stimulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine.
This is what we're actually focusing on our foundation right now, this issue of mental augmentation because we don't think it's science fiction. We think it's coming.

JARED GENSER: Let me just kind of amplify what Rafa's saying and to kind of make this as tangible as possible for your listeners, which is that, as Rafa was already alluding to, when you're talking about, of course, implantable devices, you know, they have to be licensed by the Food and Drug Administration. They're implanted through neurosurgery in the medical context. All the data that's being gathered is covered by, you know, HIPAA and other state health data laws. But there are already available on the market today 30 different kinds of wearable neurotechnology devices that you can buy today and use.
As one example, you know, there's the company, Muse, that has a meditation device and you can buy their device. You put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app. And then basically you'll get back from the company, you know, decoding of your brain activity to know when you're in a meditative state or not.
The problem is that these are EEG scanning devices that, if they were used in a medical context, would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1%.
And from the data that's being gathered, uh, you know, EEG scanning device data in wearable form, you could identify if a person has any of a number of different brain diseases, and you could also decode about a dozen different mental states. Are you happy, are you sad? And so forth.
And so at our foundation, the Neurorights Foundation, we actually did a very important study on this topic that was covered on the front page of the New York Times. We looked at the user agreements and the privacy agreements for the 30 different companies' products that you can buy today, right now. And what we found was that in 29 out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can do with it as they see fit, and they can transfer it, sell it, etc.
Only in one case did a company, ironically called Unicorn, actually keep the data on your local device, and it was never transferred to the company in question. And we benchmarked those agreements across a half dozen different global privacy standards and found that there were just, you know, gigantic gaps there.
So, you know, why is that a problem? Well take the Muse device I just mentioned, they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text.
Today thought-to-text, you know, with EEG can only be done at a relatively slow speed, like 10 or 15 words a minute, with like maybe 40, 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought-to-text at 80 words a minute, 95% accuracy.
And so the problem is that in three, four years, let's say when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and then actually decode what the person was thinking in the form of words when they were actually meditating.
And to help you understand as a last point, why is this, again, science and not science fiction? You know, Apple is already clearly aware of the potential here, and two years ago, they actually filed a patent application for their next generation AirPod device that is going to have built-in EEG scanners in each ear, right?
And they sell a hundred million pairs of AirPods every single year, right? And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera.
But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being, you know, absorbed, how is it going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. And we're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law. And we're also working, you know, at the national and sub-national level.
Rafa, my colleague, you know, led the charge in Chile to help create a first-ever constitutional amendment that protects mental privacy. We've been working with a number of states in the United States now; uh, California, Colorado and Montana – very different kinds of states – have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast developing technology and the enormous gaps between these consumer product devices and their user agreements and what is considered to be best practice in terms of data privacy protection.

CINDY COHN: Yeah, I mean I saw that study that you did and it's just, you know, it mirrors a lot of what we see in other contexts where we've got click-wrap licenses and other, you know, kind of very flimsy one-sided agreements that people allegedly agree to, but I don't think, under any lawyer's understanding of, like, a meeting of the minds and a contract that you negotiate, that it's anything like that.
And then when you add it to this context, I think it puts these problems on steroids in many ways and makes 'em really worse. And one of the things I've been thinking about is, you know, you guys have, in some ways, one of the scenarios that demonstrates how our refusal to take privacy seriously on the consumer side and on the law enforcement side is gonna have really, really dire, much more dire, consequences for people than we've even seen so far. And it really requires serious thinking about, like, what do we mean in terms of protecting people's privacy and identity and self-determination?

JARED GENSER: Let me just interject on that one narrow point because I was literally just on a panel discussion remotely at the UN Crime Congress last week that was hosted by the UN Office on Drugs and Crime, UNODC, and Interpol, the International Police Organization. And it was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. And so this is coming. They just launched a project jointly to look at potential uses as well as to develop, um, guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.

CINDY COHN: And much of the funding for this has come out of the Department of Defense, so thinking about how we put the right guardrails in place is really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things, like, you know, that's not the history, right? Law enforcement comes for these things both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just a foray for kind of skeezy companies to do things we don't like.

JARED GENSER: Absolutely.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Rafael Yuste and Jared Genser.

CINDY COHN: This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.

JARED GENSER: No, I think it's actually a very, very important question. I mean, I think that the bottom line is that there are a lot of ways to look at, um, questions like this. You know, you can think about, you know, a national constitution or national laws. You can think about international treaties or laws.
You can look at ethical frameworks or self governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, you know, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens. And obviously binding on governments and what they do, binding on companies and what they do and so forth.
And so it's not that we think, for example, ethical frameworks or ethical standards or self-governance by companies are not important. They are very much a part of an overall approach, but our approach at the Neurorights Foundation is, let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreements on. And when a country signs and ratifies a treaty, as the US has on the key relevant treaty here, which is the International Covenant on Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them then enforceable. And so we think first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law.
Because the United States is obligated and other countries in the world are obligated to protect their citizens from abuses of these rights.
And at the same time, of course that isn't sufficient on its own. We also need to see, in certain contexts, amendments to constitutions – probably not in the US context, where that's much harder to do – but also laws that are actually enforceable against companies.
And this is why our work in California, Montana and Colorado is so important, because now companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, right? They now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right?
And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data they are gathering, how they are gonna use it, if they propose selling or transferring it, to whom, and so forth, right?
So these are now ultimately gonna be binding law on companies, you know, based in California and, as we're developing this, around the world. But to us, you know, that is really what needs to happen.

JASON KELLEY: Your success has been pretty stunning. I mean, even though you're, you know, there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level and internationally sometimes, and it's hard.
But the two of you together, I think there's something really fascinating about the way, you know, you're building a better future and building in protections for that better future at the same time.
And, like, you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field about, you know, you can make incredible things and also make sure they don't cause huge problems. Right? And that's just a really important lesson.
What we do with this podcast is we do try to think about what the better future that people are building looks like, what it should look like. And the two of you are, you know, thinking about that in a way that I think a lot of our guests aren't because you're at the forefront of a lot of this technology. But I'd love to hear what Rafa and then Jared, you each think, uh, science and the law look like if you get it right, if things go the way you hope they do, what, what does the technology look like? What did the protections look like? Rafa, could you start.

RAFAEL YUSTE: Yeah, I would comment, there's five places in the world today where there's, uh, hard law protection for brain activity and brain data: in the Republic of Chile, the state of Rio Grande do Sul in Brazil, and the states of Colorado, California, and Montana in the US. And in every one of these places there's been votes in the legislature, and they're all bicameral legislatures, so there've been 10 votes, and every single one of those votes has been unanimous.
All political parties in Chile, in Brazil - actually in Brazil there were 16 political parties, and it had never happened before that they all agreed on something. California, Montana, and Colorado, all unanimous except for one no vote in Colorado, from a person who votes against everything. He's like, uh, he goes, he has some, some axe to grind with, uh, his companions and he just votes no on everything.
But aside from this person – uh, actually, the Colorado, um, bill was introduced by a Democratic representative, but, uh, the Republican side, um, took it to heart. The Republican senator said that this is the definition of a no-brainer, and he asked for permission to introduce that bill in the Senate in Colorado.
So the person who defended the bill in the Senate in Colorado was actually not a Democrat but a Republican. So why is that? As this Colorado senator said, it is a no-brainer. This is an issue where, I mean, the minute you get it, you understand: do you want your brain activity to be decoded without your consent? Well, this is not a good idea.
So not a single person that we've met has opposed this issue. So I think Jared and I do the best job we can and we work very hard. And I should tell you that we're doing this pro bono, without being compensated for our work. But the reason behind the success is really the issue, it's not just us. I think that we're dealing with an issue on which there is fundamental, widespread, universal agreement.

JARED GENSER: What I would say is that, you know, on the one hand, and we appreciate of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.
We need to further interpret international law in the way that I'm describing to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world. Subnational laws in various places too, and so forth.
I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law because we've never been able to get Republicans and Democrats to agree on the text of one.
The only kinds of consumer data protected at the federal level are healthcare data under HIPAA and financial data. And there have been multiple efforts to try to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart because they were trying to put together about a dozen different categories of data that would be protected at the federal level. And each one of those has a whole industry association associated with it.
And we were able to get that draft bill amended to include neural data in it, which it didn't originally include, but ultimately the bill died before even coming to a vote at committees. In our view, you know, that then just leaves state consumer data privacy laws. There are about 35 states now that have state level laws. 15 states actually still don't.
And so we are working state by state. Ultimately, I think that when it comes, especially to the sensitivity of neural data, right? You know, we need a federal law that's going to protect neural data. But because it's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, or in general, you know, one way of course to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences to them, right?
And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the, you know, the Trump administration, hey, we need a common way forward here and let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine, but ultimately, I think that there's a huge amount of work still left to be done, obviously all around the world and at the state level as well.

CINDY COHN: I wanna push you a little bit. So what does it look like if we get it right? What is, what is, you know, what does my world look like? Do I, do I get the cool earbuds or do I not?

JARED GENSER: Yeah, I mean, look, I think the bottom line is that, you know, the world that we want to see, and I mean Rafa of course is the technologist, and I'm the human rights guy. But the world that we wanna see is one in which, you know, we promote innovation while simultaneously, you know, protecting people from abuses of their human rights and ensure that neuro technologies are developed in an ethical manner, right?
I mean, so we do need self-regulation by industry. You know, we do need national and international laws. But at the same time, you know, one in three people in their lifetimes will have a neurological disease, right?
The brain diseases that people know best or you know, from family, friends or their own experience, you know, whether you look at Alzheimer's or Parkinson's, I mean, these are devastating, debilitating and all, today, you know, irreversible conditions. I mean, all you can do with any brain disease today at best is to slow its progression. You can't stop its progression and you can't reverse it.
And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like – think about all of the different ways in which humanity is going to be improved when we're able to not only address, but cure, diseases of this kind, right?
And, you know, one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, right? And our own brain and how it operates and functions. And that is, you know, very, very exciting.
Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that of course, you know, raises enormous questions. And this technology is also gonna, um, also even raise fundamental questions about, you know, what does it actually mean to be human? And who are we as humans, right?
And so, for example, one of the side effects of deep brain stimulation in a very, very, very small percentage of patients is a change in personality. In other words, you know, if you put a device in someone's, you know, mind to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen.
And there's a well known case of a woman, um, who went from being, in essence, an extreme introvert to an extreme extrovert, you know, with deep brain stimulation as a side effect. And she's currently being studied right now, um, along with other examples of these kinds of personality changes.
And if we can figure out in the human brain, for example, what parts of the brain deal with being an introvert or an extrovert, you know, you're also raising fundamental questions about the, the possibility of being able to change your personality, in part, with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses, through psychotherapy, and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.
So the upside is enormous, you know. Medically and scientifically, economically, from a self-understanding point of view. Right? And at the same time, the downside risks are profound. It's not just decoding our thoughts. I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts, you know, in criminal justice contexts, right?
So there are so many different implications of these emerging technologies, and we are often so far behind, on the regulatory side, the actual scientific developments that in this particular case we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.

JASON KELLEY: I'm fascinated to see, in talking to them, how successful they've been when there isn't a big, you know, lobbying wing of neurotech products and companies stopping them, because they're ahead of the game. I think that's the thing that really struck me and, and something that we can hopefully learn from in the future: that if you're ahead of the curve, you can implement these privacy protections much more easily, obviously. That was really fascinating. And of course just talking to them about the technology set my mind spinning.

CINDY COHN: Yeah, in both directions, right? Both what an amazing opportunity and oh my God, how terrifying this is, both at the same time. I thought it was interesting because I think from where we sit as people who are trying to figure out how to bring privacy into some already baked technologies and business models and we see how hard that is, you know, but they feel like they're a little behind the curve, right? They feel like there's so much more to do. So, you know, I hope that we were able to kind of both inspire them and support them in this, because I think to us, they look ahead of the curve and I think to them, they feel a little either behind or over, you know, not overwhelmed, but see the mountain in front of them.

JASON KELLEY: A thing that really stands out to me is when Rafa was talking about the popularity of these protections, you know, and, and who on all sides of the aisle are voting in favor of these protections, it's heartwarming, right? It's inspiring that if you can get people to understand the sort of real danger of lack of privacy protections in one field. It makes me feel like we can still get people, you know, we can still win privacy protections in the rest of the fields.
Like you're worried for good reason about what's going on in your head and that, how that should be protected. But when you type on a computer, you know, that's just the stuff in your head going straight onto the web. Right? We've talked about how like the phone or your search history are basically part of the contents of your mind. And those things need privacy protections too. And hopefully we can, you know, use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.

CINDY COHN: Yeah. And you see kind of both kinds of issues, right? Like, if they're right, it's scary. When they're wrong it's scary. But also, what I really appreciated about them is that they're excited about the potentialities too. This isn't an effort that's about the house of no innovation. In fact, this is where responsibility ought to come from. The people who are developing the technology are recognizing the harms and then partnering with people who have expertise in kind of the law and policy and regulatory side of things. So that together, you know, they're kind of a dream team of how you do this responsibly.
And that's really inspiring to me because I think sometimes people get caught in this, um, weird, you know, choice: either the tech will protect us or the law will protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.