VIDEO TRANSCRIPT

‘TRUTH’ with Robert F. Kennedy Jr. Featuring Dr. Rafael Yuste—Season 2 Episode 5

The following is a transcript of this video.

– Hi there.

– Hey Dr. Yuste, thank you so much for joining me.

– Hi, how should I call you, Bobby? Robert?

– Hey, you can call me Bobby.

– Bobby, you can call me Rafa, by the way, you don’t have to call me Dr. Yuste–

– Let me ask you–

– I come from Spain, and Rafa is just like the tennis player, my first name.

– And where are you today?

– I’m calling in from New York City. Upper West Side, right by Columbia University.

– And do you teach at Columbia?

– Yes, I am a professor of Neuroscience at Columbia. But most of the time I do research. I’m a researcher, I run a lab. We are neurobiologists; we investigate how the brain works. We use animals, mice. And we also study the brains of small creatures called Hydra, the first ones in evolution to have a nervous system. They’re cnidarians.

– And let’s talk about what we can do. We’re interested, and our listeners are very interested, in the issue of the capacity that the big data, big technology firms are going to have at some point to read our brains, to read our feelings. Those companies are putting huge amounts of money into these new subdermal chips and other methodologies. Really frightening stuff, actually: not only to read our brain, but ultimately perhaps even to control our brain, our feelings, our wants, our desires. Maybe we should start by talking about some of the experiments that you’ve done on mice, where you inject them with, I think, proteins

– Yeah.

– And allow you to actually create hallucinations that you can dictate to those mice.

– Yeah. Actually, let me start, Bobby, by telling you how I got into this business. I’m actually an MD; I was trained in Spain. And when I was rotating in the psychiatry wards, they put me to treat the worst patients, the paranoid schizophrenics, and those are behind bars. You have to interview them with a bodyguard because they’re actually dangerous. And at the same time, this type of schizophrenic has an incredible mind. They’re super smart, actually not unlike Sherlock Holmes. I bet you Sherlock Holmes is a classic paranoid schizophrenic. And I was so frustrated to see this–

– He was, by the way, a heroin or cocaine addict.

– Yeah.

– So you have these patients, they’re super smart, and there’s something wrong in their brains. And instead of using their intelligence to help society, they devastate society; they’re dangerous to themselves and the people around them. So at that point, I decided to go into basic research, and I’ve been doing that ever since. It’s been 33 years now and counting. And people like me are trying to figure out how the brain works so that we can help not just our paranoid schizophrenics, but patients that have neurological or mental diseases. And I’m talking about Alzheimer’s, Parkinson’s, epilepsy, mental retardation, schizophrenia, anxiety, depression, you name it. It’s the dark corner of medicine. We don’t help these patients because we don’t understand what goes on in the brain, so we cannot fix it. And because of that, people like myself, and the field that I represent, ended up doing experiments with mice to try to understand the basic phenomena that go on in the brain, in the cerebral cortex. And that’s coming back to your question. That’s why we put in these proteins, to make neurons light sensitive inside the cortex of mice. And we use lasers to do essentially two things: to read the activity of the neurons and to change it, to write activity into the neurons. Why do we do that? Because we’re trying to figure out what neurons are telling each other. Because if we can decode the message, the next day we can go to a schizophrenic and say, “You know what? Let me try fixing your problem.” And by doing this in mice, in normal mice actually, we found that we can implant a vision, an image of something that the mouse hasn’t seen, and the mouse will behave as if it had seen this image. In other words, we can implant, how you say, a hallucination, and that triggers a behavior in the mouse. So we can, in a way, push the mouse to one thing or the other by putting into its brain one pattern of activity or the other–

– Specifically, you taught the mouse, when it saw three bars, to be thirsty and to go to a certain place to drink water.

– Yep.

– And by manipulating these lasers, and by injecting proteins into the mouse, you could make that mouse think that it saw three bars, even if they weren’t there–

– Exactly.

– And trigger the drinking appetite?

– Yeah, exactly. And you know what we were after? A very simple question: what is a thought? Okay, what is a thought? Can you see a thought? When you have a thought, when you think of something, what happens in your brain? So, by showing the mice these three different visual stimuli, we generate a thought in the brain that it’s seen something. And then by activating the neurons, when we make the mouse lick as if it had seen this stimulus, we can make the argument that, hey, if these neurons are firing together in this pattern, this is a thought, or this is the perception. And let me go back to schizophrenia. It turns out–

– Just let me rephrase that.

– Yeah.

– The thought actually produces a unique fingerprint of electrical activity.

– Exactly.

– So if you can look into that mouse’s brain, or a human’s brain, perhaps when they see a beautiful woman, and you can record that fingerprint of electrical activity, then, if you can figure out a way to reproduce that same pattern of electrical activity when the neurons fire, you can make that human being think he saw a beautiful woman at subsequent times, even when there’s nobody there.

– Yep, this will happen, because we’re doing this with mice already. It’s a matter of time until this is done with humans. And you say, well, this is terrible. Well, it depends. Actually, there are mice that have the same mutation that some human schizophrenics have. And when we look at these schizophrenic mice, guess what, they have abnormal thought patterns. So with these patterns of neurons that we activate, we can tell by looking at them: this guy is schizophrenic and this guy’s normal. So there’s a grad student in my lab who’s now trying to go to a schizophrenic mouse and fix these thought patterns, with lasers playing back the neurons in a particular way, to see if we can remove its symptoms. So, I’m telling you this because this is a clear medical pipeline. But at the same time, as you’re pointing out, in humans you could use this type of technology, which we call neurotechnology. It’s very simple: it’s technology to read, or write, activity into the brain. And you can use this neurotechnology to fix a problem for a mental disease patient or reinforce the memories of an Alzheimer’s patient. Or you could use it to put an image of a pretty woman in the head of someone that you want to manipulate.

– Or maybe you could put it in, a sudden hunger for Captain Crunch cereal.

– Exactly.

– Could you do that? Can you do that too?

– Actually, we do the equivalent. We’re not that far off with the mice. And by the way, this work has now been replicated in different labs, including one at Stanford, in California. We make them lick sugar water, so it’s not exactly Captain Crunch, but it’s close to a Coke version for the mouse. Something like that.

– But you could, you may be able to give them an appetite for a specific product.

– Yeah, that’s right. And this brings me to the larger question: the brain happens to be the organ that generates the mind, the mental activity. And I’m talking about your perception, your thoughts, your memories, your imagination, your desires, your emotions; all of that comes out of the firing of these neurons inside your head. So if we go in and we can read those neurons, and activate them, in principle we can decipher the contents of your mind, and we can change it. And we can manipulate it. So, it’s serious stuff. This is not just another organ; this is not like we’re doing technology on the liver. No, no, we’re doing it on the brain, which generates our mental and cognitive abilities.

– You actually inject those proteins into the mouse?

– Exactly, we use a virus to put them in. And they transfect the neurons so that they express two types of proteins. One type we use to read: they’re sensors that light up when the neurons are active. And the other type we use to write: with a different laser, we can turn on these neurons. Actually, in a way, it’s like playing the piano with the brain.

– You put one virus in to read the thoughts and another virus in to write thoughts onto the–

– Exactly.

– And you trigger certain actions.

– Exactly.

– Do you look at any of these technologies that people like Bill Gates or Elon Musk are coming up with today? You know Elon Musk had that demonstration recently, where he had an external device connected to the internalized chip in the pig’s brain.

– Yep.

– And he was able to get the pig to do certain behaviors. Do you keep up with that stuff? And what are the things that most concern you?

– I not only keep up with that, I’m in the middle of that field. In fact, we’re hosting here at Columbia, although it’s going to be virtual, an international meeting on the heart of this technology, which is what you’re talking about, which we refer to as Brain-Computer Interfaces. Essentially, devices that connect the brain to a computer or to the outside. And we’re hosting a webinar here, actually, a meeting. It’s open to the public, it’s free, on November 17th, on this topic. Sorry, November 18th, I should say. And we have people coming from all these companies, from Facebook, from Microsoft; there are people coming from academia. We’re waiting to see if Neuralink also sends a representative. And, in answer to your question, let me step back and tell you there are two types of neurotechnology. Okay, so neurotechnology, we’re all clear what it means: you can read and write. And you can do that with optics, you can do that with electrons, you can do that with magnets. But at the end of the day, you’re just reading or writing. Now, you can do it in an invasive way, with something that you put inside your skull that requires neurosurgery. And this is the Brain-Computer Interfaces from Elon Musk and other people. And those, I’m not super worried about, because they’re a medical procedure. Which means that if you want to get a chip inserted, well, it’s not your decision. You’re going to have to go through a panel of doctors, and they’re going to say, “No, no, no, no, you don’t need it.” Or a neurosurgeon is going to look at you and say, “No, you don’t need it. We’re not going to do this.” It’s forbidden by medical ethics, deontology. But there’s a second type of neurotechnology, which we call non-invasive. And that one doesn’t require neurosurgery. That’s like a helmet, or a baseball-cap type of device. It’s a wearable. You remember Google Glass? That was the beginning of that.
And those I’m worried about, because they’re not medical procedures. They’re consumer electronics. And they’re essentially not regulated. So you could buy one of these things, put it on your head, and start piping information from your brain out or in. Now, that technology is in the future. It’s not here yet. Well, let me put it this way: people are starting to sell wearable neurotechnology, but it’s not very potent yet, so there’s not too much to worry about. But what’s in the pipeline is something that we should be very alert to. So, let me give you one example. Facebook has a project that they call Thought to Text. It’s a wearable neurotechnology, like a helmet, that will read out the activity of your cortex and decode the word that you want to type on the screen. Okay? So instead of using your fingers, you can just think of something, I want to type the word “Bobby,” boom, and it gets written directly. Now, I am a poor typist. So, for someone like me, this is fantastic. Like, oh, finally, I don’t have to type. I never learned how to touch type. But imagine that it’s not just the word that you want to write that gets piped out, but maybe some other private thoughts. So this is an example of wearable neurotechnology which, in our minds, should be regulated. And we’re actually talking to Facebook, we’re talking to the same team that is building this technology. They’re going to be participating in our meeting. I know them personally. And I can tell you, Bobby, these are not bad people. They’re really concerned and worried about these issues. They’re not the bad guys in the movies. They have ethics. They want to push humanity forward with technology. But technology has always gotten ahead of society. It’s always been the case. And we need to think carefully, before we embark on one of these things: what are the rails on the road, so that the cars don’t go off the road?
What are the rules of the game?

– Yeah. And I think that’s generally true for every technology in history that has been misused by powerful people or totalitarian states, even as the scientists who designed it genuinely believed, or at least told themselves, that they were doing something that could bring great benefits to humanity. There is this documentary out right now called “The Social Dilemma.”

– Yes, yes, I saw it, yeah.

– Yeah, and to me it’s a fantastic documentary, very, very scary, because you have the top people from Facebook, and Pinterest, and all of the other social media groups that have designed these algorithms that are self-teaching. And the purpose of those algorithms was essentially to make the screen addictive to people.

– Yeah.

– And make people stay on that screen as long as possible. And the algorithms are instructed to do that.

– Yeah.

– But the technicians who designed them have completely lost touch with, lost control of, the algorithms. And they don’t even really understand how they work anymore. But they do understand that human beings will stay on that screen longer if you tell them things that they wanna know. In other words, if you tell them things that reinforce their worldviews, that reinforce their prejudices and their bigotries and the way that they see the world; if you tell them that they’re right, that they’re correct in the way that they perceive the world. If you’re a Democrat, you will end up only seeing information from the social media that reinforces your worldview. And if you’re a Republican, even if you live next door to each other, you’re seeing an entirely different litany and panoply of information. There’s no neutral information that’s seen by both people. And so the information you see tends to polarize the population, push us all further and further apart. These were people who, without exception, invented that technology because they thought it was gonna bring the world together. And the great thing about that film was showing how horrified they were by the way it’s been misused, and how it’s gotten out of their control. And we all know that those kinds of powers that you’re talking about right now will be abused if they’re created, unless we have extraordinary safeguards on them. Every technology ultimately is used for human beings to control each other, and for totalitarian forces to control all of us. I know this is something you’re worried about.

– Exactly. In fact, I’m glad you brought up “The Social Dilemma.” There’s this famous paradox, they call it a paradox: when a new technology is invented, you don’t really know what it can do, but it’s very easy to regulate. But then, if you let it run for a while, you know perfectly well what you can do and what you cannot do, but it’s too late to regulate it, because it’s become entrenched. And this has happened with social media.

– What do you call it? What is the name of it?

– Yeah, yeah. So, I’m a neurotechnologist, one of these people on the front line of medical research, inventing methods to try to help all these patients. But I’m also among the first to see what the future is bringing. And I, along with the group of people that I represent, was among the first to alert our colleagues and society that we need to regulate this technology. And our position is that we’re dealing with an issue of human rights. Okay, so why do I say that? Well, this is not just any other good old technology. This is a technology that affects the human brain. And because of what I was saying earlier, that the human brain generates the human mind, it’s going to touch the essence of who we are. And that’s human rights. So, three years ago, we got together here at Columbia, actually in a building next to the Physics Department, which is a National Monument, because in the basement of that department they built the first atomic reactor in the world, and the Manhattan Project got started. The atomic bomb. That’s why they call it the Manhattan Project, because it started here in Manhattan. But what you should know is that the same physicists who built the first atomic bomb and changed the course of history were the first ones in line advocating for regulation of atomic energy. And partly due to their lobbying, President Eisenhower, in his famous speech at the UN, instituted the creation of the Atomic Energy Commission, which so far has kept things relatively under control in terms of atomic bombs. But, well, that’s another discussion. Let me tell you that here we were, 25 of us, coming from different countries, representing neurotechnologists, representing also people from the AI industry, people coming from all the brain initiatives of different governments, looking at this Physics Department where the Manhattan Project got started. So we were, in a way, poised to propose these ethical guidelines for neurotechnology.
And we wrote a paper that we published in the journal Nature, which, in science, is the most widely read journal, proposing a human rights solution. We call it NeuroRights, the rights of the brain, or neural rights. And we argued that the Universal Declaration of Human Rights, which was written in 1948 and hasn’t been touched since, needs to be updated. Just like the 1948 Declaration protects the body and the life of people, we now have to protect the minds of people. We need to add these new human rights. And I’m very glad to talk to you today, because tomorrow the Senate of Chile, of the Republic of Chile, is going to introduce an amendment to their constitution for neuro-rights protection, and a bill of law that specifies that Chilean citizens will be protected from abuses of neurotechnology. So, we’re gonna have at least one country in the world where this has caught on, and hopefully sooner rather than later we’ll do it in other countries, and maybe in the US as well.

– You talked about that technology that they already have, where you can put some kind of sensors on your head externally, like earphones, and then you can write text messages just by thinking about it. How far away are we from when the police will be able to put something on your head and read your thoughts? Will that ever happen?

– It could happen, but not if we are successful. So let me tell you, technically, I would say we’re still maybe a few years away from that. Well, okay, today you can start to decode basic images that you may conjure in your mind, if they scan you with an fMRI machine and train an algorithm to decode these patterns. So they can make a good case as to what image you’re thinking of, if you’re thinking of an image. That can be done today; that’s relatively primitive brain decoding. But to be able to write a word, or, as you’re saying, a police interrogation by looking through your brain, I don’t know, unless my colleagues at Facebook tell me otherwise, I think we’re still a few years away with non-invasive technology. Sorry, I’m talking about non-invasive. With invasive technology, there was actually a study earlier this year at UCSF in which, with electrodes inside the motor area of a patient, they were able to decode 95% of speech, by just looking at the–

– Oh my God. That’s really scary.

– Well, but wait, don’t be scared. I mean, you have to realize, first of all, the people that are doing this, we’re not crazy lunatics. We are MDs, PhDs; we’re trying to help people, we’re trying to figure out how the brain works. And this is gonna be of huge benefit to humanity. Just imagine that you can understand your mind, your brain, from the inside, scientifically. It’s gonna be the first time that this happens. Imagine all the tragedy of human experience: the pain, the meaning of pain, the violence. I mean, you know, in 2020 humans still kill each other for no good reason in all these wars. And these wars are generated by the brain. So, if we could understand how the brain generates human behavior, we may have a chance at solving some of these problems that we’ve carried since the Paleolithic, all of our history. Not to mention the patients, the patients that urgently demand that we help them today. So, we have the urgent mandate to try to do this. And then on top of that, imagine also the economic benefits if we can reinvent computer technology using smarter algorithms based on how the brain works. So, all of these are reasons to be positive. Now, I completely agree, and I’m telling you, I’m the first guy in line to prevent abuses of the technology. Because every technology, all the way from fire to today, you can use for good or for bad. Just imagine the first person that invented fire said, “Oh, yeah, we can warm ourselves up in our cave during the winter. Or I can use this fire and burn the cave of the neighbor.” So, I mean, you could use it one way or the other.

– Let me ask you, one other question about this present capacity. There are pain centers in your brain, correct?

– Yes.

– Could you, could a totalitarian state, hook up an electrode, an external electrode, and stimulate those pain centers so that you would experience indescribable agony? Maybe worse–

– Actually.

– Physical pain, any kind of, the worst kind of torture in the world.

– Bobby, I’m going to tell you, something personal here. There’s a syndrome called Complex Regional Pain Syndrome.

– Yeah.

– Which is pretty much like that. It turns out, I suffered it. When we were in medical school, we had a sort of game: to pick the worst disease that you could ever get. And it was this one. Back then they used to call it Sudeck syndrome. And this is so bad that this is one of the few cases where the patients kill themselves. It leads to suicide, because the pain is 10 out of 10. I can tell you, I’ve suffered it personally. I love playing soccer. I had a terrible fracture playing soccer with my nieces and nephews. It was a good game, but my arm was shattered. I had surgery; it didn’t help me. And I got hit with this Complex Regional Pain Syndrome. Just imagine my face when they told me the diagnosis, when this is the same disease that back in medical school we voted the worst thing you could ever get. So, I went to one of the best pain specialists in Manhattan. And he told me, “Okay, you’re an MD, PhD. I’m going to just tell you straight.” Etiology, that is, the origin of the disease: unknown. Pathophysiology, that’s how the disease generates the symptoms: unknown. Prognosis: unclear. Treatment: nothing works. So he said, “I’m going to give you everything we have, in case something sticks.” He gave me nine courses of medicines at the maximum doses, all together. And this is opioids, benzodiazepines, antidepressants, steroids, you name it, everything. And I was in agony, day and night. I had to medicate myself to sleep; I couldn’t fall asleep from the pain. It was 10 out of 10. Just to give you a feeling for what it was: it’s as if your hands and your arms and legs are put in boiling water all the time. And even though it started in one arm, it spread to all limbs. Anyway, it was the most miserable time in my life. I mean, just imagine how it was. And I started exercising. And I also started meditating.
I joined a mindfulness-based stress reduction curriculum, which was started at the University of Massachusetts pain clinic by Jon Kabat-Zinn to treat people with chronic pain. And after the eighth week of meditation, maybe also because of the exercise, the pain started to go away. I ended up with only two courses of medicines daily, and six months later, I was fine. So, I can tell you that this is the kind of reason why we need new technology: so that we can go in, figure out what the pain is, cure these people, and prevent them from killing themselves. And these are thoughts that I had during this time. And I have to confess this to you, you think about that too.

– Could you induce that experience of pain? If you wanted to, or could a police agency, induce an experience of agonizing, unbearable pain? Simply by stimulating–

– Not yet

– A part of the brain.

– So, we’re still ignorant of how the brain works. And this ignorance is preventing us from doing what you say. So we don’t know enough to be able to do good or to do harm; it works both ways. But it will happen. I guarantee this will happen. Listen, this is just like any other part of the body. We’re going to understand it using science and medicine; we’ll get there. And when we get there, we have to make sure that, again, the abuses of this knowledge and this technology don’t happen. And I am actually a positive kind of guy; I think we can do it together. I told you what we’re doing in terms of new human rights. There’s another initiative that we’re proposing, something that we call the Technocratic Oath. I’m going to run it by you to see what you think, okay? So, you know, doctors, we all swear the Hippocratic Oath when we finish med school. And we take it upon our conscience, a personal pledge, to use our knowledge for the beneficence of the patients. So, that’s to do good. Under the Principle of Justice, so that we treat all patients the same; you don’t treat one patient well and the other you don’t treat. And also respecting the dignity of the patient, so that you treat people not as a commodity but actually as human beings. So, the idea is to import this oath into the tech industry, for neurotechnology and AI. And we would call this the Technocratic Oath. So just imagine that you’re an engineer at Google, and you swear the Technocratic Oath: that you’re going to use your knowledge for the beneficence of people, under the Universal Principle of Justice, and treating people with dignity. And this will not be something coming–

– Let me tell you one of the problems I see with that Dr. Yuste.

– Yeah.

– Every physician swears a Hippocratic Oath, and that Hippocratic Oath obligates them to treat that patient. Not to treat all of society. Not to do something for that patient to benefit his neighbor, but to treat that patient as an individual; that physician’s relationship is with that individual. But today, in the last two years, much of our obligation to the individual has been discarded, and physicians have now become the agents of the state, through mandatory vaccination policies. You are treating this patient even if the patient does not want the vaccine, even if he has not given informed consent. You’re gonna give him that vaccine to benefit somebody else, to create herd immunity, or to do things for the greater society. And the problem, it’s a philosophical problem that has very real-world implications, is that once that physician’s job becomes preserving all of society, that patient’s needs quickly become irrelevant. For example, if you’re treating an older patient who has a lot of medical problems and is on Medicare, it may now be a better thing for society simply to let that patient die. Because you’re no longer treating that patient; you’re acting as an agent for the state and as an agent for society. And Hippocrates, 2,300 years ago, understood that. The job of the physician was not to save society from disease, but to treat that individual patient. And I can see very easily the same kind of slippery slope in the oath that you’re proposing, which is–

– Yeah.

– That suddenly it becomes the physician’s duty, or the technocrat’s duty, or the technician’s duty, to preserve all of society, protect them from crime, protect them from terrorism, protect them from dissent. And we see how easily the pediatricians have gone along with this scheme, and that the oath really didn’t bind them at all.

– Well, this is a difficult question, Bobby, I think. I mean, you definitely have a point that there are complicated ethical decisions that have to be made, balancing the good of the individual versus the good of society. But by the way, this is not just in medicine; this is general. This is a problem from Ethics 101. This is the famous runaway rail car thought experiment. You’ve heard about that. Okay, so this is not specific to medicine, but this is–

– Dr. Yuste, just let me explain to the people the rail car experiment.

– All right.

– The rail car experiment is: a train is coming, and you are holding the switch. Let’s say that train is coming and it’s gonna kill three people for sure. You can switch it to another track, but on that track it will kill one individual. Do you pull it? Do you have a right to pull it and make a decision about who’s gonna die? And generally, what the philosophers will say is, “Yeah, you have an obligation to pull it and save those people. But you do not allow the government to make those decisions, because the government will abuse it.” And that’s really kind of the answer. The accepted answer to the railroad dilemma is that you don’t allow the government to hold that switch and decide who will live and who will die.

– So, going back to your point, to the question. I would argue that the world’s not perfect. And here we are, people like you and me, trying to improve it, to fix the problems that we see and make it even better. Okay. And it’s our duty, with the education and the political or scientific background that we have, to put that to good use for the benefit of society. But this has to be done respecting the individual. And I think the Hippocratic Oath, or this Technocratic Oath that we’re arguing for, has the individual at its core. So this doesn’t address the issue of what the whole society should do, except for the Principle of Justice. And embedded in the Principle of Justice is the idea that you cannot leave people behind. You have to treat everyone under the same assumption of fairness, because people don’t choose who they are. This is the famous argument of the veil of ignorance: when you’re born, you don’t choose where you’re going to be born, Black or white, man or woman, smart or dumb, blond or bald. And because of that, the rules of society should be rules that are universal, applied justly, so that you would set up the rules of the society without knowing what role you’re going to get. And I would encourage you, perhaps, to think of that when we talk about neurotechnology, or this Hippocratic Oath or Technocratic Oath. So, I’m not saying that medicine is perfect. But what I would tell you is that if you look back 2,000 years, medicine is a profession where every single doctor going back to Hippocrates has made this personal pledge, between you and your conscience. And you never forget that; the rest of your life you know what you did. That makes medicine a humanistic profession. And people say, well, doctors just want to make money. Well, listen, I think in general, most doctors want to help people.
It’s a humanistic profession. And this is something that’s missing from the tech industry. And that’s why, maybe, by bringing in this Technocratic Oath, we could move the needle a little bit in the tech industry, and say, you know what, you guys also have the duty of helping mankind, helping people, just like doctors. Use your knowledge to do good. Treat people justly, and treat people with dignity. And I think the dignity issue could cover some of the problems that you’re raising.

– Dr. Yuste, thank you very much. Can you tell the audience, ’cause I think we have a lot of people who are interested in what you’re doing, about this conference that is coming up? The public is invited, right, with Facebook, and Google, and Microsoft there.

– Yes.

– We’re all talking about how they’re gonna control our minds in the future.

– Yes, I got the date wrong. Okay, it’s November 19th. This is the Thursday a week before Thanksgiving. It’s organized by the Neuro Technology Center at Columbia University. The conference is called Brain-Computer Interfaces. And if you just put into the search engine, Columbia University, Brain-Computer Interfaces, or Neuro Technology Center, the link should come up. And you can register for free and attend the conference via Zoom, or you can also watch it on YouTube; it will be broadcast live on YouTube. Either way, it’s a whole-day event. And we’re going to review the heart of this neurotechnology that we’ve been discussing today. The best people in the world building these devices to both read and write, both invasive and non-invasive; these are engineers and scientists. And then we’re going to switch to people that are experts in data privacy, like what algorithms you use to take data out; these are people coming from the banking industry that use blockchain, et cetera. And then we’re going to end up talking about ethics and societal issues. So, that’s gonna be the end of the conference. And you know what I’m going to say: I’m going to be pushing for human rights and the Technocratic Oath.

– That’s pretty good. Dr. Rafael Yuste, from Columbia University. Thank you for joining us today. And thank you for your generosity. Keep fighting for dignity and human rights.

– Thank you for inviting me, Bobby. It’s a pleasure to talk to you too.

– You too.
