Neuroscience and AI: What artificial intelligence teaches us about the brain (and vice versa)

This week, we talk with Surya Ganguli about the neuroscience of AI, and how advances in artificial intelligence could teach us about our own brains.
Nicholas Weiler
From Our Neurons to Yours Wu Tsai Neuro Podcast

The powerful new generation of AI tools that has emerged over the past few years — DALL-E, ChatGPT, Claude, Gemini, and the rest — has blown away our old ideas about what AI can do and raised questions about what it means for computers to start acting... intelligent?

This week, we ask what the rise of these systems might teach us about our own biological intelligence — and vice versa. What does modern neuroscience have to say about how AI could become as flexible, efficient, and resilient as the human brain?

Few people are better positioned to speak to the intersection of neuroscience and AI than today's guest: Surya Ganguli.

Ganguli's lab produced some of the first diffusion models — which are at the foundation of today's AI revolution — and is now working to understand how complex emergent properties arise from biological and artificial neural networks. Ganguli is a member of the Neuroscience Theory Center at the Wu Tsai Neurosciences Institute, a Senior Fellow at Stanford's Institute for Human-Centered Artificial Intelligence (HAI), and an associate professor in Stanford's Department of Applied Physics. 

Listen to the episode to learn more!

View all episodes

SUBSCRIBE on Apple Podcasts, Spotify, Amazon Music and more.


Surya Ganguli, associate professor of applied physics

Visit us!

Want to learn more about AI and Neuroscience? Join us at Wu Tsai Neuro's annual symposium on October 17, 2024, which will showcase the frontiers of biological and artificial intelligence research. (More details coming soon!)

Episode Credits

This episode was produced by Michael Osborne at 14th Street Studios, with production assistance by Morgan Honaker. Our logo is by Aimee Garza. The show is hosted by Nicholas Weiler at Stanford's Wu Tsai Neurosciences Institute.
 

If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience. 


Episode Transcript

Nicholas Weiler:

Over the past few years, a powerful new generation of AI tools like DALL-E, ChatGPT, Claude, Gemini, and the rest have blown away our old ideas about what AI can do and raised questions about what it means for computers to start acting intelligent.

Naturally, as a neuroscience nerd, the question that arose for me was what the rise of these systems might teach us about our own biological intelligence and vice versa. That is, whether modern neuroscience has anything to say about how AI could become as flexible, efficient, and resilient as the human brain. Few people are better positioned to speak to the intersection of neuroscience and AI than today's guest.

Surya Ganguli:

My name is Surya Ganguli. I'm a professor in the Applied Physics Department. We work on sort of a combination of neuroscience and artificial intelligence, trying to understand how complex emergent properties have come out of these really interesting neural networks, both biological and artificial.

Nicholas Weiler:

This is From Our Neurons To Yours, from the Wu Tsai Neurosciences Institute at Stanford University, bringing you to the frontiers of brain science. Let's jump in. Surya, welcome to From Our Neurons To Yours. Thank you for joining us.

Surya Ganguli:

Yeah, thanks for having me. Great to be here.

Nicholas Weiler:

There's a huge amount of conversation about artificial intelligence these days. And it's all very exciting. Everyone's exploring these tools that are mostly using these things called large language models or LLMs, and some people talk about them as foundation models, which I think is maybe a bit of a more general term for this kind of algorithm. And I know there are many different kinds of AI.

So when people talk about AI, it could be a neural network, it could be foundation models, and so on. But one question that's been on my mind, the more I've read about and experimented with this new revolution in AI, is what are the ways in which this artificial intelligence is similar to our own biological intelligence, and what are some ways that it's pretty fundamentally different? And since you and your team contributed some of the foundational technologies to this new revolution in AI, you seem like the perfect person to help me understand this.

Surya Ganguli:

It's a great question. I think it's partially an open research question, but I mean, these LLMs and these diffusion models, which sort of came out of my lab that generate images and so forth, they're remarkable. They're incredible. They can do things that none of us predicted they could do a year ago or two years ago, but they also make egregious mistakes that humans would never make. They're not as good at long-range planning or reasoning or so forth. So I think they're similar in some ways but very different in other ways.

Nicholas Weiler:

I'm not always very good at long-range planning, I have to admit. So I give them a little credit there.

Surya Ganguli:

That's true. But you are better than LLMs at it. I can guarantee that.

Nicholas Weiler:

Well, that's comforting. In terms of how they're built, though, I mean, one of the things that I find interesting is, as you said, they're just trying to predict the next word, right. They're, in a sense-

Surya Ganguli:

Right.

Nicholas Weiler:

... very advanced statistics where they're saying, "Based on everything I've read on the internet, and based on your question, here is what I think the most likely thing for me to say is, and I'll say it."

And I don't know, I sometimes wonder, isn't that what I'm doing, right? I've got a lifetime of experience. I've been through a lot of things in different contexts, and in a particular conversation, I don't always know what I'm going to say before I say it. And so part of me wonders whether my brain is doing something similar: based on all of my experience, I'm compressing it and deciding on what seems like the right thing to say next.
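A minimal sketch of the "most likely next thing to say" idea, with a toy vocabulary and made-up scores standing in for a real language model:

```python
import numpy as np

# Toy next-word prediction: a language model scores every word in its vocabulary
# given the context so far, turns the scores into probabilities, and then picks
# (or samples) the continuation. Vocabulary and scores here are invented.
vocab = ["say", "think", "run", "banana"]
logits = np.array([2.1, 1.3, -0.5, -3.0])        # hypothetical model scores

probs = np.exp(logits) / np.exp(logits).sum()    # softmax: scores -> probabilities
greedy_word = vocab[int(np.argmax(probs))]       # most likely next word
sampled_word = np.random.choice(vocab, p=probs)  # or sample, for variety

print(greedy_word, dict(zip(vocab, probs.round(3))))
```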

Surya Ganguli:

Possibly. But the amount of experience that we've had is a lot less than the amount of experience that, say, GPT has had in the language domain. So if GPT-3 read all of the data that it read at the same rate that humans read, it would take 20,000 years to read that amount of data. We have a lot less data, though we do have data from visual domains and so forth. So we are much more data efficient than these systems, and we are able to extract knowledge and capabilities from the external world much more efficiently than these systems can do.

And so, I do think that the way that we organize our knowledge might be quite different from these systems, and definitely the way that we extract knowledge is very different from these systems. One of the things that has occurred to me is that the short-range reasoning humans do might not be as sophisticated as we think it is, because these LLMs can do short-range reasoning where there are not many deductive steps required to go from premise to conclusion. So there might be a sort of simple form of information retrieval or shortcuts that we use in short-range reasoning-

Nicholas Weiler:

Right.

Surya Ganguli:

... that might be similar to what these LLMs do. But again, these are open research questions.
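As a rough order-of-magnitude check on the "20,000 years" figure mentioned above, here is a back-of-the-envelope calculation with assumed numbers; the corpus size, reading speed, and daily reading time are all illustrative guesses:

```python
# Back-of-the-envelope: how long would a human need to read an LLM-scale corpus?
# Every input below is an assumption for illustration, not a measured figure.
tokens = 400e9                  # assumed size of the training corpus, in tokens
words = tokens * 0.75           # rough tokens-to-words conversion
words_per_minute = 250          # typical adult reading speed
reading_hours_per_day = 2       # assumed daily reading budget

years = words / words_per_minute / 60 / reading_hours_per_day / 365
print(f"~{years:,.0f} years of reading")   # lands in the tens of thousands of years
```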

Nicholas Weiler:

I made this argument to a friend the other day about how we are able to do what we can do with our brains so much faster, with so much less data, and with so much less power, which I think we should get into, than these AI algorithms. And they said, "Well, yeah, but we've also had millions of years of evolution," which is another form of training and data compression in a sense, which I thought was a very interesting argument.

And so, our brains are specialized in a number of ways. We have special brain regions for memory, the hippocampus, which we've talked a lot about on this show. We have all these different specializations and built-in structures. So it's not just an all-purpose learning network in our brains. There are a bunch of particular pieces that are doing particular jobs to give us things like memory and long-range planning and so on. So yeah, that sort of set me back and made me think a little bit more about some of those similarities.

Surya Ganguli:

Yeah. So I do think that evolution has discovered an incredible sort of multimodular brain architecture, with different brain regions with specialized learning roles. And we don't understand the theoretical principles governing how multiple brain regions work together. But if you think about the bottleneck of inheritance through DNA, the number of bits in DNA isn't enough to fully specify the detailed wiring diagram of an adult brain.

So it's a combination of a little bit of nature and a lot of nurture that leads to an adult brain. But it's not that much nurture compared to the amount of nurture given to an LLM, right. So there's still something magical and mysterious about the data efficiency of human learning within a lifetime, despite the fact that vertebrate brain evolution occurred over 500 million years, right. That did happen. But the amount of information that seeds the development of a single human is very little, despite 500 million years of evolution.

Nicholas Weiler:

Right. Because we are incredibly adaptable, can do all kinds of things, and can learn whatever language and culture we're born into. Well, on this topic, one of the things that I'm interested to ask you, as an expert in both AI and neuroscience, is about our understanding of the brain: how the brain does this incredible compression of experience, long-range planning, and efficient computation.

Are those things that we could leverage to make AIs more efficient and more effective? Because one of the critiques or concerns about this revolution in AI is that these systems are incredibly energy-hungry. We're in the middle of a global energy crisis, and having a very energy-hungry technology that we want to rest our 21st-century economy on seems thorny. So yeah, can the brain help us out here? Can what we know about the brain help us make AI better?

Surya Ganguli:

I think it can, but the news is not good. So we've worked on two things, energy efficiency and data efficiency. On the energy efficiency side, the news is not good, because I think where we went wrong was the choice of digital computation itself as a medium for computation. The brain is a mixture of analog and digital. And the reason digital computation is so energy-hungry is that we rely on fast and reliable bit flips many, many, many times a second, at gigahertz clock rates.

Nicholas Weiler:

Right. So ones and zeros switching back and forth.

Surya Ganguli:

Ones and zeros switching back and forth. And so each time you switch a one and a zero back and forth, you have tons of electrons shuttling back and forth through transistors. And the laws of thermodynamics demand that every fast and reliable bit flip spends some amount of energy. Biological systems took a completely different route through evolution. Every intermediate step of a biological computation is slow, noisy, and quite unreliable, but just reliable enough that the composite computation is good enough for its purposes.
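For reference, the thermodynamic floor being alluded to here is Landauer's bound: reliably flipping or erasing one bit at temperature T costs at least

E_min = k_B · T · ln 2 ≈ 3 × 10^-21 joules (at room temperature, T ≈ 300 K).

Real transistors spend several orders of magnitude more than that per switch, and they switch billions of times per second across billions of transistors, which accounts for much of the energy gap.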

Nicholas Weiler:

Interesting.

Surya Ganguli:

So, from our perspective, the intermediate, slow, unreliable steps might seem like a terrible design choice, but actually, it's an ingenious design choice because it respects the laws of thermodynamics and doesn't require much energy expenditure. So synaptic failures and all this kind of stuff that happens, they might actually be principles of really good energy-efficient design. That's why our brain only spends 20 watts, so we're all literally dimmer than incandescent light bulbs.

Nicholas Weiler:

I feel that way sometimes.

Surya Ganguli:

Whereas AI systems spend megawatts of power. That's about six to seven orders of magnitude more power consumption-

Nicholas Weiler:

Wow.

Surya Ganguli:

... compared to the human brain.

Nicholas Weiler:

Six to seven orders of magnitude. So what... is there anything we can even compare that to?

Surya Ganguli:

Yeah, it's like the power, right. I mean, as a comparable thing, I don't know the numbers off the top of my head, but one thing that I remember is that if you have Stable Diffusion or DALL-E create an image, the amount of energy required to generate that image is equivalent to the amount of energy required to charge up a cell phone from zero to full charge.
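To put that comparison next to the brain's 20-watt budget, here is a rough sketch; the phone battery capacity is an assumed, typical value:

```python
# Rough comparison of "one image generation ~ one full phone charge" against
# the brain's ~20 W power budget. Battery capacity is an assumed typical value.
phone_battery_wh = 12                            # assumed battery capacity, watt-hours
joules_per_charge = phone_battery_wh * 3600      # ~43,000 J per full charge

brain_watts = 20
minutes_of_brain_time = joules_per_charge / brain_watts / 60
print(f"One phone charge could power a 20 W brain for ~{minutes_of_brain_time:.0f} minutes")
```

By this estimate, the energy behind a single generated image could run a human brain for roughly half an hour.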

Nicholas Weiler:

Wow.

Surya Ganguli:

So that's a pretty remarkable amount of energy consumption, just generating one image.

Nicholas Weiler:

Right. And in a time where we're all trying to be energy-conscious, right. That's quite a lot. I know that these companies are doing carbon offsets and so on, but that's only so useful. Some of these companies are.

Surya Ganguli:

Yeah. But on the data efficiency side, I think there's lots that we can do. So one of the things that we worked on for data efficiency is these scaling laws, right, for how much error goes down with the amount of data. It goes down as a power law, but it's a very shallow power law. So this means that you get diminishing marginal returns.

So you have error going down as some power of the amount of data. So what that actually means in practice is if you want a constant amount of drop in error, say a few percentage points drop in error, you need to 10x the amount of data. And then, if you want a few more percentage points of drop in error on top of that, you need to 10x the amount of data again.
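A minimal sketch of the shallow power-law scaling he describes, with made-up constants; the point is that each fixed drop in error costs a multiplicative increase in data:

```python
import numpy as np

# Toy power-law scaling: error ~ c * N**(-alpha). Constants are invented; a small
# alpha makes the law "shallow," so each 10x of data buys only a modest improvement.
alpha, c = 0.1, 1.0
for n in [1e6, 1e7, 1e8, 1e9]:
    print(f"N = {n:.0e}   error = {c * n ** (-alpha):.3f}")
# Each 10x increase in data multiplies the error by 10**(-alpha) ~ 0.79 here,
# i.e. only a few percentage points of improvement per decade of data.
```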

Nicholas Weiler:

Wow.

Surya Ganguli:

So these power laws are very shallow in that respect, right. You need a multiplicative increase in the amount of data to get an additive drop in the amount of error. So we were wondering, why is this? Why does it have to be this way? A natural explanation is that we're just collecting really large random data sets. So if you have a large data set to begin with and you add another random example to it, or even 10x more random examples, chances are the new data is highly redundant with the old data.

And you don't get much more out of more random data if you already have a lot of random data to start off with. So maybe you can intelligently select your data. So we developed algorithms to intelligently select subsets of non-random data, and we looked at performance as a function of non-random data set size. And we could show, both theoretically and, in some cases, in practice, that if you curate these non-random or non-redundant data sets and ask how the error falls off with non-redundant data set size, you can get an exponential fall-off as opposed to a power-law fall-off.
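Here is a minimal sketch of redundancy-pruning data selection in the spirit of what he describes; it uses a generic greedy farthest-point heuristic over embeddings and is not the actual algorithm from his group's work:

```python
import numpy as np

def select_non_redundant(embeddings, k):
    """Greedily pick the example farthest from everything chosen so far, so the
    kept subset covers the data with little redundancy (a generic illustration)."""
    chosen = [0]                                             # arbitrary starting example
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(chosen) < k:
        idx = int(np.argmax(dists))                          # farthest from current subset
        chosen.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)                     # distance to nearest chosen point
    return chosen

# Usage with random stand-in embeddings for 1,000 examples
subset = select_non_redundant(np.random.default_rng(0).normal(size=(1000, 32)), k=50)
```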

So, operationally, what that means is if you add a few more data points, you can halve your error, right. Add a few more data points, you halve it again, so the error falls off exponentially with non-redundant data set size, which is very different from the power law. Just as an example of the difference between exponentials and power laws, let me ask you a question. Have you ever met anybody who's twice as tall as you?

Nicholas Weiler:

No, close. My [inaudible 00:11:44] postdoc was almost twice as tall as me.

Surya Ganguli:

Great. Have you ever met anyone who has twice as much money as you?

Nicholas Weiler:

Sure.

Surya Ganguli:

Okay. Almost everybody in the world has, because I looked this up recently when I was teaching my class. The 15th richest person in the world only has half as much money as the richest person in the world. So if the two of them have met, the 15th richest person in the world has met somebody who has twice as much money as they do.

Nicholas Weiler:

Wow.

Surya Ganguli:

What is the huge difference between heights and wealth? Wealth is a power law distribution, which means you can get huge deviations in wealth, but height has an exponential fall-off, which means you can't get these huge deviations.

Nicholas Weiler:

Right.

Surya Ganguli:

So when you translate this to how performance scales with data set size, that's the difference. Under a power law, you need 10x the amount of data to drop the error a few percentage points, but under an exponential, a few more data points get you the same drop. There's lots more that we can do there. We as humans, when we learn, we don't just passively read everything and try to predict the next word. We select what we want to read. That's called active learning.

We haven't quite gotten active learning to work with deep learning yet. That's an interesting research direction. When babies are learning about the world, they use curiosity and interesting forms of exploration to actively experiment with their world. So babies are performing active experiments: they have a model of the world, they engage in play that detects violations of that world model, and then when they detect these violations, they correct their world model.

So they're not just, again, passively ingesting data, they're actively doing experiments. And then also we set up social and cultural systems of curriculum design and education and learning, where we carefully sequence the problems we give to our children from first grade to 12th grade and through college. So we carefully design these systems to accelerate their learning capabilities. None of that is implemented in deep learning today. So there's, I think, a lot of work from psychology and neuroscience and so forth that could seep into AI to make these systems more data efficient.
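For the "active learning" idea mentioned above, here is a minimal sketch of uncertainty sampling: instead of ingesting data passively, the learner asks for labels on the examples it is least sure about. It is a generic textbook illustration on synthetic data, not a method discussed in the episode:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # synthetic labels

labeled = list(range(20))                           # small seed set of labeled examples
model = LogisticRegression()
for _ in range(5):                                  # a few rounds of active querying
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X)[:, 1]
    uncertainty = -np.abs(probs - 0.5)              # closest to 0.5 = least certain
    uncertainty[labeled] = -np.inf                  # never re-query labeled points
    labeled.extend(np.argsort(uncertainty)[-10:].tolist())   # label the 10 most uncertain
print(f"labeled {len(labeled)} of {len(X)} examples")
```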

Nicholas Weiler:

Yeah, it's often occurred to me that a lot of the systems we're working with now are like self-educated savants, right. They've read absolutely everything, but they have no judgment. They don't have a lot of morality built in. Some of them are starting to get there, but maybe we need some parenting principles.

Surya Ganguli:

Yeah. In some sense, actually this reinforcement learning from human feedback where language models will generate sentences and humans provide feedback to correct the language models, that's a form of human parenting that we're giving these language models.

And that was a key aspect of being able to release them into the wild so that they could do interesting things, along with instruction tuning and so forth. So there's a little bit of that parenting going on [inaudible 00:14:21] that these companies are doing, but I think we haven't quite nailed the best way to do that yet.

Nicholas Weiler:

Yeah. So we've been talking a lot about how understanding the way the brain does what it does, how our biological intelligence works, can teach us how to make artificial intelligence better, more efficient, more selective, and so on. But I'd love to turn to what AI is teaching us about how our brains work. Our brains are very complicated systems, and people have been collecting gobs and gobs of data about the brain at the cellular level and at the systems and network level.

But I guess one thing that's come up is, yeah, maybe we can use AI here. People talk about building these digital twins, where you take a huge amount of data and try to create, essentially, a statistical model of how a brain is working, or some other system, the immune system or something like that, and use that to do virtual clinical trials, so you could test various interventions in this digital twin.

But I guess one of the questions in neuroscience is, well, if you build something that's just as complex and, in a sense, just as unknowable, because it's really hard to figure out what these algorithms are doing under the hood, then you haven't really solved your problem of wanting to understand how the brain works. It's a little bit like Borges' story about the cartographers who create a map the same size as the kingdom. It doesn't really help you. So, you're very interested in this topic of explainable AI. Could you tell us a little bit about what that is and where you think that's going?

Surya Ganguli:

Yeah, you hit the nail on the head, right. We now have the capability to collect large data sets and fit neural models to those data sets or train neural networks to solve the same tasks the animal is doing and then compare the model to the brain. And we actually get pretty accurate models of large-scale neural activity patterns. But then, are we just replacing something we don't understand, the brain, with something else we don't understand, our model of it? And that sort of keeps me up at night. As a statistician, your goal is to explain a fraction of the variance in the data, but that's not the same thing as scientific explanation, as my colleague Daniel Fisher likes to say.

We'd like to extract a conceptual understanding of how our accurate model of the brain works, and we'd like to extract testable hypotheses from it. So our approach is to try to do model reduction on the model. We try to extract sub-circuits or sub-components of the model that are important for explaining the response of the brain in restricted situations. So in a collaboration with Steve Baccus, which appeared in Neuron recently, we were able to do this for the retina, where we could get an accurate model of the retina's response to natural movies.

It was a deep neural network that matched the architecture of the retina, the gross architecture of the retina, and it did really well. It did better than any other model in existence at modeling responses to natural movies. And then we asked, well, in the retina, we have two decades of experiments. If we did those two decades of experiments using these artificial stimuli, could this model get us the same results? And, remarkably, [inaudible 00:17:10] the right result for every single experiment.
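To make "a deep network matching the gross architecture of the retina" concrete, here is a minimal sketch of a three-stage convolutional model mapping a short movie clip to predicted ganglion-cell firing rates. The layer counts and sizes are placeholders, not the architecture from the Neuron paper:

```python
import torch
import torch.nn as nn

class RetinaLikeCNN(nn.Module):
    """Three stages loosely mirroring photoreceptor -> bipolar -> ganglion processing.
    Filter counts and kernel sizes are illustrative placeholders."""
    def __init__(self, n_cells=50, history=40):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(history, 8, kernel_size=15), nn.Softplus())
        self.stage2 = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11), nn.Softplus())
        self.readout = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_cells), nn.Softplus())

    def forward(self, clip):
        # clip: (batch, time frames, height, width) of a natural movie, time as channels
        return self.readout(self.stage2(self.stage1(clip)))

model = RetinaLikeCNN()
rates = model(torch.randn(2, 40, 50, 50))   # predicted firing rates for 50 model cells
```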

Nicholas Weiler:

Wow.

Surya Ganguli:

Then we're like, "Okay, how did it do that? Some of these experiments are very qualitative." For example, if you show a periodic flash and then you omit a flash, the retina will send a complaint at the time of the omitted flash. So the retina is like a predictive world machine. How is it doing the prediction?

So we're able to use explainable AI techniques to extract a very simple sub-circuit that explained the response to each of the eight classes of stimuli that were explored over two decades. And in every single case, the sub-circuit we extracted was this consensus sub-circuit that many other experiments had come to piecemeal.

Nicholas Weiler:

So the model had done it in sort of the same way.

Surya Ganguli:

Yeah, exactly. So our model of the retina solves these problems in the same way that experiments suggest the actual retina solves these problems. And so, in some sense, this work could reproduce two decades' worth of combined theory and experiment. Of course, we never would've known we were right had the two decades of experiments not been done. But you can start to imagine how you can accelerate this theory-experiment cycle in neuroscience.

Instead of devising ingenious single stimuli one at a time, you throw natural stimuli at the brain, you model the brain's responses to those natural stimuli or during natural behaviors, you analyze the model for extractable, testable predictions and conceptual hypotheses, and then you design new experiments based on the analysis of the model. And you might be able to do all of this in an algorithmic loop, like an AI scientist that models the data, analyzes the model, extracts testable predictions, and designs new experiments. And that's what I'm super excited about: the cyclic loop between AI and neuroscience.
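A minimal sketch of the cyclic "AI scientist" loop he outlines; every helper function here is a trivial stub standing in for a real modeling or laboratory step:

```python
# Skeleton of the model -> analyze -> predict -> experiment loop described above.
# Each helper is a placeholder stub, not a real analysis pipeline.
def run_experiment(stimuli):          return [f"response to {s}" for s in stimuli]
def fit_model(data):                  return {"fit_to_n_responses": len(data)}
def reduce_model(model):              return ["candidate sub-circuit"]
def extract_predictions(subcircuits): return [f"testable prediction from {c}" for c in subcircuits]
def design_experiments(predictions):  return [f"stimulus probing '{p}'" for p in predictions]

def ai_scientist_loop(natural_stimuli, n_rounds=3):
    data = run_experiment(natural_stimuli)            # record responses to natural stimuli
    for _ in range(n_rounds):
        model = fit_model(data)                       # model the brain's responses
        subcircuits = reduce_model(model)             # extract interpretable sub-circuits
        predictions = extract_predictions(subcircuits)
        data += run_experiment(design_experiments(predictions))   # close the loop
    return model, predictions

model, predictions = ai_scientist_loop(["natural movie clip"])
```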

Nicholas Weiler:

Yeah, because something we've talked about several times on the show as well is for all we want to understand the human brain, the things we want to understand most are often the things that are the hardest to experimentally analyze because, okay, we want to understand more about the human brain than the mouse brain, but it's harder to study a human brain in the lab.

We want to understand how the brain builds itself, but it's much harder to understand the earliest stages of brain formation than it is to study the brain later in life. So these digital models, in a way, they remind me of conversations we've had with Sergiu Pasca, who's doing brain organoids, these little balls of brain tissue that can be grown in the lab and studied.

Surya Ganguli:

Yeah, those are very cool.

Nicholas Weiler:

It gives you something tractable. It gives you a digital version of what might be going on in a brain circuit.

Surya Ganguli:

Exactly. And it's much more accessible. You can do experiments on it. Exactly. And so I'm very excited about these sort of in silico digital models and twins that we have because we can do any experiment we want on them.

Nicholas Weiler:

Are there any interesting findings that have come out of this that are expanding on what we knew from the decades of experiments? Or are we currently at the stage of, "Okay, we can now get the digital models to recapitulate what we know is going on in the brain, and the next step is to start seeing if we can discover some new things?"

Surya Ganguli:

Yeah. No, we already made some new predictions about this omitted stimulus response. When there's a violation of periodicity, the retina sends a complaint. We made a new prediction about what kind of circuit creates that. So we made a very specific prediction, and that experiment hasn't been done yet, but in this way, you can extract new predictions from the cyclic loop between neuroscience and AI.

Nicholas Weiler:

That's right. And it can generate things that then the experimentalists can go and say, "Oh, let me see if that's actually happening in the retina."

Surya Ganguli:

Exactly.

Nicholas Weiler:

One thing I'll mention, just in case listeners have the impression that the retina is just a photodetector, like in a digital camera. One of the things that I find really fascinating about this new paper is that you're sort of decomposing the retina.

It has all these different layers, a little bit like some classic neural network models, where there are things that extract some simple patterns, and then those get fed into other types of cells that extract yet more patterns. So there's actually quite a lot of computational complexity going on in the retina before the image ever gets sent to your brain.

Surya Ganguli:

Yeah. It's like a three-layer neural network that contains within it a predictive model of the world. For example, your retina knows about Newton's First Law of Motion, which is that an object that is in motion tends to remain in motion.

How do we know it knows that? Because if an object is moving and then it suddenly changes direction, the retina will send a huge complaint to the brain that it wouldn't have sent otherwise if it had kept moving. So it's like saying, "Wait a minute, this object violated Newton's First Law of Motion. I better warn the brain."

Nicholas Weiler:

Something's up.

Surya Ganguli:

So your retina does that, which is remarkable. And our model explained how it does that, just through fitting a model to natural movies and then using explainable AI, applied to the model, for a bar that reverses direction. It explained exactly the spatiotemporal computation that led to this detection of a violation of Newton's First Law.

Nicholas Weiler:

That's fantastic. Well, we've talked about how neuroscience may be able to inform AI, and we've talked about how AI may be able to advance neuroscience. And so the last thing I wanted to touch on is you've also done some really interesting work with Karl Deisseroth here at Stanford, who invented this technology, optogenetics, which lets you turn particular neural circuits or particular neurons on and off using light.

You did a really interesting experiment where you actually had AI and the brain almost working together. And this obviously, I think, is something that's on people's minds. Maybe at a macro level, how are we going to interact with AI? How is AI going to become a partner to enhance our capabilities? But do you see that as something that we can be exploring even at the level of having artificial intelligence and biological intelligence actually interacting?

Surya Ganguli:

Potentially. Yeah, we're working on that. So the technology that's powering it, from Karl Deisseroth's lab, is that he can now record 10,000 neurons from, say, the mouse visual cortex, and he can spatiotemporally stimulate with arbitrary patterns up to about a thousand neurons now. So now you can do these closed-loop experiments where you can not only record the brain but stimulate the brain in a closed-loop fashion.

Nicholas Weiler:

Meaning, based on your recordings, it can decide what stimulation to deliver.

Surya Ganguli:

Exactly. So now you can put a controller in there, where the controller takes as its input the past history of neural activity patterns and designs a stimulation pattern to do whatever you want to the brain, using these thousand neurons as your control [inaudible 00:23:25]. So we've been analyzing theoretically what kind of control laws you can use to modify different biological neural networks. And so, in a very simple version of that study that appeared in Science recently, we were able to learn the language of mouse visual cortex and then write in patterns that could make the mouse act as if it were perceiving a desired stimulus we wanted it to perceive.

In this case, just an oriented grating. And the way we knew it perceived that, or at least thought it did, was through a behavioral readout. So we had tied different stimuli to different actions. Then we got rid of the visual stimuli and just wrote in directly: we stimulated a small number of neurons that would've fired had a particular stimulus come in, and the mouse acted as if it saw that stimulus. It did the right action for that stimulus and not the other stimulus.

So basically, at a high level, we could write percepts into the mouse's brain, or at least make the mouse behave as if it had certain percepts in its brain. So we could try to do that on a much more sophisticated level using a control theoretic loop between observation and actuation into the brain. And I think that's really exciting. But there's lots of theoretical and computational work that needs to be done to turn this technology into a kind of interesting control loop. And I think the possibilities are really interesting and open-ended.
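A minimal sketch of the closed-loop idea: a controller reads the recent recorded activity and chooses a stimulation pattern intended to push the population toward a target state. The linear dynamics, the simulated "brain," and all the numbers are invented stand-ins for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_recorded, n_stim = 100, 20
A = 0.9 * np.eye(n_recorded) + 0.01 * rng.normal(size=(n_recorded, n_recorded))  # toy dynamics
B = rng.normal(size=(n_recorded, n_stim))     # how stimulating 20 neurons drives the population
target = rng.normal(size=n_recorded)          # desired activity pattern (the "percept")

x = np.zeros(n_recorded)
for t in range(50):
    # Controller: pick the stimulation that best pushes the predicted next state toward target
    u, *_ = np.linalg.lstsq(B, target - A @ x, rcond=None)
    u = np.clip(u, 0.0, 1.0)                  # stimulation is bounded and non-negative
    x = A @ x + B @ u + 0.05 * rng.normal(size=n_recorded)   # the "brain" responds, with noise
print("distance to target:", round(float(np.linalg.norm(x - target)), 2))
```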

Nicholas Weiler:

Yeah, it does also put me in mind of The Matrix a little bit, right, writing experiences into the brain.

Surya Ganguli:

Yeah.

Nicholas Weiler:

So I'm curious, just to close, what do you see as the future of that? Where does that get us? Why does having an AI writing things into the brain help us expand our capabilities?

Surya Ganguli:

Yeah. Well, one possibility is closed-loop brain-machine interfaces. Right now, brain-machine interfaces operate in open loop. They just record neural activity and then drive an end effector, right.

Nicholas Weiler:

So this is something like a robotic arm for someone who's paralyzed or something like that.

Surya Ganguli:

Yeah, like a robotic arm or something like that. Exactly. Or a cursor on a screen or something like that. So you could imagine closed-loop brain-machine interfaces that use a feedback loop to stabilize neural activity or to correct it if it's not under control. You could imagine memory prosthetics, right. Say you're trying to think of where you stored your keys; it might detect that you're thinking of that, and then it might write in patterns there.

That's really science fiction at the moment because getting this to work in humans could be quite difficult compared to mice. But we're kind of doing the basic groundwork in mice to try to understand what is possible and what is not. And by the way, the other thing that we can do is we can create bio-hybrid artificial intelligences where you train a neural network to solve a problem, but you also train it to mimic neural activity from an animal that's solving the same problem.

So these neural networks are instructed both by the world and by the brain, and that's eminently doable right now. And we know that AI systems are very fragile, but brains are much more robust. So does the robustness of the brain seep into the bio-hybrid neural network, right? That's another interesting kind of crazy direction that we're playing around with.
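A minimal sketch of the bio-hybrid training objective he describes: the network is optimized both to solve the task and to match recorded neural activity, with a weighting between the two terms. The tiny architecture and the random stand-in data are assumptions for illustration:

```python
import torch
import torch.nn as nn

class BioHybridNet(nn.Module):
    """Tiny network whose hidden layer is encouraged to mimic recorded neural activity."""
    def __init__(self, n_in=20, n_hidden=50, n_out=2):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = self.hidden(x)
        return self.out(h), h

net = BioHybridNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
task_loss_fn, neural_loss_fn, lam = nn.CrossEntropyLoss(), nn.MSELoss(), 0.5

# Stand-in batch: 64 stimuli, task labels, and recorded activity of 50 neurons
x, y, recorded = torch.randn(64, 20), torch.randint(0, 2, (64,)), torch.randn(64, 50)

logits, hidden = net(x)
loss = task_loss_fn(logits, y) + lam * neural_loss_fn(hidden, recorded)  # task + brain-matching
opt.zero_grad(); loss.backward(); opt.step()
```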

Nicholas Weiler:

Wow.

Surya Ganguli:

But that's eminently doable at the moment because we're just using neural activity to instruct AI systems.

Nicholas Weiler:

Mm-hmm. Mm-hmm. Well, we'll definitely have to come back. I want to hear all about that once you've got some results there. Maybe we can have another conversation.

Surya Ganguli:

Sounds good. Yeah. Thanks for having me.

Nicholas Weiler:

Yeah. Thank you so much for coming on the show. Thanks again to our guest, Surya Ganguli. We'll include links for you to learn more about his work in the show notes, and we actually have a big symposium coming up on the topic of neuroscience and AI in October of 2024. Check it out on our website, and if you're interested, come join us and hear about the latest in this exciting field. If you're enjoying the show, please subscribe and share with your friends. It helps us grow as a show and bring more listeners to the frontiers of neuroscience.

We'd also love to hear from you. Tell us what you love or what you hate in a comment on your favorite podcast platform, or send us an email at neuronspodcast@stanford.edu. From Our Neurons To Yours is produced by Michael Osborne at 14th Street Studios, with production assistance from Morgan Honaker. Our logo is by Aimee Garza. I'm Nicholas Weiler at Stanford's Wu Tsai Neurosciences Institute. See you next time.