Read Montague discusses neural networks in his office at the Fralin Biomedical Research Institute. He said he plays the piano, at left, only in secret. Photo by Tad Dickens.

The U.S. Constitution’s Fourth Amendment protects citizens from unreasonable searches and seizures. To date, no one has raided the human mind for evidence of a crime.

The imaging technology for such seemingly dystopian moves is not on the immediate horizon, and those who do brain imaging in 2023 aren’t interested in illicit secrets, pre-crime or simple gossip fodder. Scientists including Roanoke-based Read Montague are pioneering ways to help people with psychiatric problems, neurological diseases and drug addiction.

Not that Montague hasn’t thought about artificial intelligence’s future implications.

“One of the dangers of when we decode people is your Fourth Amendment rights will have to be de-granularized,” Montague said. “We’re going to really have to think about what is it that we want to protect inside our own heads.”

Montague, 63, is wary of a potential future that he believes would include a device capable of uploading and decoding human brain activity into comprehensible thoughts, ideas and narratives.

“I would not let somebody scan my brain. Ever. No way,” he said. “My son, who’s 16 … went, ‘What are we going to do about all these technologies?’ And I went, ‘That’s your generation’s problem. You’re going to have to really start thinking about it.’”

Society would seemingly have some time to reckon with that aspect of AI, another of humans’ fantastical creations. Montague, meanwhile, has no problem with today’s brain scanning devices, which use AI to interpret their data and are all about helping people. For example, he didn’t shy away from the optically pumped magnetometer, a headset that captures the brain’s magnetic signals and is a recent addition at the lab he runs, the Center for Human Neuroscience Research, part of the Fralin Biomedical Research Institute at VTC. One of the helmet’s purposes is to study two-party interactions on social media, Montague said.

He is among the planet’s leading AI experts, with 30 years of pioneering work in computational neuroscience. His work includes more than researching problems in the brain. Montague, also a research professor at the Virginia Tech Carilion School of Medicine, is using the same tools to chase down Parkinson’s disease and childhood epilepsy.

On Thursday, he will open the 2023-2024 Maury Strauss Distinguished Public Lecture Series at the Fralin Institute. The free, public event — titled “Machine Learning and Human Thought – A Modern Frontier” — is set for 5:30 p.m., with a 5 p.m. reception at 2 Riverside Lane. The institute will livestream the presentation via Zoom.

This year’s Strauss lectures

See the full list of 2023-24 Strauss lecture series speakers at bit.ly/fralin-institute-strauss-lectures

Don’t expect talk of a dark future. Montague’s work is about a bright present and future, and it includes applying machine-learning models to accurately interpret vast amounts of functional magnetic resonance imaging data and recording the brain’s magnetic waves with unprecedented resolution, opening new ways to visualize brain activity, said Michael Friedlander, the Fralin Institute’s executive director.

Montague will be the first speaker from inside the building to give a Strauss lecture. Friedlander began the series in 2010, and eight years later, Roanoke real estate developer and philanthropist Maury Strauss donated $1 million to bring in speakers from outside the region: Nobel Prize laureates and other distinguished scientists, doctors and researchers.

“But I’ve heard from people in the community who say, ‘We know you guys are doing all this incredible research. We’d like to hear from you as well,’” Friedlander said. “I decided this year, with [Strauss’] blessing, that we would start the program with an inside speaker, i.e. Read Montague. Who better to kick it off, to give people a taste and flavor of what’s going on right at the research institute, not just around the world. So we’ll see how it goes. I expect the reaction of the public will be good, and I expect we’ll do more of this in future years as well.”

It will be Montague’s first time speaking in Roanoke, but he’s experienced at giving presentations worldwide. Last year he traveled to Sweden’s Karolinska Institutet, where he joined professors from Cambridge, Princeton, Brown and elsewhere for a Nobel mini-symposium. The topic: Dopamine as a neural substrate of reward prediction and psychopathology.

Friedlander and Montague go back to the latter’s days as a bored medical student at the University of Alabama at Birmingham. Montague, an Atlanta native who had graduated from Auburn University with a mathematics degree, heard Friedlander lecture to his fellow med students, then asked him whether there was a chance he could get involved in research.

“He and I had a long talk, and he impressed me in a matter of minutes as somebody who was not only bright, but with a real burning interest to go deep in the science of medicine,” Friedlander said. “So it didn’t take me long to figure out it would be great to get him involved with research in my lab.”

Friedlander brought him on as a part-timer at his neurobiology lab, then gave him full-time work. Montague, having shucked his medical studies in Birmingham, received a doctoral degree in physiology and biophysics in 1988. From there, Montague received two fellowships: one with Nobel Prize winner Gerald Edelman in New York, and the other with one of his big influences in California.

Montague’s California fellowship sprang from a paper he read during his Birmingham days: “A Learning Algorithm for Boltzmann Machines,” from 1985. Geoffrey Hinton, now known as the “Godfather of AI,” and Terrence Sejnowski were two of its authors. Sejnowski leads the Salk Institute’s Computational Neurobiology Laboratory, and Montague wanted to go there. Two years after receiving Montague’s letter asking for a fellowship, Sejnowski invited him out.

The paper was about machine learning, a part of AI. The Howard Hughes Medical Institute funded Sejnowski’s lab, and the Salk Institute was supportive, but there was a major problem at the time, Montague said. It centered on the artificial neural networks the lab was trying to master.

Neural networks can be biological or machine-based. In AI, they are brain-inspired computational tools that learn from the data they are fed.

“You have a system,” Montague said. “It has some input, like a behavioral problem, a query, foraging on a flower, it produces some output. That output gets compared to something. You have some critic, automated or otherwise, that comes back, and then you reorganize here. That’s learning. That’s the way people like me think about learning. And you can see, that doesn’t distinguish natural systems from artificial systems in any way.”
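
That input-output-critic loop is, at bottom, the error-driven learning behind most of today’s neural networks. Below is a minimal sketch in Python of the idea Montague describes, using a single linear neuron and the classic delta rule; the toy task, learning rate and names are illustrative assumptions, not code from his lab.

```python
import random

def predict(weights, inputs):
    """The 'system': a single linear neuron mapping an input to an output."""
    return sum(w * x for w, x in zip(weights, inputs))

def learn(data, learning_rate=0.1, epochs=50):
    """Error-driven learning: a critic compares each output to a target,
    and the error signal reorganizes the weights (the delta rule)."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for inputs, target in data:
            error = target - predict(weights, inputs)  # the critic's feedback
            # "Reorganize here": nudge each weight to shrink the error.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Toy task (invented): the hidden rule the system must discover is 2*x1 + x2.
random.seed(0)
data = [((x1, x2), 2 * x1 + x2)
        for x1, x2 in [(random.random(), random.random()) for _ in range(100)]]

print(learn(data))  # ends up near [2.0, 1.0]
```

Swap the explicit target for a reward-prediction error and the same loop becomes the temporal-difference learning that Montague and colleagues have tied to dopamine signaling.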

He used a bee as an example.

A bee’s life lasts about 30 days, and “a bee has to keep learning to stay alive its whole life,” he said. “In fact, a bee has to keep learning across the course of a day because the sun comes across like this, different kinds of flowers start opening their petals, different sources of nectar appear at different times of day. And they look different. It could be a cloudy day … they have to relearn this stuff all the time. So bees are like humans in that way: Bees have to jump to conclusions, and they have to forget quick. Humans are good at that kind of stuff.”

So are artificial neural networks, except that in the late 20th century, the computing power was too weak.

“You couldn’t train them on any data,” Montague said. “You didn’t have any computers big enough. … You couldn’t expose them to enough information to extract any kind of structure.”

Nowadays, the computers that researchers use for neural networks are 10 million times faster than those of the late 1980s, he said. Such speed and power opened the way for the deep learning that helps researchers such as Montague make breakthroughs.
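
The bee’s constant relearning has a simple computational analogue: an online estimate with a high learning rate, which jumps to conclusions from the latest evidence and forgets stale evidence quickly. Here is a toy sketch; the nectar payoffs and the learning rate are invented for illustration.

```python
def update(estimate, observation, learning_rate):
    """One online learning step: lean the running estimate toward the
    newest observation. A high rate adapts fast and forgets fast."""
    return estimate + learning_rate * (observation - estimate)

# Invented scenario: a flower pays off well in the morning, then dries up.
payoffs = [1.0, 1.0, 1.0, 0.2, 0.2, 0.2]
estimate = 0.0
for nectar in payoffs:
    estimate = update(estimate, nectar, learning_rate=0.7)
    print(round(estimate, 2))  # 0.7, 0.91, 0.97, 0.43, 0.27, 0.22
```

A few visits after the change, the old belief is effectively gone: the bee-like jumping to conclusions and quick forgetting that Montague describes.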

His research has spanned the neural basis of risky decision-making, confirmation bias, risk-reward analysis, mental states during the simulated commission of a crime, impulsiveness and political ideologies, according to the Fralin Institute. In 2011, Montague’s group at Fralin was the first to observe nano-scale variations in brain chemicals in awake humans. In work from 2016, 2018 and 2020, he discovered how dopamine and serotonin jointly underpin sensory processing and human perception, according to the institute.

After his fellowships, he went to Baylor College of Medicine, in Houston, where he worked until he came to Roanoke. He helped bring Friedlander from Birmingham to Houston. After Virginia Tech hired Friedlander to run what would become the Fralin Institute, Friedlander brought Montague aboard in 2010.

“He’s been very impressive his entire career,” Friedlander said. “I always knew he was destined to do very important things, creative things, and he has. He’s never disappointed. … I’d say the last five to 10 years of his career are the most exciting ever, with the work he’s been doing recently. It’s truly transformational.”

Read Montague’s research has spanned the neural basis of risky decision-making, confirmation bias, risk-reward analysis, mental states during the simulated commission of a crime, impulsiveness and political ideologies. Photo by Tad Dickens.

Montague, during a long and wide-ranging conversation in his office, used his whiteboard to draw out lessons about machine learning. He discussed such science paragons as Alan Turing, who, among other accomplishments, broke the Nazi Enigma code during World War II (“completely underappreciated for the work that he did”), and Isaac Newton, who developed calculus to explain the solar system.

“What would amaze [Newton] about the modern world?” he said. “Heavier-than-air flight … and the thousands of planes in the sky at any one moment. We’re governing all of that. And then materials, right? Plastics and glasses and all this stuff that we can make. What is that machine doing? Well, we learned how to melt sand down, take predicate logic and run it a trillion times faster than it could run through your brain. What?

“Three hundred and fifty years. We’re not going to recognize things in 350 years.”

And he discussed Hinton, who was Sejnowski’s partner on the long-ago paper that drew Montague toward his life’s work. 

“He knows a lot about the natural world, but when he works on networks, he always uses toy examples,” Montague said. “He’s made an entire career of playing with toy examples. I’m not kidding. And he always knew networks would be a lot smarter than people. I don’t think he expected what happened now.”

Hinton, whose work in 2012 at the University of Toronto laid the foundation for AI’s explosion, recently left Google, which bought the company that sprang from his AI work with two of his graduate students. Hinton said he left in order to speak freely about the potential dangers of AI, The New York Times and other outlets reported. The British scientist had gone to work at the University of Toronto because most AI research in the United States at the time was Pentagon-funded, the Times reported.

“The guy’s already made a lot of money, too,” Montague said. “A crisis of conscience after making $100 million.”

He added: “He’s not disingenuous. He’s not new to that concern. … But the genie’s not going back in the bottle. It’s too easy to do. A kid with a computer.”

He noted that today’s smartphones have more computing power than the supercomputers of the late 20th century. 

“A consumer device. $600. And I know that one of the [supercomputers] that I worked on cost $30 million. … That is a big change.”

As for the neural networks that fuel so much AI work now?

“We’re going to have to give them morals,” he said. “The only way they’re powerful and useful is when you let them loose on ambiguous data, and you give them some freedom.”

He name-checked iconic science fiction writer (and actual scientist) Isaac Asimov and his laws of robotics.

They are, according to Britannica.com:

A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And the fourth law he added later, which he ranked ahead of the others as the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

“They’re good ideas, I think,” Montague said. “But for everything that you can say, one thing that working in this area teaches you is there are always unintended consequences, unintended special cases.”

Tad Dickens is technology reporter for Cardinal News. He previously worked for the Bristol Herald Courier...