Collage of five portraits and a Michigan Engineering block "M" logo in the top left. The portraits are (top middle) Nicholas Kotov, (top right) Peter Adriaens, (bottom left) Max Li, (bottom middle) Qing Qu, (bottom right) Fraser King.

AI symposium: Michigan Engineering speakers share how they use AI in research

Beyond using AI to make predictions and scientific discoveries, engineers at the MIDAS symposium discussed how to improve its interpretability and prevent its misuse.

When he first heard that artificial intelligence and machine-learning models could lead to scientific breakthroughs, Nick Kotov was skeptical. He was sure that he and his colleagues had enough expertise in chemistry, biology, and materials science to design nanobiotics—nanoparticles that serve as antibiotics.

A portrait of Nicholas Kotov in a lab.
Nicholas Kotov, the Irving Langmuir Distinguished University Professor of Chemical Sciences and Engineering and the Joseph B. and Florence V. Cejka Professor of Engineering. Photo credit: Brenda Ahearn, Michigan Engineering.

Kotov’s team tried to design particles made from inorganic materials, such as the zinc oxide found in sunscreen, that could kill pathogens by binding with, and shutting down, their proteins. But all of his attempts to predict what nanoparticle shapes and materials would be most effective were unsuccessful. Machine-learning tools were his Hail Mary.

“For a long time, I resisted using machine learning. I didn’t like it; I didn’t trust it,” said Kotov, who is the Irving Langmuir Distinguished University Professor of Chemical Sciences and Engineering and the Joseph B. and Florence V. Cejka Professor of Engineering. “AI models have thousands of parameters without clear meaning, [so] it appeared to me as lazy-man science. It wasn’t until I was frustrated to the bottom of my stomach with our inability to predict these interactions that I said ‘yeah, let’s try it.’”

Other faculty from the College of Engineering shared their own stories of how they use AI for science and engineering at the Michigan Institute for Data & AI in Society (MIDAS) symposium on March 18 and 19. The annual event aims to enable AI-driven research breakthroughs by inviting faculty to share lessons learned and by offering sessions for research brainstorming and skill building. The Michigan Engineers discussed the discoveries they’ve made and their tips for building trustworthy, interpretable AI models.

Kotov changed his tune over the course of his journey with AI. The former skeptic is now recognized as a leader in developing machine-learning tools to study interactions between proteins and nanoparticles. Most tools predict protein interactions based on the order of their amino acid building blocks, which determines the protein’s structure and, by extension, how other substances will fit into specific grooves where the chemical interactions happen. 

Rather than focus on the order of the building blocks, Kotov’s models predict with around 80 percent accuracy how proteins interact based on their overall shapes and how those shapes can change. The approach can be applied directly to inorganic nanoparticles to accurately predict how they will interact with proteins. Kotov has used those insights to design a variety of nanoparticles, including some that clear multiple SARS-CoV-2 variants from the lungs of mice. His innovations with bio-inspired nanoparticles recently earned him election to the National Academy of Engineering.
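
As a rough illustration of the general idea (a sketch, not Kotov's actual pipeline), a shape-based predictor could featurize each protein-nanoparticle pair with geometric descriptors and train an off-the-shelf classifier on known interactions. The descriptors, data, and model choice below are hypothetical placeholders:

```python
# Hypothetical sketch: predicting protein-nanoparticle binding from
# shape descriptors rather than amino-acid sequence. The features and
# data are invented placeholders, not Kotov's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: geometric descriptors of a protein-nanoparticle pair,
# e.g. chirality measure, surface curvature, aspect ratio, flexibility.
X = rng.normal(size=(500, 4))
# Synthetic labels: 1 = binds and inhibits the protein, 0 = no interaction.
y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```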


A portrait of Peter Adriaens.
Peter Adriaens, professor of civil and environmental engineering. Photo credit: Joseph Xu, Michigan Engineering.

Peter Adriaens, a professor of civil and environmental engineering, has been developing machine-learning models that advise companies on how to mitigate financial risks from water shortages, flooding, and other water-related problems caused by climate change. His AI journey started when pension fund managers told him that water risks were beginning to destabilize capital markets, yet there was no easy way to quantify the financial costs of those risks.

He has developed models that combine data on private facilities’ water use with regional water stress. These tools can help companies evaluate the water risks of their facilities and calculate the climate-adjusted value of their assets. Adriaens hopes companies will use the tool to negotiate terms in their contracts with public water utilities or to avoid building facilities that would become stranded assets.
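
One way to picture the calculation (a simplified sketch, not Adriaens' actual model) is to discount a facility's book value by a risk factor built from its water use and the stress of the region it draws from. The weighting below is invented for illustration:

```python
# Hypothetical sketch of a climate-adjusted asset valuation: discount a
# facility's book value by a risk factor built from its water use and
# the water stress of its region. The interaction term and maximum
# discount are invented examples, not Adriaens' actual model.

def climate_adjusted_value(book_value: float,
                           water_intensity: float,   # facility use vs. regional supply, 0-1
                           regional_stress: float,   # regional water-stress score, 0-1
                           max_discount: float = 0.4) -> float:
    """Discount asset value in proportion to combined water risk."""
    risk = min(1.0, water_intensity * regional_stress)  # toy interaction term
    return book_value * (1.0 - max_discount * risk)

# A facility drawing heavily on a stressed basin loses more of its value.
print(climate_adjusted_value(10_000_000, water_intensity=0.8, regional_stress=0.9))
print(climate_adjusted_value(10_000_000, water_intensity=0.2, regional_stress=0.3))
```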


A portrait of Max Li.
Max Li, assistant professor of aerospace engineering. Photo credit: Brenda Ahearn, Michigan Engineering.

Max Li, an assistant professor of aerospace engineering, created an AI chatbot to help air traffic controllers prevent situations in which airplanes must circle in the sky before landing or get redirected to other airports. Bad weather can prevent airplanes from landing where and when they originally intended, which burns extra fuel and clogs up airspace. Ideally, planes would instead wait out such events on the ground before takeoff through ground delay programs.

Li hopes that his tool will one day help air traffic controllers quickly schedule successful ground delays based on information from past ones. The chatbot is trained on around 20 years of ground delay programs from the Federal Aviation Administration’s Operational Information System. The data include the times, dates, and locations of the ground delays, along with annotations of what caused each delay, all of which air traffic controllers can quickly reference by asking the chatbot questions. The tool is not yet publicly available; Li is currently gathering stakeholder input on how it would be most useful.
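
A minimal sketch of the retrieval step behind such a chatbot might look like the following. The records, field names, and keyword-matching shortcut are invented stand-ins, since the real system would pair FAA data with a language model:

```python
# Hypothetical sketch of the lookup behind a ground-delay chatbot:
# match a controller's question against historical ground delay program
# (GDP) records. This toy uses keyword overlap instead of a language
# model, and the records are invented examples.
from dataclasses import dataclass

@dataclass
class GDPRecord:
    airport: str
    date: str
    duration_hr: float
    cause: str  # annotation of what triggered the delay

RECORDS = [
    GDPRecord("DTW", "2019-01-28", 6.0, "snow and low visibility"),
    GDPRecord("EWR", "2021-07-08", 3.5, "thunderstorms over arrival fixes"),
    GDPRecord("SFO", "2018-06-14", 4.0, "marine-layer fog reducing arrival rate"),
]

def answer(question: str, records: list[GDPRecord]) -> GDPRecord:
    """Return the record sharing the most words with the question."""
    words = set(question.lower().replace("?", "").split())  # crude punctuation strip
    return max(records,
               key=lambda r: len(words & set(f"{r.airport} {r.cause}".lower().split())))

print(answer("What caused the fog delay at SFO?", RECORDS))
```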


A portrait of Qing Qu.
Qing Qu, assistant professor of electrical and computer engineering. Photo credit: Silva Cardarelli, Department of Electrical and Computer Engineering, University of Michigan.

Qing Qu, an assistant professor of electrical and computer engineering, shared methods to prevent generative AI models from creating images with nudity or the faces of real people, which could be combined into harmful deepfakes.

Current methods try to prevent generative AI models from producing harmful content by selectively removing the influence of certain training data points without completely retraining the model, a process called “machine unlearning.” The problem with this approach is that a user can still elicit the harmful content by carefully weaving adversarial text, designed to draw on that hidden data, into their prompts. Qu’s solution is to train the model to identify words that are indirectly related to potentially harmful content, then have the model ignore those words when generating images.
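
A toy sketch of the prompt-screening idea: score each prompt word by its learned association with a blocked concept and drop the strong triggers before generation. The scores and threshold below are invented, and in Qu's approach the model itself learns which words to ignore rather than consulting a fixed table:

```python
# Hypothetical sketch of prompt screening: remove words whose learned
# association with a blocked concept exceeds a threshold before the
# prompt reaches the image generator. The scores are invented examples.

# Toy "learned" association scores between words and a blocked concept.
ASSOCIATION = {"portrait": 0.2, "celebrity": 0.9, "lookalike": 0.85, "garden": 0.05}

def sanitize_prompt(prompt: str, threshold: float = 0.7) -> str:
    """Drop tokens strongly associated with blocked content."""
    kept = [w for w in prompt.split() if ASSOCIATION.get(w.lower(), 0.0) < threshold]
    return " ".join(kept)

print(sanitize_prompt("a celebrity lookalike in a garden portrait"))
# -> "a in a garden portrait"  (indirect trigger words removed)
```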


A portrait of Fraser King.
Fraser King, research fellow in climate and space sciences and engineering. Photo credit: Geoff King, used with permission.

Fraser King, a research fellow in climate and space sciences and engineering who combines surface and satellite radar data with machine learning to predict weather patterns, shared tips on how to make AI models more interpretable. He argues that interpretable models are necessary to ensure that their predictions are based on physically sound principles.

By withholding certain inputs and measuring how the model’s performance changes, King can infer which data the model relies on to make its decisions. With this approach, he learned that a machine-learning model trained to predict precipitation in Alaska was relying heavily on wind speed and direction. That model wouldn’t work with satellite data because a satellite’s position relative to the wind changes as it orbits Earth, so he is currently working to generalize the model to satellite data.
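
This withholding strategy resembles permutation importance: scramble one input at a time and watch how much the model's skill drops. The sketch below uses synthetic stand-ins for King's precipitation inputs, not his actual data or model:

```python
# Minimal sketch of the feature-withholding idea via permutation
# importance: shuffle each input column and measure the loss in skill.
# Data and feature names are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
# Synthetic inputs: [temperature, humidity, wind_speed, wind_direction]
X = rng.normal(size=(400, 4))
# Toy precipitation target that secretly depends heavily on the wind
# features (columns 2 and 3).
y = 2.0 * X[:, 2] + 1.5 * X[:, 3] + 0.3 * X[:, 0] + rng.normal(scale=0.2, size=400)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["temperature", "humidity", "wind_speed", "wind_dir"],
                     result.importances_mean):
    print(f"{name:12s} importance: {imp:.2f}")  # wind features dominate
```

A large score drop when the wind features are scrambled would flag exactly the kind of dependence King observed in his Alaska model.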