Leading into the Future

The University of Minnesota has been at the forefront of computing for decades. Now it's time to lead in AI.

The next great technological revolution may be happening right in front of our eyes.

Seemingly overnight, artificial intelligence (AI) has gone from a distant idea to something accessible to everyone with an Internet connection.

AI has the potential to be one of the most transformative revolutions in human history. Its ability to analyze vast amounts of data, identify patterns, and make predictions has already begun to reshape industries such as healthcare, finance, transportation, and entertainment. 

The University of Minnesota has often been a trailblazer in computing technology, from pioneering supercomputing to shaping networking and software development.

Remaining at the forefront of AI research and education ensures Minnesota's continued impact on technological progress. In an era where AI redefines industries and human interaction, the University's expertise will illuminate the path forward, guiding the responsible integration of AI for the betterment of society.

‘The Silicon Valley before Silicon Valley’

The University’s and the state of Minnesota’s importance in computing dates back to the mid-1940s, when Engineering Research Associates, one of the first digital computer companies, was founded in St. Paul. After a series of mergers and acquisitions, the company became part of Sperry Rand, then the second-largest computing company, behind only IBM.

1950s

In 1957, a group of engineers, including Seymour Cray, founded a new company, Control Data Corporation. Cray earned a bachelor’s degree in electrical engineering from the U of M in 1949 and a master of science in applied mathematics in 1951.

1960s

At Control Data Corporation, Cray and others built what is widely regarded as the first supercomputer, the CDC 6600. At its release in 1964, the CDC 6600 outperformed the industry’s previous record holder by a factor of three.

1970s

Cray left Control Data and formed Cray Research in the early 1970s. There, he developed the Cray-1, the fastest supercomputer in the world at its debut. For this work, Cray is often referred to as “the father of supercomputing.”

1980s

“Minnesota was kind of Silicon Valley before Silicon Valley from 1946 into the 1980s,” says Jeffrey Yost, director of the Charles Babbage Institute, which studies the history of computers and technology and is housed in Andersen Library. “The University’s engineering education infrastructure, the trained workforce, the venture capital infrastructure here early on all extends in part from having Control Data here earlier.”

Research into computing and the development of these technologies has taken place on campus for decades. In 1981, the U of M became the first U.S. university to own a supercomputer when it acquired a Cray-1.

1990s

This work on cutting-edge technology continued into the following decades: in 1991, a U of M team led by Mark McCahill developed Gopher, an internet protocol that laid the groundwork for the World Wide Web we use today. Before Gopher, users had to retrieve documents and programs from the Internet one by one from servers scattered around the world. Gopher brought those resources together in a user-friendly, text-based menu.
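
Gopher’s design was strikingly simple, which is part of why it spread so quickly: a client opens a TCP connection to port 70, sends a selector string terminated by a carriage return and line feed, and reads back either a document or a tab-delimited menu (RFC 1436). As a rough illustration, here is a minimal Gopher client sketched in Python; it assumes a reachable public server, and gopher.floodgap.com is one long-running example, though its availability is not guaranteed.

    import socket

    def gopher_fetch(host: str, selector: str = "", port: int = 70) -> str:
        """Fetch one Gopher resource (RFC 1436): send the selector plus
        CRLF over TCP port 70, then read until the server disconnects."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    if __name__ == "__main__":
        # An empty selector requests the server's root menu. Each menu line
        # is tab-separated: (type character + display text), selector, host, port.
        for line in gopher_fetch("gopher.floodgap.com").splitlines():
            fields = line.split("\t")
            if len(fields) >= 4 and fields[0]:
                print(fields[0][0], fields[0][1:])

That single exchange, one request line and one response, is essentially the whole protocol; the hierarchical menus did the organizing that users previously had to do by hand.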

“Gopher was extremely important,” Yost says. “That was the way people found information that was useful to them on other computers. It was probably the most prominent way for people to locate information on servers until around 1994.”

Moving into the next technological revolution

Today, researchers from across the University are exploring AI and its possibilities. Their cutting-edge work has the potential to shape the world in the years ahead.

“There is not a single discipline of study that is not impacted by this technology,” says Vipin Kumar, a Department of Computer Science and Engineering professor and one of the leading researchers in the field of AI. “Whether it’s law, journalism, social sciences, material science, and so forth. You can think of any field, and you will see AI being used.”

At the same time, researchers must grapple with AI's potential negative impacts. With deepfakes a worry ahead of the 2024 U.S. presidential election, the technology's seemingly limitless possibilities can take a troubling turn if the work is not done ethically.

“I never thought I’d see the day where AI got this far,” says College of Veterinary Medicine Professor Jaime Modiano, who is leading research into how AI can determine a dog’s risk of developing cancer. “To be perfectly honest, I figured this was stuff from a sci-fi movie. But just like any tool, and AI is a tool, it can be used for great things or it can be used for terrible things. Take, for instance, a hammer. You can use a hammer to build a house. Or you can use the hammer to beat someone up.”

Because of the rich history of computing breakthroughs in and around the University, it was selected to house the Charles Babbage Institute. Its mission is to facilitate, foster, and conduct research to advance the understanding of computing, information, and culture.

Yost, the institute’s director and a research professor in the History of Science, Technology, and Medicine Graduate Program, was recently invited to Capitol Hill with three other leading historians of computing to help inform AI policy. One of Yost’s concerns is that major tech companies will be driven by profits rather than social good as they develop the future of AI.

“There is a lot of hype about large language models, such as ChatGPT, now,” Yost says. “They train on data without contexts. This presents substantial social risks, and we need social scientists and ethicists, as much as computer scientists, to help advise and get proper regulations in place to better assure these technologies can lead to social benefits and not social harms.”

In the classroom, faculty members are teaching students to use the technology ethically, but they admit there is no way of knowing what it might look like down the line. For instance, if companies move toward having AI write their code, or their prose, and there's an error, who fixes it if the business no longer employs a computer scientist or an editor who understands the language?

“You still need human intervention to correct things and to come up with architectures,” says Maria Gini, a College of Science and Engineering professor. “So how do we train this next generation with so many unknowns?”

Where AI may be in the next decade or two is impossible to predict. But what’s clear is that its future will be driven by the research being done right now at places such as the U of M.

“If you’re not a leader in this field, you will lose relevance pretty soon,” Kumar says. “You need to be the ones pushing the boundaries and frontiers of knowledge. If you’re not, it’s going to be hard to call yourself a major research institution. So it’s absolutely critical for us to be at the top.”