For almost 50 years, Dr. Klaus Truemper has worked at the University of Texas at Dallas: the first two decades in Operations Research and Mathematics, and then in Artificial Intelligence (AI). At present, he is Professor Emeritus of Computer Science. He recently published his AI insights and findings in the book Artificial Intelligence: Why AI Projects Succeed or Fail. He is the author of additional books on artificial intelligence, brain science, mathematics, and computer science, and related software.
What inspired you to write this book?
AI research has an extraordinary failure rate of projects when compared with other sciences. Since 1985—when I started working in AI—I have wondered why this is so.
There were some explanations early on in my research in AI. For example, Natural Language Processing (NLP) often relied on arguments that, from a philosophical standpoint, assumed key results of the philosopher Ludwig Wittgenstein's book Tractatus Logico-Philosophicus. Yet Wittgenstein himself realized in the late 1920s that these results were fatally flawed. Thus, these NLP projects were a priori guaranteed to fail.
However, I never managed to explain the other failed AI projects, for example, the extraordinary failure of Lisp and the Texas Instruments Lisp computer called Explorer or the total collapse of expert systems research. That is, I could explain what went wrong but could not show why AI researchers made these blunders in the first place.
Many years ago, I became aware of the new results of neuroscience, popularly known as brain science; I use the technical term since much more than the brain is involved. It all began with a talk given by Prof. Aage Moeller in the lecture series honoring one of the early professors of UTD, Prof. Polykarp Kusch. Prof. Moeller’s talk triggered my interest: Could these results help us understand the world? That thought was the seed for a research effort that so far has resulted in three books. The book on AI is the latest one.
Dr. Truemper, would you mind giving a brief description / synopsis of your book?
Artificial Intelligence (AI) is a strange area of science: Some projects succeed beyond all expectations, while others fail miserably. How is this possible? More importantly, how can one avoid failure? The book answers these two questions.
Some failures are caused by erroneous mathematics or the use of inappropriate data. But others defy such a simple explanation. The book uses modern neuroscience and philosophy to obtain answers for nonobvious cases.
Consider two perfectly reasonable approaches for AI research:
By watching ourselves solve a given problem—for example, how we drive a car—we infer how a computer can produce the same result.
By thinking about the world, we infer how the world is structured. We assume that this insight into the structure of the world is correct and hence postulate that when a computer looks at the world the same way, it will function as we do.
Neuroscience can be used to show that these two seemingly reasonable approaches for the construction of AI systems are virtually guaranteed to fail.
An amazing conclusion, isn’t it? The book works out the detailed arguments, including a discussion of various example cases.
If you could interview your younger self from 40 years ago, what would you advise yourself with respect to AI research?
Modern neuroscience has existed only for 30 years. So 40 years ago, the results employed in the book were unknown, and I couldn’t have advised myself then about the right way to look at AI.
An interesting situation, isn’t it? It demonstrates that we live in a period of upheaval of a magnitude never seen before in the history of mankind.
I should add, though, that I had the good fortune to base all of my AI research on logic and use mathematical logic to construct AI systems. Hence, that work does not suffer from the shortcomings mentioned above, and we produced some really good results. For example, PhD student J. Straach and I created a self-learning expert system for a complex problem. After several dozen uses, the system had learned enough, just by itself, that it had become as capable as the best expert.
Looking into the future, what role do you see for AI? And what could a timeline look like?
We are at the threshold of incredible advances. OpenAI’s system GPT-4, which appeared just a few days ago, is one example of amazing AI systems. Google is a master producer of impressive AI systems for a number of applications.
Timeline: A flood of new and exciting AI systems is building right now.
Hype: The only thing that can hurt AI is the current wave of hype using "AI" as a buzzword. For example, the hype has proved very damaging for the development of self-driving cars, which so far have maimed and even killed people. On the other hand, there are already incredible results, and we hope they overcome the negative impact of the hyperbolic claims.
Which fields do you think could benefit most from AI, and conversely, which field might benefit least?
My definition of an AI problem is any case that demands highly intelligent subconscious neural processes when the task is performed by humans. Hence, any time that condition is satisfied by a problem, we have an opportunity to design an AI system. This is a vast field indeed, since it spans virtually every human endeavor.
There is concern that extensive implementation of AI will put humans out of work; what would your response be to those concerns?
That is a mistaken concern. Consider that some centuries ago, almost all of humanity was involved in the production of food, clothing, elementary equipment, and housing. Before the current wave of AI, the percentage of people involved in those tasks had already become very small. This has freed a large percentage of the population to do something else. For example, we now have a huge number of scientists looking at virtually every part of our world and trying to figure things out. Armed with that knowledge, we can create a better world.
This started long before AI produced anything useful, which, in my estimate, began to happen only 30 years ago. AI will only accelerate the conversion process by which people no longer produce for day-to-day needs but work for a better world.
If you could please provide the reader with two or three examples of successful application of AI – in other words, how has AI positively affected socio-economic outcomes?
The Google search engine answers 8.5 billion queries each day. It makes an incredible economic contribution to every aspect of human life around the globe.
DeepL and Google Translate unite people by allowing them to communicate in their own languages. For example, in the European Union, translators are a critical component of information exchange among the 27 member countries.
AlphaFold, created by DeepMind, accurately predicts 3D models of protein structures at a minute fraction of the cost of prior methods, which relied on expensive and time-consuming laboratory tests.
GPT-4 has opened up new ways to produce text and computer code.
Is AI currently hardware constrained or is the current hardware already up to the task?
There is a major distinction in computing equipment: GPU versus CPU. The latter is the old-fashioned central processing unit used in multicore machines. It sufficed for traditional AI systems. GPUs (graphics processing units) with terrific performance are needed to construct and evaluate huge neural nets, which are currently the fastest-growing segment of AI computation. The faster GPUs become, the more complex the neural nets that can be constructed and evaluated. At this time, there is no limit in sight where we would say that GPUs are fast enough.
Some years ago, I speculated that the 21st century will be called the century of computation. We see the first events supporting that guess.
What role do you hope to play with respect to future developments of AI – are you currently actively involved in frontline research?
Researchers of a large AI lab in the UK who use the paradigm of the so-called free energy principle want to interact with me, since the results of the book intrigue them. Beyond that, I am moving on to a new question: How can neuroscience help us improve the human condition? It has complex answers. This will be the topic of the next book.
ABOUT THE UT DALLAS COMPUTER SCIENCE DEPARTMENT
The UT Dallas Computer Science program is one of the largest Computer Science departments in the United States, with over 4,000 bachelor's-degree students, more than 1,010 master's students, 140 Ph.D. students, 52 tenure-track faculty members, and 42 full-time senior lecturers, as of Fall 2022. With the University of Texas at Dallas' unique history of starting as a graduate institution first, the CS Department is built on a legacy of valuing innovative research and providing advanced training for software engineers and computer scientists.