A Universal Language for Reasoning
Leibniz's dream and the search for a universal language of thoughts
Probably the greatest idea in all of Computer Science is the definition of “computation”. This foundational step enabled everything else in our field to fall into place. It started with a dream of one of the greatest mathematicians, philosophers, and scientists of all time, and very likely the first computer scientist, Gottfried Leibniz, and concluded with the seminal work of Alan Turing, one of the most important mathematicians of the 20th century.
This is Origins of CS, a series exploring the foundational ideas in Computer Science, the long story of humanity’s search for the ultimate machine, a machine to solve all problems; the kind of machine we today call a “computer”.
Humanity’s ingenuity extends far back into the foggy ages of the agricultural revolution, and probably even farther. Since the dawn of Homo sapiens, our species has been interested in predicting the future. Tracking the seasons, planning crops, estimating an enemy’s forces, building cities: all of these tasks require fairly complex computations that often need to be reasonably accurate to be helpful.
For this reason, ancient civilizations developed all sorts of computational devices to keep track of complex phenomena such as planetary motion. The most famous case is the Antikythera mechanism, a device more than two thousand years old that was used to predict eclipses and is regarded as the first known analog computer. The zairja, from around 1000 CE, is another example: a device used to generate astrological predictions, much like a simplified language model (only half-joking here).
This is a trend we see developing well into the Middle Ages and the Renaissance: constructing machines that perform some sort of calculation, whose final realization, before the arrival of electrical computers, was Charles Babbage’s Difference and Analytical Engines. But this is a story for another day.
In this post, I want to begin exploring the ideas that led to the notion of an “algorithm”, or computational process, regardless of its physical realization. This is the first entry in Origins of CS.
Note: A previous version of this post was read by my very first few subscribers. I’m updating and resharing this post today with you, as part of a new series I’m starting.
The modern history of Computer Science can be said to begin with Gottfried Leibniz, one of the most famous mathematicians of all time. Leibniz worked on so many fronts that it is hard to overestimate his impact. He made major contributions to math, philosophy, law, history, ethics, and politics. He is probably best known for co-inventing, or co-discovering, calculus along with Isaac Newton. Most of our modern calculus notation, including the integral and differential symbols, is heavily inspired by Leibniz’s original notation. Leibniz is widely regarded as the last universal genius, but in this article, I want to focus on his larger dream about what mathematics could be.
Leibniz was amazed by how much notation could simplify a complex mathematical problem. Take a look at the following example:
“Bob and Alice are siblings. Bob is 5 years older than Alice. Two years ago, his age was twice hers. How old are they?”
Before algebra was invented, the only way to work through this problem was to think hard, or maybe try a few lucky guesses. But with algebraic notation, you just write some equations like the following and then apply a few straightforward rules to obtain the answer.
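Writing $a$ for Alice’s age and $b$ for Bob’s (the letters are an arbitrary choice), the two conditions become:

$$b = a + 5$$

$$b - 2 = 2(a - 2)$$

Substituting the first equation into the second gives $a + 3 = 2a - 4$, so $a = 7$ and $b = 12$: Alice is 7 and Bob is 12 (and indeed, two years ago, 10 was twice 5).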
Any high-school math student can solve this kind of problem today, and most don’t really have the slightest idea of what they’re doing. They just apply the rules, and voila, the answer comes out. The point is not about this problem in particular but about the fact that using the right notation —algebraic notation, in this case— and applying a set of prescribed rules —an algorithm— you can just plug in some data, and the math seems to work by itself, pretty much guaranteeing a correct answer.
In cases like this, you are the computer, following a set of instructions devised by some smart “programmers”. Executing them requires no creativity or intelligence, only the care to apply every step correctly.
Leibniz saw this and thought: “What if we could devise a language, like algebra or calculus, that manipulates not known and unknown magnitudes, but known and unknown truth statements in general?”
In his dream, you would write equations relating known truths to statements whose truth is unknown, and by the pure syntactic manipulation of symbols, you could arrive at the truth of those statements.
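To make “pure syntactic manipulation” concrete, here is a minimal modern sketch in Python (my own toy construction, not anything Leibniz actually proposed): a tiny engine that derives new “truths” by mechanically applying a single inference rule, modus ponens, to meaningless string symbols.

```python
# A toy "algebra of thoughts": derive new truths by blindly applying one
# syntactic rule (modus ponens) to strings, with no understanding of
# what the symbols mean.

facts = {"socrates_is_a_man"}
rules = [
    # (premise, conclusion), read as "if premise then conclusion"
    ("socrates_is_a_man", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

# Keep applying modus ponens (from P and "if P then Q", conclude Q)
# until no new facts appear.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die']
```

The point of the toy is that nothing in the loop “understands” Socrates or mortality; the conclusions fall out of shuffling symbols according to fixed rules, which is exactly the property Leibniz wanted for reasoning at large.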
Leibniz called it the Characteristica Universalis. He imagined it as a universal language for expressing human knowledge, independent of any particular language or cultural constraint, and applicable to all areas of human thought. If this language existed, it would be just a matter of building a large physical device (a giant windmill, he might have imagined), cramming into it all of the current human knowledge, and letting it run; it would output new theorems about math, philosophy, law, ethics, and so on. In short, Leibniz was asking for an algebra of thoughts, which we today call logic.
He wasn’t the first to consider the possibility of automating reasoning, though. The search for a language that can reliably produce true statements is a major thread in Western philosophy, starting with Aristotle's syllogisms and continuing through the works of Descartes, Newton, Kant, and basically every rationalist who has ever lived. Around four centuries before Leibniz, the philosopher Ramon Llull had a similar but narrower idea, which he dubbed the Ars Magna: a logical system devised to prove statements about, among other things, God and Creation.
However, Leibniz was the first to go so far as to suggest that all human thought could be systematized in a mathematical language and, even further, to dare to dream of building a machine that could apply a set of rules and derive new knowledge automatically. For this reason, he is widely regarded as the first computer scientist, in an age when Computer Science wasn’t even an idea.
Leibniz's dream echoes some of the most fundamental ideas in modern Computer Science. He imagined, for example, that prime numbers could play a major part in formalizing thought, an idea that eerily anticipates Gödel numbering. But what I find most resonant with modern Computer Science is the equivalence between a formal language and a computational device, which ultimately became the central notion of computability theory, a topic we will explore in future issues.
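To see why primes are such a natural fit: Gödel’s trick encodes a sequence of symbols as a single integer by using primes as positional bases, and unique factorization guarantees the sequence can be recovered from that one number. Here is a minimal sketch in Python (a toy version of the idea, with made-up symbol codes, not Gödel’s exact scheme):

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def godel_number(codes):
    """Encode a sequence of symbol codes as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for i, code in enumerate(codes, start=1):
        n *= nth_prime(i) ** code
    return n

# The (made-up) code sequence [3, 1, 2] becomes 2^3 * 3^1 * 5^2 = 600.
# Factoring 600 back into primes uniquely recovers [3, 1, 2].
print(godel_number([3, 1, 2]))  # 600
```

With an encoding like this, statements about statements become statements about numbers, which is the bridge between “thoughts” and arithmetic that Leibniz seems to have glimpsed.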
Unfortunately, most of Leibniz's work in this regard remained unpublished until the beginning of the 20th century, by which time most of the developments in logic needed to fulfill his dream were well on their way, pioneered by logicians like Gottlob Frege, Georg Cantor, and Bertrand Russell, and culminating in Alan Turing's realization of a computational model.
But that’s a story for another day :)