There is no Nobel Prize for mathematicians, the story goes, because of a love affair.
Alfred Nobel, the inventor of dynamite who established the prizes to spruce up his image, refused to endow a prize in mathematics because his wife was having an affair with the Swedish mathematician Gösta Mittag-Leffler. Nobel was afraid a math prize would be awarded to the mathematician-cum-Romeo, and so the mathematics community has forever been excluded from the most recognized award in all of science.
Alas, the story is not true. Nobel never married and by all accounts was quite a lonely man. But his oversight may be part of why mathematicians get so little press. That, and the fact that non-mathematicians have no clue what they’re up to.
“Most people are so frightened of the name of mathematics that they are quite ready, quite unaffectedly, to exaggerate their own mathematical stupidity,” said the English number theorist G.H. Hardy. But admit it: Whether you left math after a humiliating D in high school trigonometry or crawled away, exhausted and defeated, from a year of college calculus, you’ve always suspected that, deep down, mathematics rules the world.
As you read this, ex-physicists are probably devising ever more sophisticated ways to wager your pension fund on Wall Street, and no doubt five geniuses in a government agency that does not officially exist are developing data-mining algorithms that will calculate the likelihood your baby sister is a terrorist.
But there was little to no popular media coverage of the Aug. 20 announcement of the Fields Medals, the highest honor in mathematics. Given every four years to the best mathematicians under the age of 40, this year’s prizes were awarded at the International Congress of Mathematicians in Beijing. No doubt you didn’t even know they were getting together.
The medals went to Laurent Lafforgue of the Institut des Hautes Etudes Scientifiques, in Bures-sur-Yvette, France, and to Vladimir Voevodsky of the Institute for Advanced Study in Princeton, N.J. The 2002 Nevanlinna Prize, one of the highest honors in computer science, went to Madhu Sudan of the Massachusetts Institute of Technology.
Let us recognize that mathematicians are not like you or me. We the many can detect some beauty in the paintings of Titian, feel a certain sad hope in a Chopin sonata, recognize the grace in Frank Lloyd Wright’s Fallingwater. But, most likely, the isomorphism between a modified motivic cohomology of an algebraic variety and the modified singular cohomology of its natural topological space does little for us.
Which is, really, a shame. For there is a beauty in mathematics, which you may have glimpsed that day in first grade when it struck you how peculiar zero was: that you could add it to any other number — any number at all! — and the number would stay the same. Or maybe you’ve encountered a slick little thing called the square root of -1. There are men who have this number engraved on their tombstones.
This wonder, of course, gets beaten out of us by dull teachers, media stereotypes, the massacre of our attention span, and the manufacturers of standardized college entrance exams. But it hasn’t been beaten out of these guys. “There exist in mathematics things extremely beautiful,” said Lafforgue. “One thing that’s always astonishing is that, occasionally, one realizes that in mathematics the truth is beautiful.”
Over the millennia the tree of mathematics has branched in dozens of different directions, arching out from a base that looks like a high school transcript: geometry, algebra, analytic geometry, calculus. In some ways mathematicians have been working there ever since, extending those basic concepts to more and more sophisticated ideas, building mathematical objects (like the set of all positive integers, 1, 2, 3, …), and constructing ever more complicated beasts (like the set of all fractions, to give a trivial example). Mathematicians often find this beauty, this truth, in their efforts to unite previously disparate areas of their field, to tame the unruly beasts they have unleashed.
Lafforgue made his mark in such unification. Decades ago a young Princeton mathematician named Robert Langlands conjectured that two very different animals are intimately connected. Roughly speaking, as Charles Seife described in Science magazine, these animals are mathematical objects that can be distorted in certain ways and still retain their original shape (such as the fundamental equivalence between a rubber coffee cup and a doughnut: one can be stretched into the other, as long as you respect the hole) and objects that reveal the relations between solutions of equations.
Langlands’ conjecture, described as a “Rosetta stone” of mathematics, was formalized into the Langlands Program, a quest that has happily occupied scores of mathematicians for more than 30 years. Andrew Wiles, the Princeton mathematician who a few years ago announced a heralded proof of Fermat’s Last Theorem, established an important ingredient of the conjecture in his own work.
In general, mathematicians believed that Langlands’ conjecture was true, but proving it was extremely difficult. Parts of the proof had already garnered two Fields Medals, and in 1999 Lafforgue completed a 300-page handwritten proof of the conjecture in the case of what are called “function fields.”
This is where the writer, fearing that he is losing the reader, must bring the discussion back to earth. He’ll begin by pointing out that, the foregoing remarks notwithstanding, mathematicians are as human as you or me, even if they often have funny-looking hair or peculiar habits. The genius Paul Erdos called little children “epsilons,” which is humorous if you’re a mathematician (the Greek letter epsilon is conventionally used for a quantity that is vanishingly small) but probably irritating if you are not.
Sometimes mathematicians make mistakes. After his proof of Fermat’s Last Theorem was announced in the New York Times (“At Last, Shout of ‘Eureka!’ in Age-Old Math Mystery”) Wiles discovered a mistake in his work and presumably just about had a bird. It took him a year to fix the problem, and Fermat was put to bed at last.
In 2000 Lafforgue was awarded the Clay Research Award by the Clay Mathematics Institute in Massachusetts. Just five days later he found a mistake in his own work. Lafforgue contacted Arthur Jaffe, the Clay Institute’s president and a mathematics professor at Harvard, and offered to return his prize.
“Andrew Wiles and I convinced him that the award would be, under the circumstances, even more valuable to him,” said Jaffe. “He could travel or collaborate however he liked in order to repair his proof.”
Lafforgue said he worked day and night and fixed the flaw a few months later, proving that two very different-looking things are indeed the same. The repaired proof also gave mathematicians confidence that the Langlands Program will succeed in the other areas where work on the conjecture proceeds.
Why does Lafforgue’s proof matter to me or to you? For a moment let’s set aside the part about beauty.
In 1960 the physicist Eugene Wigner spoke of “the unreasonable effectiveness of mathematics.” Mathematics is useful, and not just in the spreadsheet that is going to be part of the report you have due in three hours. Scientists and engineers have constructed our world on it, from Newton’s calculus — which he invented to describe the laws of motion — to the quantum mechanics that describe the workings of the chip inside your personal computer.
Mathematics, amazingly enough, works; that is why it has been called the queen of the sciences. It works in the real world, and not just in the airy heights of the mathematician’s imagination. It’s an ugly kind of thing, in a way, in the minds of some pure mathematicians. “Real mathematics has no effect on war,” wrote Hardy in “A Mathematician’s Apology,” a book intended to justify his existence as a pure mathematician. “No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity.”
Hardy wrote this in 1940, five years before the nuclear bomb was built on Einstein’s fundamental ideas in relativity and well before today’s cryptography built on prime numbers. These days no one is clean. That is one thing of which we can all be sure.
Voevodsky also solved a major mathematical problem, called the Milnor conjecture. John Milnor, one of the best mathematicians of the past half century and a close friend of John Nash, the mathematician portrayed in the movie “A Beautiful Mind,” believed there was an equivalence between different ways of describing the properties of different kinds of surfaces. (This is a vast oversimplification, but I’m worried again that I’m losing you.) Voevodsky created new mathematical tools that, in 1996, enabled him to solve the problem.
Voevodsky is also a Clay Institute Prize fellow, and the institute sponsors his visits to Russia to lecture and inspire the current generation of young students there. Russia has a proud tradition of mathematicians and mathematical physicists (built in part, it has been speculated, because the country was unable to mount major efforts in the experimental sciences), a tradition that, like other areas of Russian science, has suffered since the collapse of the Soviet Union.
How do mathematicians like Voevodsky work? This is, in the end, difficult to say. Hardy spent most days at the side of a cricket field, drinking tea. The great Alexander Grothendieck, who as much as anyone else is responsible for a vision of the unity of all mathematics, spent 18 hours a day creating mathematics that has astonished and inspired mathematicians ever since. (Grothendieck won his own Fields Medal in 1966, Milnor in 1962.) “There was far more imagination in the head of Archimedes than in that of Homer,” Voltaire said, which English majors might doubt. But they would be wrong.
Your tax dollars pay for about $300 million of mathematical research each year, and we are finally going to get to something you can understand.
As you read this article, you trust that the words your computer retrieved from Salon’s Web servers are faithful to their original. But how do you know?
You know because mathematicians and engineers have thoughtfully included error-correcting codes in the computers that talk to one another across the Internet, because sometimes bits get scrambled, dropped, or mutilated. Madhu Sudan, winner of the 2002 Nevanlinna Prize, explained it to his daughter as follows: “When my 3-year-old daughter, Roshni, asked me what I do, I told her I correct errors. She asked me what is an error and what does it mean to correct them. So I wrote ‘Rothni’ on a piece of paper and asked her to circle any mistakes (without explaining what I intended to write). She circled the ‘t.’ I told her that’s what I do for a living. She understood what I did, but not why it was a big deal.”
It’s a big deal because Sudan has shown that certain codes can correct many more errors than was previously thought possible.
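To get a feel for how redundancy makes correction possible, consider the crudest scheme imaginable: say every character three times and let a majority vote settle disputes. (This toy is mine, not Sudan’s; his results concern vastly more powerful codes.) A sketch in Python:

```python
# A minimal sketch of error correction using a 3-fold repetition
# code -- an illustration of the principle only, far weaker than
# the codes Sudan actually studies.

def encode(message: str) -> str:
    """Repeat every character three times."""
    return "".join(ch * 3 for ch in message)

def decode(received: str) -> str:
    """Recover each character by majority vote over its triple."""
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        # The most common character in the triple wins, so any
        # single flipped character per triple is corrected.
        out.append(max(set(triple), key=triple.count))
    return "".join(out)

sent = encode("Roshni")              # "RRRooossshhhnnniii"
garbled = sent[:4] + "t" + sent[5:]  # one character mangled in transit
fixed = decode(garbled)              # the "t" is outvoted: "Roshni"
```

The codes that actually guard your Internet packets pack their redundancy far more cleverly, but the principle is the same: send more than you must, and errors betray themselves.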
Sudan is a theoretical computer scientist. He does not, he said, use a computer in his work. He was recognized for his breakthroughs in error-correcting codes, probabilistically checkable proofs, and the non-approximability of optimization problems.
Given a proposed proof of a mathematical statement — say, the statement that there is an infinite number of prime numbers (numbers, such as 7 or 17, evenly divisible only by 1 and themselves) — the theory of probabilistically checkable proofs recasts the proof so that its fundamental logic is encoded as a sequence of bits that can be stored in a computer. Checking only some of these bits, Sudan and others have shown, can determine with high probability whether the proof is correct. Amazingly, the number of bits one must examine can be made extremely small.
How small? Let’s just leave it as “small,” because this article must soon come to an end, and because … well, you know.
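Still, the flavor of checking a big answer with a few random probes can be conveyed by a classic trick, Freivalds’ algorithm, which is far simpler than (and predates) the probabilistically checkable proofs honored here: to verify a claimed matrix product without redoing the whole multiplication, test it against a handful of random vectors. A sketch:

```python
# In the spirit of probabilistic checking: Freivalds' trick for
# verifying that a * b really equals the claimed product c,
# without recomputing the product in full.
import random

def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(x * y for x, y in zip(row, v)) for row in m]

def probably_equals(a, b, c, trials=40):
    """Probe the claim a*b == c with random 0/1 vectors.

    A wrong c survives each probe with probability at most 1/2,
    so it slips past all 40 with probability at most 2**-40.
    """
    n = len(c[0])
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(a, mat_vec(b, x)) != mat_vec(c, x):
            return False          # caught an error, with certainty
    return True                   # correct, with high probability

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]       # the true product
bad  = [[19, 22], [43, 51]]       # one entry wrong
```

Each probe costs only a few multiplications, yet a few dozen of them make the verdict all but certain, which is the whole point: randomness buys you confidence far more cheaply than recomputation.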
Consider two sets: all towns in your state, and all states whose names end in the letter “A.” Given a finite collection of finite sets such as this, what is the largest size of a subcollection such that every two sets in the subcollection have no overlap? I forgot to ask Sudan about this particular problem, but he probably doesn’t know anyway — hey, it’s a tough problem. You might propose a solution, which could be easily checked, but in general there may be no known algorithm that will easily produce a solution from scratch.
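The gap between checking and finding shows up even in a toy version of this puzzle (theorists call the general problem set packing; the little sets below are invented for illustration). Verifying a proposed subcollection takes a quick pairwise check, while a brute-force search must wade through every one of the 2^n subcollections:

```python
# A toy set-packing search: checking a proposed answer is easy,
# finding the best one by brute force blows up exponentially.
from itertools import combinations

def is_disjoint(collection):
    """Easy to check: no element appears in two chosen sets."""
    return all(a.isdisjoint(b) for a, b in combinations(collection, 2))

def largest_packing(sets):
    """Hard to find: try subcollections from largest to smallest."""
    for r in range(len(sets), 0, -1):
        for combo in combinations(sets, r):
            if is_disjoint(combo):
                return list(combo)   # first hit is a largest packing
    return []

sets = [{1, 2}, {2, 3}, {3, 4}, {5}]
best = largest_packing(sets)
# {1,2}, {3,4} and {5} are pairwise disjoint, so the best has size 3.
```

With four sets the search is instantaneous; with a few hundred, the sun burns out first.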
What Sudan and others showed is that, for many such problems, approximating an optimal solution is just as hard as finding an optimal solution. Now, this could obviously be useful to scientists and engineers. It also has implications for a fundamental mathematical problem called P=NP (if you solve it, the Clay Institute will give you a million dollars).
Why, in the end, does all of this mathematics matter? Why have Lafforgue, Voevodsky, and Sudan been culled from the set of all mathematicians for the highest honors?
Yes, mathematics is wonderfully useful for calculating the properties of superstrings and the path of the next comet that will collide with Earth. Yada yada.
But, really, does a poem matter only because it can be read at a memorial? Isn’t a busker’s real worth the gleam in his eye when he performs? Is whatever art you may have created more important than the way you felt when you created it, or the way others felt when it was received?
“The case for my life, then,” Hardy wrote in his “Apology,” “or for that of any one else who has been a mathematician in the same sense in which I have been one, is this: that I have added something to knowledge, and helped others to add more; and that these somethings have a value which differs in degree only, and not in kind, from that of the creations of the great mathematicians, or of any other artists, great or small, who have left some kind of memorial behind them.”
This story has been corrected.