By Bibi Hanselman

a^{2} + b^{2} = c^{2}. Whether it’s a distant high school memory or it has found its way into your physics homework somehow, there’s no denying that the Pythagorean theorem — timeless and profound, seared into all our memories — is practically a cultural touchstone in addition to a mathematical one.

On Monday, February 12, the Math Colloquium series for this semester kicked off with Amherst’s own Professor David Zureick-Brown. In all its simple and elegant beauty, the famous theorem shone on the title slide, accompanied by a digestible visual proof. Zureick-Brown explained to the audience that proofs like these for the Pythagorean theorem are plentiful throughout history — so many, in fact, that even President James Garfield, when he was a House representative, added to the mix in his free time (unimaginable in today’s Congress…).

But this clearly wasn’t what Zureick-Brown would spend his precious 50 minutes discussing. His charming introduction merely affirmed the obvious: the Pythagorean theorem is as fundamental to mathematics as you can get. But that makes it easy to take for granted. Change around those exponents a little bit, and you’re suddenly left with a Pandora’s box of questions that have puzzled, and continue to baffle, the world’s greatest mathematicians.

**Enter Fermat’s Last Theorem**

If you reflect on the Pythagorean theorem for a moment, you’ll see that some numbers you can plug in for a, b, and c are nicer than others. Take a right triangle with leg lengths of 1 — then the hypotenuse will come out as the square root of 2, which is not the prettiest number. But what about leg lengths of 3 and 4? Now the hypotenuse will be the nice, whole number 5. This solution to the equation of the Pythagorean theorem, written as (3, 4, 5), is called a Pythagorean triple, as it is composed solely of positive integers. These Pythagorean triples, though pleasant to look at, turn out to be not so special: we have known since antiquity that an infinite number of them exist.
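For the curious, a tiny brute-force search (a sketch of my own, not from the talk) turns these triples up immediately:

```python
# Brute-force search for Pythagorean triples a^2 + b^2 = c^2
# with a <= b and hypotenuse c up to a small bound.
def pythagorean_triples(max_c):
    triples = []
    for c in range(1, max_c + 1):
        for a in range(1, c):
            for b in range(a, c):
                if a * a + b * b == c * c:
                    triples.append((a, b, c))
    return triples

print(pythagorean_triples(15))
# (3, 4, 5), (6, 8, 10), (5, 12, 13), and (9, 12, 15) all appear
```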

Now, let’s shake things up a little. We can increase the exponent of each variable by one, yielding a^{3} + b^{3} = c^{3}. Are there any positive integer solutions to this equation? I’ll spare you the effort: there aren’t any. Even stranger, as we continue to increase the exponents in the equation — that is, if we view the Pythagorean theorem more generally as a^{n} + b^{n} = c^{n} for some n greater than 2 — there are infinitely many real solutions, but we will never turn up any nice positive integer solutions like our Pythagorean triples.

We can continue guessing and checking, but there are an infinite number of cases to go through. Is it really true that there are no solutions *at all* past this point? Given its relation to the easily understood Pythagorean theorem, it seems like this observation wouldn’t be so difficult to rigorously prove. Yet, when one of the world’s preeminent mathematicians of the 17th century, Pierre de Fermat, stumbled upon this deceptively simple conjecture, he struggled to provide a proof, and eventually died without one. Thus, the 300-year struggle to finish what he started and prove Fermat’s last theorem, as it’s now called, was underway. Zureick-Brown explained that intermediate results had been uncovered along the way — importantly, it was shown that proving the statement for *prime number* choices of our exponent, n (plus the case n = 4, which Fermat himself settled), would suffice to prove it for all n, and individual primes fell one by one. Still, the ultimate goal of extending this truth to *all* possible integer values of n remained elusive for even the world’s brightest minds.

The quest continued until 1995, when British mathematician Andrew Wiles published a dizzying yet extraordinarily impressive proof of Fermat’s conjecture, firmly placing it within theorem territory. One of the most vicious beasts in mathematics was slain at last, and mathematicians around the world rejoiced. Wiles’ proof was so marvelous and groundbreaking that Fermat’s last theorem even began to trickle into pop culture. Notably, in an episode of *The Simpsons*, Homer Simpson appeared to refute the theorem by scribbling down a supposed counterexample on a chalkboard: 3987^{12} + 4365^{12} = 4472^{12}. The two sides of Homer’s equation agree in their first several significant digits — more precision than an ordinary calculator can display — so for a brief time, it hilariously seemed plausible. Exact arithmetic, however, easily shows that Homer’s counterexample was tongue-in-cheek, and his equation is simply false. Fermat’s last theorem lives on.
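With exact integer arithmetic (a quick check of my own), the chalkboard equation as it appears in the episode, 3987^{12} + 4365^{12} = 4472^{12}, falls apart — though only barely:

```python
# Homer's near-miss: the two sides agree to roughly ten significant
# digits, which is why a pocket calculator can't tell them apart.
lhs = 3987**12 + 4365**12
rhs = 4472**12
print(lhs == rhs)              # False
print(abs(lhs - rhs) / rhs)    # tiny, but not zero
```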

But is the mathematical community completely satisfied now? Zureick-Brown emphasized that mathematicians have a uniquely unfortunate tendency to push the previous work of others, no matter how settled, to its limits. As it turns out, the equation in Fermat’s last theorem is just one of countless polynomial equations with only integer coefficients, which are called Diophantine equations. Typically, mathematicians want to solve for, or prove the nonexistence of, the integer solutions to such polynomials, as we saw with Fermat’s last theorem. And as one of Zureick-Brown’s slides explicitly stated: “Fact: Solving Diophantine equations is hard.” With that, Zureick-Brown dove into the quirky relatives of Fermat’s last theorem and how current mathematicians are making sense of them.

**But What about Computers?**

Unlike Fermat, we live in the computer age, and now it’s possible to crunch numbers faster than anyone even several decades ago could have imagined. If we’d like to find quick solutions to these Diophantine equations, it seems reasonable for a computer to run through all possible values up to a sufficiently large limit. So why can’t we simply brute force these problems?

The answer is not that mathematicians feel intellectually threatened when computers hold their hands — even the most powerful supercomputers today couldn’t do them any favors. In one example of a set of Diophantine equations, Zureick-Brown provided a calculation of the total time it would take a supercomputer to check through all cases of possible integer solutions below a certain limit. For all choices of integers less than a million, it would take a minute — no problem. Going beyond that to check choices of integers with an upper limit of 10^{264} (a 1 followed by 264 zeroes!), which is often necessary to confirm the existence of any solutions, would suddenly take 10^{252} *years*. These astonishingly large numbers can be hard to wrap one’s head around, so Zureick-Brown offered a simple comparison. The universe is expected to die in a process known as the heat death in 10^{100} years… so the supercomputer’s calculations would outlast the lifespan of the universe 10^{152} times over!
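The arithmetic behind those numbers is easy to reproduce (a back-of-the-envelope sketch of my own, taking the talk’s figures at face value: a million cases cleared per minute):

```python
import math

cases = 10**264                      # cases to check
cases_per_minute = 10**6             # clears a million cases in one minute
minutes_per_year = 60 * 24 * 365     # about 5.3 * 10**5

years = cases // cases_per_minute // minutes_per_year
print(round(math.log10(years)))      # 252 — about 10**252 years
```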

A seemingly clever solution, then, is to use multiple computers running the calculations at once to reduce the overall computation time. But this hardly makes a dent: even using a *billion* computers would only cut the total time down to 10^{243} years. It’s a billion times lower but doesn’t even come close to matching the lifespan of the universe, let alone our own lifetimes.

One last resort could be developing a general algorithm to solve Diophantine equations, or more precisely, determine the existence of any solutions as a yes/no question, which wouldn’t require brute force but rather a quick inspection of the equation’s nature. Sadly, that too is out of the question. In one of the first theorems he covered, Zureick-Brown presented Hilbert’s tenth problem, an articulation of this very question, which was finally resolved in 1970, when Yuri Matiyasevich, building on work by Martin Davis, Hilary Putnam, and Julia Robinson, proved that no such algorithm can exist. With no easy workarounds, mathematicians must grapple with sets of Diophantine equations on a case-by-case basis without the assistance of computers (most of the time, as we will see).

**A Taste of Arithmetic Geometry**

Among the hundreds of proofs of the Pythagorean theorem, some are clearly more elegant than others. As seen above, it can be proven entirely without words in a nice geometric argument. It turns out that one can also derive a formula that produces Pythagorean triples using only the geometry of the circle. In the diagram below, a line is drawn through the point (-1, 0) on the circle; it crosses the y-axis at (0, t), where t is arbitrarily chosen, and meets the circle again at a second point (x, y), which depends on our choice of t. Expressing x and y in terms of t then surprisingly yields fractions whose numerators correspond to *two* of our three integers in a Pythagorean triple, while their common denominator supplies the third. All of it is governed by the slope of our line, which is simply t.
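Carrying the construction out concretely (a sketch assuming the standard parametrization of the unit circle): writing the slope as t = p/q puts the second intersection point at x = (q² – p²)/(q² + p²), y = 2pq/(q² + p²), and clearing the common denominator hands us a Pythagorean triple.

```python
# The line through (-1, 0) with slope t = p/q meets the unit circle
# again at x = (q*q - p*p)/(q*q + p*p), y = 2*p*q/(q*q + p*p);
# the two numerators and the shared denominator form a Pythagorean triple.
def triple_from_slope(p, q):
    return (q*q - p*p, 2*p*q, q*q + p*p)

for p, q in [(1, 2), (2, 3), (1, 3)]:
    a, b, c = triple_from_slope(p, q)
    assert a*a + b*b == c*c
    print((a, b, c))    # (3, 4, 5), (5, 12, 13), (8, 6, 10)
```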

How does this even relate to Zureick-Brown’s discussion of Diophantine equations? Recall that Pythagorean triples are just sets of integers that satisfy the familiar equation a^{2} + b^{2} = c^{2}, which is clearly a Diophantine equation — in other words, we employed geometry to solve a Diophantine equation! This subtle intersection of number theory and geometry is so useful that it has grown into an entire field of active research in mathematics known as *arithmetic geometry*.

In a slightly less familiar example, Zureick-Brown demonstrated the power of arithmetic geometry, where geometric insight allows mathematicians to quickly learn about solution sets of certain Diophantine equations. The equation y^{2} = (x^{2} – 1)(x^{2} – 2)(x^{2} – 3) looks like an ordinary, potentially confusing equation at first glance, but as anyone who has gone through high school math knows, we can learn more about it by plotting it. Simply plotting its real solutions in the xy-plane gives a strange curve; however, expanding our possibilities to the complex domain (that is, taking complex numbers into account) turns the solution set into a *surface*, which can be visualized in three dimensions. This reveals that our 2D plot of real solutions is merely a cross-section — a slice — of a larger surface of genus 2. In layman’s terms, that’s a two-holed donut.

Specifically, the genus, or number of holes, of the surface given by a Diophantine equation is an incredibly helpful nugget of information in our search for integer solutions. According to a conjecture proved by Gerd Faltings in 1983, when the genus of a curve is greater than 1, there are only a finite number of *rational* solutions (solutions that can be expressed as fractions) to the corresponding equation. Translating that again: multi-holed donuts can’t have infinitely many rational solutions. Since an integer is just a certain type of rational number, that tells us that these multi-holed donuts will not have infinitely many *integer* solutions, unlike some other Diophantine equations we have examined.

**Sums of Cubes: Adding Another Dimension**

The Pythagorean theorem and Pythagorean triples tell us that it’s possible to write some integers as the sum of two perfect squares. By an extension of the imagination, one might surmise that some integers can be expressed as the sum of three cubes. That means an integer n can be written in the form a^{3} + b^{3} + c^{3}, where a, b, and c are other integers (possibly negative). It’s another Diophantine equation that, for certain choices of our integer n, can prove quite bothersome.

Many integers, like 6, can be written as sums that are fairly easy to find from scratch: (-1)^{3} + (-1)^{3} + 2^{3}. But even considering its neighbor, 5, runs mathematicians into trouble. There is actually no way to express 5 as a sum of three cubes, but luckily, there’s a theorem that accounts for this. According to the theorem, a number n *can’t* be expressed as a sum of three cubes if n mod 9 is 4 or 5 (that is to say that dividing n by 9 leaves a remainder of 4 or 5). The reason is that every perfect cube leaves a remainder of 0, 1, or 8 when divided by 9, and no three such remainders can add up to 4 or 5 mod 9. That means not only is 5 ruled out, but so are 4, 13, 14, 22, 23, and so on.
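That obstruction takes only a few lines to verify (my own check, not code from the talk):

```python
# Cubes mod 9 only ever leave remainders 0, 1, or 8 ...
cube_residues = {(k**3) % 9 for k in range(9)}
print(cube_residues)        # {0, 1, 8}

# ... and no three of those remainders can combine to give 4 or 5 mod 9.
reachable = {(a + b + c) % 9 for a in cube_residues
             for b in cube_residues for c in cube_residues}
print(sorted(reachable))    # [0, 1, 2, 3, 6, 7, 8]
```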

That doesn’t mean that all the other integers have straightforward solutions. Just consider 33 — it is clearly not one of these forbidden numbers. Nonetheless, its sum of three cubes remained a total mystery and one of the great unanswered questions in number theory until 2019, when the mathematician Andrew Booker stumbled across a ridiculous solution: 33 = 8866128975287528^{3} + (-8778405442862239)^{3} + (-2736111468807040)^{3}. Other seemingly harmless numbers, like 42, feature mind-bogglingly large integers when written as the sum of three cubes — and those are the *smallest possible solutions* for those numbers!
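Booker’s monstrous solution checks out instantly with exact integer arithmetic (my own verification):

```python
# Verify Andrew Booker's 2019 representation of 33 as a sum of three cubes.
a, b, c = 8866128975287528, -8778405442862239, -2736111468807040
print(a**3 + b**3 + c**3)    # 33
```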

This is one of the few problems in which brute force from a computer *can* play a role, albeit a limited one. Instead of having to check every case individually (that is, all the combinations of our integers a, b, and c), there is a neat trick by which one can drastically reduce the number of checks, varying only one select integer with each trial. While computers have helped with these cases, sums of cubes for other elusive numbers, like 114, remain unsolved. Given the enormity of the solutions we have already seen, it’s not unreasonable to guess that the solution for a number like 114 could involve integers of extremely high orders of magnitude — existing around the threshold where the computation time for that many cases surpasses the universe’s lifetime. So far, no one has found sums of cubes for those numbers, but it’s far from a done deal. Solutions in these cases may well exist beyond the limits of our current computers, and there is still no theorem like Fermat’s that outright denies their existence.
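For small targets, even the naive search works (a toy sketch of my own — real searches like Booker’s rely on far cleverer reductions than this):

```python
# Naive search for n = a^3 + b^3 + c^3 with a <= b <= c in [-bound, bound].
def sum_of_three_cubes(n, bound):
    for a in range(-bound, bound + 1):
        for b in range(a, bound + 1):
            for c in range(b, bound + 1):
                if a**3 + b**3 + c**3 == n:
                    return (a, b, c)
    return None    # nothing found within this bound

print(sum_of_three_cubes(6, 5))     # (-1, -1, 2)
print(sum_of_three_cubes(5, 20))    # None — ruled out by the mod-9 test
```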

**Getting Specific About Generalized Fermat Equations**

Remember that Fermat’s last theorem rejected the existence of positive integer solutions to a Diophantine equation of this specific form: a^{n} + b^{n} = c^{n}, where n > 2. An equation in this form is called a *Fermat equation* due to its role in the fabled theorem. But do all the exponents really have to be the same? Can we determine the existence of integer solutions for an equation like a^{2} + b^{3} = c^{4}, where the exponents differ from one another? The final species of Diophantine equations Zureick-Brown covered in his talk entailed these simple modifications, or generalizations, of the Fermat equations.

Disentangling specific cases such as a^{2} + b^{3} = c^{4} is usually a cumbersome process, and some nearly match the stature of Fermat’s last theorem itself in the difficulty they pose. Fortunately, much like the two-holed donut scenario, there’s a handy theorem that can give mathematicians some, but not complete, information about the solutions of a certain generalized Fermat equation. Instead of using something geometrical, like a genus, it involves the computation of a quantity 𝛘 (the Greek letter *chi*). 𝛘 is equal to (1/p) + (1/q) + (1/r) – 1, where p, q, and r are the three exponents in the equation, and this value is typically the first clue mathematicians uncover: when 𝛘 is negative, only finitely many solutions without a common factor can exist. After this step, our mathematicians-turned-detectives are left with only their own brainpower and scattered resources from their fellow mathematicians, such as techniques in past proofs, to find solutions.
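The computation itself is one line (a sketch of my own; the exponents (2, 3, 10) are the ones from Zureick-Brown’s own result, discussed below):

```python
from fractions import Fraction

# chi = 1/p + 1/q + 1/r - 1 for a generalized Fermat equation
# a^p + b^q = c^r; a negative value signals finitely many solutions.
def chi(p, q, r):
    return Fraction(1, p) + Fraction(1, q) + Fraction(1, r) - 1

print(chi(2, 3, 10))    # -1/15  (negative)
print(chi(2, 3, 6))     # 0      (the borderline case)
print(chi(2, 2, 2))     # 1/2    (positive)
```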

Thus, with endless cases to crack and several tools to draw upon, generalized Fermat equations have attracted no shortage of voracious minds in the number theory community. As he touched on the current state of this collective quest, Zureick-Brown delivered a final surprise: as a graduate student, he had solved one of these generalized Fermat equations himself, the one with exponents (2, 3, 10). Zureick-Brown may not enjoy Wiles’ fame, but his initials proudly stood among their ranks on the final slide, immortalized in a winding list of mathematicians who had tackled other tricky exponents before him — including the most general case, (n, n, n), otherwise known as Fermat’s last theorem. It was a subtle reminder to both the prospective mathematicians and the outsiders in the audience that however petty or esoteric the goals mathematicians set for themselves may appear, no contribution is left uncounted, and for good reason. Instead of fame, something more intrinsic ultimately motivates the work of Zureick-Brown and many others: a thirst for communal problem-solving. In a sense, digging into the smorgasbord is the purest form of joy out there.