It’s that time of year again! The Putnam math competition is this Saturday, 12/1. Here’s a hard calculus problem from last year’s Putnam. As always, I’ll try to justify the thought processes that led me to the solution (even though they are indirect).
Let $F: \mathbb{R}^2 \to \mathbb{R}$ and $g: \mathbb{R} \to \mathbb{R}$ be twice continuously differentiable functions with the following properties:
- $F(u, u) = 0$ for every $u \in \mathbb{R}$;
- for every $x \in \mathbb{R}$, $g(x) > 0$ and $x^2 g(x) \le 1$;
- for every $(u, v) \in \mathbb{R}^2$, the vector $\nabla F(u, v)$ is either $\mathbf{0}$ or parallel to the vector $\langle g(u), -g(v) \rangle$.

Prove that there exists a constant $C$ such that for every $n \ge 2$ and any $x_1 < x_2 < \cdots < x_{n+1}$, we have
$$\min_{i \ne j} |F(x_i, x_j)| \le \frac{C}{n}.$$
Gut reaction: Yikes!
This problem looks very intimidating. I never got beyond this point on the Putnam and decided to work on the other problems instead.
But let’s solve it.
Our first step is to justify to ourselves why such a convoluted statement might be true.
We’ll do this by looking at a specific example of an $F$ that satisfies the conditions. Why is each condition essential? Why would such an $F$ satisfy the conclusion? Could we generalize from our example?
Let’s find an $F$.
The simplest thing to do would be to take $g(x) = 1/x^2$. (Technically we have to take something else around $x = 0$, but let’s not worry about this; we’re just trying to get some intuition.) Then $\nabla F$ is parallel to $\langle 1/u^2, -1/v^2 \rangle$. Integrating with respect to $u$ and $v$, we find that one possibility for $F$ is $F(u, v) = \frac{1}{v} - \frac{1}{u}$. The problem claims that $\min_{i \ne j} |F(x_i, x_j)| \le \frac{C}{n}$. Is this true? Let’s suppose all the $x_i$ are at least $1$ (we didn’t define $g$ very well around $0$). Suppose $x_1 < x_2 < \cdots < x_{n+1}$. Then the differences $F(x_{i+1}, x_i) = \frac{1}{x_i} - \frac{1}{x_{i+1}}$ add up to at most $1$ by telescoping. So one of the differences must be at most $\frac{1}{n}$. We can just take $C = 1$.
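To spell out the telescoping (with the explicit $F(u, v) = \frac{1}{v} - \frac{1}{u}$ above and $x_1 \ge 1$):

$$\sum_{i=1}^{n} F(x_{i+1}, x_i) = \sum_{i=1}^{n} \left( \frac{1}{x_i} - \frac{1}{x_{i+1}} \right) = \frac{1}{x_1} - \frac{1}{x_{n+1}} \le 1,$$

so the $n$ positive terms cannot all exceed $\frac{1}{n}$.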
What was essential here?
The fact that the integral of $1/x^2$ is $-1/x$, which converges as $x \to \infty$. In general, the condition $x^2 g(x) \le 1$, or $g(x) \le 1/x^2$, forces the antiderivative of $g$ to converge as $x \to \pm\infty$. One possibility for $F$ is $F(u, v) = G(u) - G(v)$, where $G$ is an antiderivative of $g$. Then our argument would go through as above: again, we have by telescoping
$$\sum_{i=1}^{n} F(x_{i+1}, x_i) = G(x_{n+1}) - G(x_1) \le M,$$
where $M = \sup G - \inf G < \infty$. Thus letting $C = M$ we find that $\min_{i \ne j} |F(x_i, x_j)| \le \frac{M}{n}$.
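(Why is $M$ finite? A quick bound, using $g(x) \le 1/x^2$ for $x \ne 0$ and the continuity of $g$ near $0$: for $x \ge 1$,

$$G(x) - G(1) = \int_1^x g(t)\,dt \le \int_1^{\infty} \frac{dt}{t^2} = 1,$$

and similarly $G(-1) - G(x) \le 1$ for $x \le -1$; on $[-1, 1]$ the continuous function $G$ is bounded. So $\sup G - \inf G$ is finite.)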
But we’re just getting started…
We now have some idea why the statement would be true. However, we have not solved the problem at all! We took an $F$ such that $\nabla F$ was equal to $\langle g(u), -g(v) \rangle$. We need to prove it for all $F$ such that $\nabla F$ is merely parallel to $\langle g(u), -g(v) \rangle$. This means the gradient could be scaled by an arbitrary number depending on $(u, v)$:
$$\nabla F(u, v) = h(u, v) \, \langle g(u), -g(v) \rangle.$$
This would mess up any way to write $F$ as a difference of a function in $u$ and a function in $v$ as above. What do we do?
(Thus we see that to solve our problem, we have to give at least a partial answer to the question: given only that the gradient field of a function is parallel to a given vector field, rather than knowing the gradient field itself, how do we recover the function?)
A condition on gradient fields gives a condition on h
It’s clear that we need to understand what $h$ could be. Let’s find some condition on $h$. We know that in order for some vector field $\langle P(u, v), Q(u, v) \rangle$ to be a gradient field, we must have
$$\frac{\partial P}{\partial v} = \frac{\partial Q}{\partial u}.$$
(This fact comes from the fact that for a twice continuously differentiable function $F$, we have $\frac{\partial^2 F}{\partial u \, \partial v} = \frac{\partial^2 F}{\partial v \, \partial u}$.)
Applying this to $\langle h(u, v)\, g(u), \; -h(u, v)\, g(v) \rangle$ gives
$$g(v) \frac{\partial h}{\partial u} + g(u) \frac{\partial h}{\partial v} = 0. \tag{1}$$
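In more detail (this is just the product rule, since $g(u)$ does not depend on $v$ and $g(v)$ does not depend on $u$):

$$\frac{\partial}{\partial v}\big( h(u, v)\, g(u) \big) = g(u) \frac{\partial h}{\partial v}, \qquad \frac{\partial}{\partial u}\big( -h(u, v)\, g(v) \big) = -g(v) \frac{\partial h}{\partial u},$$

and setting the two equal and rearranging gives (1).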
We can’t “solve” this PDE… or can we?
I got stuck here for a very long time. The basic problem is that it seems like “too many” functions solve this equation. After all, we have a PDE in two variables, but we only have one equation relating them. How can we hope to say anything about $h$?
The idea is that $h$ would have a very general form. We take as inspiration the (one-way) wave equation
$$\frac{\partial w}{\partial t} = c \, \frac{\partial w}{\partial x},$$
which has as a general solution the traveling wave $w(x, t) = f(x + ct)$, where $f$ is any “nice” function. The equation is not enough for us to give a very explicit solution, but we’ve still managed to describe all solutions: they must be functions in terms of the quantity $x + ct$!
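(Quick check that the traveling wave works: if $w(x, t) = f(x + ct)$, then

$$\frac{\partial w}{\partial t} = c f'(x + ct) = c \, \frac{\partial w}{\partial x}.$$

Conversely, any solution is constant along the lines $x + ct = \text{const}$, which is exactly the level-curve picture we are about to use for $h$.)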
Can we do the same here? Is $h$ a function in some expression? What would be a natural choice? The equation for $h$ actually looks a lot like the gradient equation for $F$. Of course, $h = \text{constant}$ is a solution. Playing around a bit, we find $h(u, v) = G(u) - G(v)$ is a solution—look familiar? Let’s guess that $h$ is a function in $G(u) - G(v)$, i.e., we’ll try to prove $h(u, v) = f(G(u) - G(v))$ for some function $f$.
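As a quick check that $h(u, v) = G(u) - G(v)$ really does satisfy (1):

$$g(v) \frac{\partial}{\partial u}\big( G(u) - G(v) \big) + g(u) \frac{\partial}{\partial v}\big( G(u) - G(v) \big) = g(v)\, g(u) - g(u)\, g(v) = 0.$$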
This is the key observation for the problem.
h is a function in G(u)-G(v) because G(u)-G(v)=k are the level lines
Saying that $h(u, v) = f(G(u) - G(v))$ for some function $f$ is the same as saying that the value of $h(u, v)$ depends only on the value of $G(u) - G(v)$. Why would this be the case? (If the above ideas were kind of unmotivated, hopefully our proof now will be enlightening!) Another way of saying this is that $h$ doesn’t change if we move along the curve $G(u) - G(v) = k$ for $k$ constant.
In fact, (1) is exactly telling us that $h$ doesn’t change along a certain direction: it says that
$$\nabla h(u, v) \cdot \langle g(v), g(u) \rangle = 0, \tag{2}$$
i.e., the directional derivative of $h$ in the direction $\langle g(v), g(u) \rangle$ is $0$. Now the curves $G(u) - G(v) = k$ are exactly the integral curves of $\langle g(v), g(u) \rangle$!*** Indeed, implicitly differentiating $G(u) - G(v) = k$ gives $\frac{dv}{du} = \frac{g(u)}{g(v)}$, which is exactly the slope of the direction $\langle g(v), g(u) \rangle$.
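Equivalently, we can check directly that $G(u) - G(v)$ is constant along any curve $(u(t), v(t))$ whose velocity is $\langle g(v(t)), g(u(t)) \rangle$:

$$\frac{d}{dt}\big( G(u(t)) - G(v(t)) \big) = g(u)\, u'(t) - g(v)\, v'(t) = g(u)\, g(v) - g(v)\, g(u) = 0.$$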
Suppose we move along $G(u) - G(v) = k$ from $(u_0, v_0)$ to $(u_1, v_1)$ with unit speed in the $\langle g(v), g(u) \rangle$ direction. Let this path be $\gamma(t)$. How does $h$ change? Using (2),
$$\frac{d}{dt} h(\gamma(t)) = \nabla h(\gamma(t)) \cdot \gamma'(t) = 0.$$
So $h(u_0, v_0) = h(u_1, v_1)$. Since the curve $G(u) - G(v) = k$ is connected (see below), this shows $h(u, v)$ is a function of $G(u) - G(v)$. We can write
$$h(u, v) = f(G(u) - G(v))$$
for some function $f$.
***(We should really check some things here. Note that on the curve $G(u) - G(v) = k$, $u$ is an at most single-valued function of $v$. Indeed, fixing a $v$, $G(u) - G(v)$ is a strictly increasing function of $u$ because $G$ is the integral of the positive function $g$. So there is at most one value of $u$ that works. Moreover, the curve is connected. Indeed, if $(u_0, v_0)$ and $(u_1, v_1)$ are points on the curve, then for any $v$ between $v_0$ and $v_1$ we have $G(v) + k$ between $G(u_0)$ and $G(u_1)$, so by the Intermediate Value Theorem, there is a solution $u$ to $G(u) - G(v) = k$. Moreover, continuity considerations show that $u$ depends on $v$ continuously. Or you can probably just argue by taking the derivative.)
We’ve gotten through the hard part: we know what form $\nabla F$ has to take. We know
$$\nabla F(u, v) = f(G(u) - G(v)) \, \langle g(u), -g(v) \rangle.$$
Since we have an expression for the gradient field, we can now find $F$, up to a constant, in terms of this function $f$. Let $P$ be the antiderivative of $f$ with $P(0) = 0$. Then
$$F(u, v) = P(G(u) - G(v)) + c$$
for some constant $c$. But the condition $F(u, u) = 0$ actually gives $c = 0$ (we have $P(0) = 0$ by choice of antiderivative).
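As a sanity check, the chain rule confirms that this $F$ has a gradient of the required form:

$$\nabla\, P\big( G(u) - G(v) \big) = P'\big( G(u) - G(v) \big)\, \langle g(u), -g(v) \rangle = f\big( G(u) - G(v) \big)\, \langle g(u), -g(v) \rangle,$$

which is exactly $h(u, v) \, \langle g(u), -g(v) \rangle$.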
Now suppose $x_1 < x_2 < \cdots < x_{n+1}$. Again, we note $G$ is increasing and
$$\sum_{i=1}^{n} \big( G(x_{i+1}) - G(x_i) \big) = G(x_{n+1}) - G(x_1) \le M.$$
This means that $0 \le G(x_{i+1}) - G(x_i) \le \frac{M}{n}$ for some $i$. Then
$$|F(x_{i+1}, x_i)| = \big| P\big( G(x_{i+1}) - G(x_i) \big) - P(0) \big|.$$
Because $P$ has a continuous derivative, on $[0, M]$ this derivative has a maximum absolute value, say $K$. Then $|P(t) - P(0)| \le Kt$ for all $t \in [0, M]$. Hence
$$|F(x_{i+1}, x_i)| \le K \big( G(x_{i+1}) - G(x_i) \big) \le \frac{KM}{n}.$$
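(The bound $|P(t) - P(0)| \le Kt$ is just the Mean Value Theorem: for each $t \in (0, M]$ there is some $\xi \in (0, t)$ with

$$|P(t) - P(0)| = |P'(\xi)| \, t \le K t,$$

and we are applying this with $t = G(x_{i+1}) - G(x_i)$, which lies in $[0, M]$.)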
We let $C = KM$ and we are done.