Thanks to Changhao Chen, Burak Erdoğan, and Igor Shparlinski for useful discussions surrounding this post.
Let be a positive integer (which we take later to be
). We consider the exponential sum
where . We are interested in bounds for
where are a family of sets indexed by some parameter
. For simplicity, we only consider, for fixed
,
We would like to show that for most , (1) is small. This is supported by the usual heuristic that we expect square root cancellation in (1). On the other hand,
is large for some special values of
(say
), so it certainly matters how the
lie in space. Burak Erdoğan and I studied these types of questions (also the topic of this previous blog post), motivated by understanding the fractal dimension of solutions to certain PDE (see the introduction and references therein for a complete history). We let
Theorem 1 (Chen-Shparlinski): Let
and
Then for a.e. , with respect to the Lebesgue measure,
Let us make some remarks before beginning the proof. It is worth noting that for very small we know the best value of
. Indeed, Brandes, Parsell, Poulias, Vaughan, and I showed
is admissible and cannot be improved. In the aforementioned paper with Erdoğan, we mentioned that one could obtain a variant of Theorem 1 by invoking Vinogradov’s mean value theorem, which is a best possible bound for
The guiding principle is that if an exponential sum is large at a single point, then one can create many other points where the exponential sum is large. On the other hand, there cannot be too many points where this occurs, as (2) is small. This is a somewhat unsatisfactory approach, as it is not clear that the variable mean value in (2) is the right tool to analyze the two variable
. Recently, Chen and Shparlinski instead utilized the following two variable mean value type theorem of Wooley, which turns out to improve the bounds a bit and simplify the proof.
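Here “two variable” refers to averaging over both coefficients of the sum. Writing, for concreteness, $S(x,y;N) = \sum_{n \le N} e(xn + yn^k)$ (this notation, and the choice of monomials, is my shorthand for the sum above rather than a quotation of it), such a mean value is an average of the shape

$$\int_0^1 \int_0^1 \bigl| S(x,y;N) \bigr|^{2s} \, dx \, dy,$$

with the admissible exponents specified in the statement below.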
Theorem 2 (Wooley): Suppose
is an integer. Then for any
Note that Theorem 2 is best possible, in a certain sense. By considering a small rectangle near
, we see
Thus Theorem 2 cannot be improved much, for the values of for which it applies. It is not clear that the range of
is best possible. A natural conjecture is that Theorem 2 holds for
Such an improvement would improve Theorem 1.
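Before turning to the proof, here is a minimal numerical sanity check of the heuristics from the introduction, in the quadratic case $S(x,y;N) = \sum_{n \le N} e(xn + yn^2)$ (again my assumed normalization, for illustration only): at randomly chosen points one observes roughly square root cancellation, while at special rational points the sum has size comparable to $N$.

```python
import numpy as np

# Sketch only: assumes the quadratic Weyl sum S(x, y; N) = sum_{n<=N} e(x*n + y*n^2),
# with e(t) = exp(2*pi*i*t); the post's exact normalization may differ.
def weyl_sum(x, y, N):
    n = np.arange(1, N + 1, dtype=float)
    return np.exp(2j * np.pi * (x * n + y * n * n)).sum()

N = 100_000
rng = np.random.default_rng(0)

print("sqrt(N)      ~", round(N ** 0.5))
# Generic points: square root cancellation, |S| of rough size sqrt(N).
for _ in range(3):
    print("generic |S|  ~", round(abs(weyl_sum(rng.random(), rng.random(), N))))

# Special rational points: little cancellation, |S| of size comparable to N.
print("|S(0, 0)|    =", round(abs(weyl_sum(0.0, 0.0, N))))    # equals N exactly
print("|S(0, 1/3)|  ~", round(abs(weyl_sum(0.0, 1 / 3, N))))  # about N / sqrt(3)
```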
Proof of Theorem 1: We only prove for , for simplicity. The reader may consult Chen and Shparlinski’s paper for the general case, where the additional idea is to employ the completion technique.
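The completion technique is only name-checked here, so let me record a standard version of it (my paraphrase, not necessarily the exact variant Chen and Shparlinski use). For $1 \le M \le N$ and any function $f$, orthogonality of additive characters gives

$$\sum_{1 \le n \le M} f(n) \;=\; \sum_{a=0}^{N-1} \Bigl( \frac{1}{N} \sum_{1 \le m \le M} e\Bigl(-\frac{am}{N}\Bigr) \Bigr) \sum_{1 \le n \le N} f(n)\, e\Bigl(\frac{an}{N}\Bigr),$$

and the weights in parentheses have total size $O(\log N)$, so an incomplete sum is controlled, up to a factor of $\log N$, by the largest completed (twisted) sum of length $N$.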
Let and
. We partition
into a grid of
small rectangles of size approximately
We label these rectangles by
The point is that (1) does not change much on such rectangles. Indeed, it is easy to check, using , that (for
large enough) if
for some , then
for any . We let
consist of the rectangles
such that there is a
with
. Combining this with the mean value estimate in Theorem 2, we see that
cannot be too large.
Indeed, by Markov’s inequality and Theorem 2, we see that for ,
We now consider the image of these rectangles under the map
We have
where
Note that does not distort rectangles too much, so that
where is the Lebesgue measure. Thus, by subadditivity of the Lebesgue measure and (3),
Note here that is fixed. What we actually care about is what happens for a fixed
and
for some large
. There is a standard trick from probability (or analysis): apply the Borel-Cantelli lemma. We first apply the above result with
to find
By the Borel-Cantelli lemma, if
then the set of such that (4) holds for infinitely many
has measure zero. This is implied by
as long as is sufficiently small. This, in turn, is implied by