254A announcement: Analytic prime number theory

What's new

In the winter quarter (starting January 5) I will be teaching a graduate topics course entitled “An introduction to analytic prime number theory”. As the name suggests, this is a course covering many of the analytic number theory techniques used to study the distribution of the prime numbers {\mathcal P} = \{2,3,5,7,11,\dots\}. I will list the topics I intend to cover in this course below the fold. As with my previous courses, I will place lecture notes online on my blog in advance of the physical lectures.

The type of results about primes that one aspires to prove here is well captured by Landau’s classical list of problems:

  1. Even Goldbach conjecture: every even number N greater than two is expressible as the sum of two primes.
  2. Twin prime conjecture: there are infinitely many pairs n, n+2 which are simultaneously prime.
  3. Legendre’s conjecture:…


Beating Procrastination: Putting In the Necessary Daily Work Hours as a PhD Student

As a PhD student, whenever my advisor asked how many hours I worked each week, I could only stammer, and then came the pointed question: "Do you work one hour a day, or two?" That night I would feel intensely anxious and panicked, but by the next morning the panic was forgotten and the procrastination resumed. I idled away the days, trapped in the PhD time sink and unable to pull myself out; late at night I regretted the hours wasted during the day. Days, weeks, months, even a whole year could pass in this procrastinating state with no research progress, and every time I saw another PhD student write several papers in a year, the pressure grew. Even so, procrastination can keep haunting a PhD student, possibly indefinitely. For someone with severe procrastination, guaranteeing several hours of research a day sounds like pure fantasy. For a while my working state swung up and down and my research stalled completely; a typical day looked like this: get up, feel anxious, busy myself with nothing, feel defeated, feel anxious, relax, discover the earlier work was wrong, feel defeated, procrastinate, despair. In that period, never mind working eight or nine hours a day; overcoming the mental block and working efficiently for even one hour was already an achievement.

Later, as a senior student facing the pressure to graduate, research had to move forward, so after much painful reflection I set out to overcome years of procrastination. They say that keeping up a new routine for two weeks is enough to form a habit, but suddenly changing a procrastinating lifestyle is easier said than done. "The sea of suffering is boundless; turn back and the shore is at hand." For a senior PhD student, the boundless sea of suffering is real, but the shore behind is not: look back and there is no retreat, so the only option is to swim with all one's strength to the far shore. I also figured that whatever the people around me could do, I could probably do too. After much thought, I settled on using Google Calendar to record my time and see what I actually did every day.

The first step is to figure out what must be done every day. Research is, of course, mandatory for a PhD student, but living far from home there are plenty of everyday chores to handle, and the university also requires every PhD student to complete a certain amount of teaching-assistant work each term. All of this takes time. So, a week in advance, mark these commitments in Google Calendar to indicate that those time slots are inflexible and must be dealt with. Be as precise as possible when marking them: commuting time, time spent on the road, meals, rest, exercise, social activities, and so on.

Once these blocks are marked, it becomes clear how much time per day, per week, or even per month can at most be devoted to research. What remains is to actually do the research. Research is not like a regular job: a job is mostly repetitive work with little need for creativity, while research is the opposite and demands a great deal of creation. Because a doctoral thesis requires the student to complete a project or topic independently, it can easily breed a sense of failure. People who choose to do a PhD may not all be exceptionally smart, but they are at least not fools, and most earned their teachers' praise as undergraduates. For ordinary studying, as long as you set a daily reading quota within your ability, you can usually finish it on time. Research is different: requiring a PhD student to settle a thesis problem within days or even weeks is essentially impossible; even going without sleep and thinking about the problem around the clock does not guarantee a new idea in that time. A perfectionist will easily feel defeated in the process, because the road of research is never as smooth as imagined; it advances through twists and turns. At that point one must abandon perfectionism: some things only need to be done, not done perfectly; what matters is to keep doing them.

To guarantee some research time every day, the working hours must be recorded conscientiously. For a severe procrastinator, the initial work sessions should not be too long; an hour, or even half an hour, is best. This is the so-called 30-minute work method. Whenever you focus on research for half an hour, record half an hour of work in the calendar; if you were reading half-heartedly, do not record that time. The daily total need not be large either. Give up the idea of working ten hours a day; three hours a day is enough at the start. Once those three hours of research are done, the remaining twenty-one hours of the day are yours: eat, sleep, do whatever you like. The point of the 30-minute method is to relieve a PhD student's time anxiety and improve the working and studying process, with quality and quantity tracked precisely. Divide a day, a week, even a month into controllable chunks of time, and let steady accumulation push the research forward. As the saying goes: without accumulating single steps there is no journey of a thousand miles; without accumulating small streams there are no rivers and seas. To fight procrastination, pick a small, actionable goal and concentrate on it for thirty minutes, even an hour. Do not dwell on some grand objective; focus on the road under your feet.

Reading this, some may say: what is three hours a day, when a day has twenty-four? True, for someone who can think about hard research problems for eight or nine hours a day, three hours is little. But for ordinary people, research differs fundamentally from studying. When studying, your own ability lets you keep solving the exercises at the back of the book, which in turn motivates you to keep studying every day. In research, almost no respectable doctoral problem can be finished by a PhD student within days, weeks, or even months. After several months without a new result or a new idea, the motivation to continue disappears, and pressing on can easily deepen the sense of failure. At that point, shift your attention from the final result to the process: track only how many hours you worked each day, week, or month; stop asking when the result will come, and stop expecting a few days of effort to crack the final research problem. What is needed each day is an imperfect but entirely human effort. Three hours a day may not look like much, but accumulated over several years it is an enormous amount of work, already averaging about 1,000 hours of research a year. Do not make finishing a book, completing a paper, or working four hours straight your goal; aim instead for 30-60 minutes of high-quality, focused work.

Finally, for a PhD student, failure in research is routine. Clinging to a perfectionist mindset makes one unwilling to take risks or to act. Making all kinds of excuses for yourself is a form of retreat; with a growth mindset, you become willing to act and attack the problem, however difficult or out of reach it looks, and however little you enjoy doing it. Rather than believing the excuses you invent and letting them drag you into the mire, ignore them and act directly to solve the problem.

Perron-Frobenius Operator

Consider a map f which may have a finite (or countable) number of discontinuities, or points where the derivative does not exist. We assume that there are points

\displaystyle q_{0}<q_{1}<\cdots<q_{k} \quad \text{or} \quad q_{0}<q_{1}<\cdots<q_{\infty}<\infty

such that f restricted to each open interval A_{j}=(q_{j-1},q_{j}) is C^{2}, with bounds on the first and second derivatives. Assume that the interval [q_{0},q_{k}] (or [q_{0},q_{\infty}]) is positively invariant, i.e., f(x)\in [q_{0},q_{k}] for all x\in [q_{0},q_{k}] (or f(x)\in [q_{0},q_{\infty}] for all x\in[q_{0},q_{\infty}]).

For such a map, we want to construct a sequence of density functions that converges to the density function of an invariant measure. Starting with \rho_{0}(x)\equiv(q_{k}-q_{0})^{-1} (or \rho_{0}(x)\equiv(q_{\infty}-q_{0})^{-1}), assume that the densities up to \rho_{n}(x) have been defined, and then define \rho_{n+1}(x) as follows:

\displaystyle \rho_{n+1}(x)=P(\rho_{n})(x)=\sum_{y\in f^{-1}(x)}\frac{\rho_{n}(y)}{|Df(y)|}.

This operator P, which takes one density function to another, is called the Perron-Frobenius operator. The Cesàro averages of the densities converge to a density function \rho^{*}(x),

\displaystyle \rho^{*}(x)=\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\rho_{n}(x).

The construction guarantees that \rho^{*}(x) is the density function for an invariant measure \mu_{\rho^{*}}.
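As a quick sanity check, the operator can be iterated numerically. The sketch below (with a hypothetical helper `perron_frobenius`, not from the original text) encodes P for a piecewise-linear map by listing each branch's inverse and slope, and applies it to the doubling map f(x)=2x mod 1 (the map of Example 2 below), for which the uniform density is invariant.

```python
# Numerically apply the Perron-Frobenius operator
#   P(rho)(x) = sum_{y in f^{-1}(x)} rho(y) / |Df(y)|
# for a piecewise-linear map given by its branch inverses and slopes.

def perron_frobenius(rho, branches):
    """rho: density as a callable; branches: list of (inverse, slope) pairs.
    Each inverse maps x back to that branch's preimage, or returns None
    if x has no preimage on that branch."""
    def new_rho(x):
        total = 0.0
        for inv, slope in branches:
            y = inv(x)
            if y is not None:
                total += rho(y) / abs(slope)
        return total
    return new_rho

# Doubling map f(x) = 2x mod 1: two full branches, each with slope 2.
doubling = [
    (lambda x: x / 2, 2.0),        # inverse of the left branch
    (lambda x: (x + 1) / 2, 2.0),  # inverse of the right branch
]

rho = lambda x: 1.0                # start from the uniform density rho_0
for _ in range(5):
    rho = perron_frobenius(rho, doubling)

# The uniform density is invariant: P(1)(x) = 1/2 + 1/2 = 1.
print(rho(0.3))  # -> 1.0
```

For maps with countably many branches, the list of branches would be truncated at some finite depth; the construction itself is unchanged.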

Example 1. Let

\displaystyle f(x)= \begin{cases}  x &\mbox{if } x\in(0,\frac{1}{2}), \\  2x-1 &\mbox{if } x\in(\frac{1}{2},1).  \end{cases}

[Figure: graph of f]

We construct the first few density functions by applying the Perron-Frobenius operator; their form indicates the invariant density.
Take \rho_{0}(x)\equiv1 on [0,1]. From the definition of f(x), the slopes on (0,\frac{1}{2}) and (\frac{1}{2},1) are 1 and 2, respectively. If x\in (\frac{1}{2},1), then it has only one pre-image, lying in (\frac{1}{2},1); if x\in(0,\frac{1}{2}), then it has two pre-images, one x' in (0,\frac{1}{2}) and the other x'' in (\frac{1}{2},1). Therefore,

\displaystyle \rho_{1}(x)= \begin{cases}  \frac{1}{1}+\frac{1}{2} &\mbox{if } x\in(0,\frac{1}{2}), \\  \frac{1}{2} &\mbox{if } x\in(\frac{1}{2},1).  \end{cases}

By similar considerations,

\displaystyle \rho_{2}(x)=\begin{cases}1+\frac{1}{2}+\frac{1}{2^{2}} &\mbox{if } x\in(0,\frac{1}{2}), \\ \frac{1}{2^{2}} &\mbox{if } x\in(\frac{1}{2},1).\end{cases}

By induction, we get

\displaystyle \rho_{n}(x)=\begin{cases}1+\frac{1}{2}+\cdots+\frac{1}{2^{n}} &\mbox{if } x\in(0,\frac{1}{2}), \\ \frac{1}{2^{n}} &\mbox{if } x\in(\frac{1}{2},1).\end{cases}

Now, we begin to calculate the density function \rho^{*}(x). If x\in(0,\frac{1}{2}), then
\displaystyle  \rho^{*}(x)=\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\rho_{n}(x)  =\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1} \sum_{m=0}^{n}\frac{1}{2^{m}}  =\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\left(2-\frac{1}{2^{n}}\right)=2.
If x\in(\frac{1}{2},1), then
\displaystyle  \rho^{*}(x)=\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\rho_{n}(x)  =\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\frac{1}{2^{n}}  =\lim_{k\rightarrow \infty}\frac{1}{k}\left(2-\frac{1}{2^{k-1}}\right)=0.
i.e.

\displaystyle \rho^{*}(x)= \begin{cases}  2 &\mbox{if } x\in(0,\frac{1}{2}), \\  0 &\mbox{if } x\in(\frac{1}{2},1).  \end{cases}
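The closed form above can also be checked numerically: for piecewise-constant densities, P reduces to the recursion (a, b) \mapsto (a + b/2, b/2), and the Cesàro averages of the iterates approach (2, 0). A minimal sketch (not from the original text):

```python
# Example 1 reduced to constants: rho_n = a_n on (0,1/2) and b_n on (1/2,1).
# The Perron-Frobenius operator maps (a, b) to (a + b/2, b/2); we average
# the iterates (Cesaro mean), which is how rho* is defined.
a, b = 1.0, 1.0            # rho_0 is the uniform density
sum_a, sum_b = 0.0, 0.0
k = 100000
for _ in range(k):
    sum_a += a
    sum_b += b
    a, b = a + b / 2, b / 2

print(sum_a / k, sum_b / k)   # -> approximately 2.0 and 0.0
```

Note that b_n = 1/2^n itself tends to 0 geometrically, so here the plain limit and the Cesàro limit agree; the averaging matters for maps whose densities oscillate.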

Example 2. Let

\displaystyle f(x)=\begin{cases}  2x &\mbox{if } x\in(0,\frac{1}{2}), \\  2x-1 &\mbox{if } x\in(\frac{1}{2},1).  \end{cases}

[Figure: graph of f]

Take \rho_{0}(x)\equiv1 on (0,1). By induction, \rho_{n}(x)\equiv1 on (0,1) for all n\geq 0. Therefore, \rho^{*}(x)\equiv1 on (0,1).

Example 3. Let

\displaystyle f(x)=\begin{cases}  x &\mbox{if } x\in(0,\frac{1}{2}), \\  2^{n+1}\cdot\left(x-\left(1-\frac{1}{2^{n}}\right)\right) &\mbox{if } x\in\left(1-\frac{1}{2^{n}},1-\frac{1}{2^{n+1}}\right) \text{ for all } n\geq 1.\end{cases}

[Figure: graph of f]

Take \rho_{0}(x)\equiv1 on (0,1). Assume

\displaystyle \rho_{n}(x)= \begin{cases}  a_{n} &\mbox{if } x\in(0,\frac{1}{2}), \\  b_{n} &\mbox{if } x\in(\frac{1}{2},1).  \end{cases}

for all n\geq 0. It is obvious that a_{0}=b_{0}=1. By similar considerations,
\displaystyle \rho_{n+1}(x)= \begin{cases}  \frac{a_{n}}{1}+\frac{b_{n}}{4}+\frac{b_{n}}{8}+\frac{b_{n}}{16}+\cdots= a_{n}+\frac{b_{n}}{2} &\mbox{if } x\in(0,\frac{1}{2}), \\  \frac{b_{n}}{4}+\frac{b_{n}}{8}+\frac{b_{n}}{16}+\cdots = \frac{b_{n}}{2} &\mbox{if } x\in(\frac{1}{2},1).  \end{cases}
That means

\displaystyle \left( \begin{array}{c}  a_{n+1} \\  b_{n+1}  \end{array} \right)  =\left( \begin{array}{c}  a_{n}+\frac{1}{2}b_{n} \\  \frac{1}{2}b_{n}  \end{array} \right)  = \left( \begin{array}{cc}  1 & \frac{1}{2} \\  0 & \frac{1}{2}  \end{array} \right)  \left( \begin{array}{c}  a_{n} \\  b_{n}  \end{array} \right)

for all n\geq 0. From direct calculation, \displaystyle a_{n}=2-\frac{1}{2^{n}} and \displaystyle b_{n}=\frac{1}{2^{n}} for all n\geq 0. Therefore,

\displaystyle \rho^{*}(x)=\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\rho_{n}(x)=\begin{cases}  2 &\mbox{if } x\in (0,\frac{1}{2}), \\  0 &\mbox{if } x\in (\frac{1}{2},1).  \end{cases}

Example 4. Let

\displaystyle f(x)=\begin{cases}  1.5 x &\mbox{if } x\in(0,\frac{1}{2}), \\  2^{n+1}\cdot\left(x-\left(1-\frac{1}{2^{n}}\right)\right) &\mbox{if } x\in\left(1-\frac{1}{2^{n}},1-\frac{1}{2^{n+1}}\right) \text{ for all } n\geq 1.\end{cases}

[Figure: graph of f]

Take \rho_{0}(x)\equiv1 on (0,1). Assume

\displaystyle \rho_{n}(x)= \begin{cases}  a_{n} &\mbox{if } x\in(0,\frac{3}{4}), \\  b_{n} &\mbox{if } x\in(\frac{3}{4},1).  \end{cases}

for all n\geq 0. It is obvious that a_{0}=b_{0}=1. By similar considerations,

\displaystyle \left( \begin{array}{ccc}  a_{n+1} \\  b_{n+1}  \end{array} \right)  =\left( \begin{array}{ccc}  \frac{11}{12}a_{n}+\frac{1}{4}b_{n} \\  \frac{1}{4}a_{n}+\frac{1}{4}b_{n}  \end{array} \right)  = \left( \begin{array}{ccc}  \frac{11}{12} & \frac{1}{4} \\  \frac{1}{4} & \frac{1}{4}  \end{array} \right)  \left( \begin{array}{ccc}  a_{n} \\  b_{n}  \end{array} \right)

for all n\geq 0. From matrix diagonalization, \displaystyle a_{n}=\frac{6}{5}-\frac{1}{5}\cdot\frac{1}{6^{n}} and \displaystyle b_{n}=\frac{2}{5}+\frac{3}{5}\cdot\frac{1}{6^{n}} for all n\geq 0.

Therefore,

\displaystyle \rho^{*}(x)=\lim_{k\rightarrow \infty}\frac{1}{k}\sum_{n=0}^{k-1}\rho_{n}(x)=\begin{cases}  \frac{6}{5} &\mbox{if } x\in (0,\frac{3}{4}), \\  \frac{2}{5} &\mbox{if } x\in (\frac{3}{4},1).  \end{cases}
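The closed forms from the diagonalization can be verified by iterating the matrix recursion directly; the eigenvalues 1 and 1/6 make the iterates converge quickly to the invariant values. A minimal check (not from the original text):

```python
# Example 4: iterate (a, b) -> (11/12*a + 1/4*b, 1/4*a + 1/4*b) and compare
# each step with the closed forms obtained from diagonalization:
#   a_n = 6/5 - (1/5)*6**(-n),   b_n = 2/5 + (3/5)*6**(-n).
a, b = 1.0, 1.0
for n in range(20):
    assert abs(a - (6/5 - (1/5) * 6**(-n))) < 1e-12
    assert abs(b - (2/5 + (3/5) * 6**(-n))) < 1e-12
    a, b = 11/12 * a + 1/4 * b, 1/4 * a + 1/4 * b

print(a, b)   # converging to 6/5 = 1.2 and 2/5 = 0.4
```

Since b_n occupies an interval of length 1/4 and a_n one of length 3/4, the limiting density integrates to (6/5)(3/4) + (2/5)(1/4) = 1, as a probability density must.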

Perron-Frobenius Theory

Definition. Let A=[a_{ij}] be a k\times k matrix. We say A is non-negative if a_{ij}\geq 0 for all i,j. Such a matrix is called irreducible if for any pair i,j there exists some n>0 such that a_{ij}^{(n)}>0 where a_{ij}^{(n)} is the (i,j)-th element of A^{n}. The matrix A is irreducible and aperiodic if there exists n>0 such that a_{ij}^{(n)}>0 for all i,j.

Perron-Frobenius Theorem Let A=[a_{ij}] be a non-negative k\times k matrix.

(i) There is a non-negative eigenvalue \lambda such that no eigenvalue of A has absolute value greater than \lambda.

(ii) We have \min_{i}(\sum_{j=1}^{k}a_{ij})\leq \lambda\leq \max_{i}(\sum_{j=1}^{k}a_{ij}).

(iii) Corresponding to the eigenvalue \lambda there is a non-negative left (row) eigenvector u=(u_{1},\dots, u_{k}) and a non-negative right (column) eigenvector v=(v_{1},\dots, v_{k})^{T}.

(iv) If A is irreducible then \lambda is a simple eigenvalue and the corresponding eigenvectors are strictly positive (i.e. u_{i}>0, v_{i}>0 all i).

(v) If A is irreducible then \lambda is the only eigenvalue of A with a non-negative eigenvector.

Theorem.
Let A be an irreducible and aperiodic non-negative matrix. Let u=(u_{1},\dots, u_{k}) and v=(v_{1},\dots, v_{k})^{T} be the strictly positive eigenvectors corresponding to the largest eigenvalue \lambda as in the previous theorem, normalized so that \sum_{i=1}^{k}u_{i}v_{i}=1. Then for each pair i,j, \lim_{n\rightarrow \infty} \lambda^{-n}a_{ij}^{(n)}=u_{j}v_{i}.

Now let us revisit Example 4. Its matrix A is non-negative, irreducible, and aperiodic (indeed strictly positive), and \lambda=1 is the eigenvalue of largest absolute value. (The matrix of Example 3 is upper triangular, hence not irreducible, so the theorem does not apply there.) From the Perron-Frobenius Theorem, u_{i}, v_{j}>0 for all pairs i,j. Then for each pair i,j,
\lim_{n\rightarrow \infty}a_{ij}^{(n)}=u_{j}v_{i}. That means \lim_{n\rightarrow \infty}A^{n} is a strictly positive k\times k matrix.
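As a concrete illustration, take the matrix of Example 4: \lambda=1, and both the left and right eigenvectors for \lambda are proportional to (3,1). Normalizing so that u\cdot v=1 (say v=(3,1)^{T}, u=(3,1)/10), the theorem predicts \lim_{n\to\infty}A^{n} has entries u_{j}v_{i}, i.e. the matrix [[9/10, 3/10], [3/10, 1/10]]. A quick pure-Python check (not from the original text):

```python
# A is the matrix from Example 4; its largest eigenvalue is lambda = 1,
# with left and right eigenvectors both proportional to (3, 1).
A = [[11/12, 1/4],
     [1/4,   1/4]]

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

An = [[1.0, 0.0], [0.0, 1.0]]   # identity; will hold A**n
for _ in range(60):
    An = matmul(An, A)

# Predicted limit: entries u_j * v_i with v = (3, 1), u = (3, 1)/10,
# so that u . v = 1.
limit = [[0.9, 0.3], [0.3, 0.1]]
print(An)   # -> entries close to [[0.9, 0.3], [0.3, 0.1]]
```

Consistently with the density computation, applying the limit matrix to the initial vector (1,1)^{T} gives (1.2, 0.4), the values of \rho^{*} found in Example 4.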

Markov Maps

Definition of Markov Maps. Let N be a compact interval. A C^{1} map f:N\rightarrow N is called Markov if there exists a finite or countable family I_{i} of disjoint open intervals in N such that

(a) N\setminus \cup_{i}I_{i} has Lebesgue measure zero, and there exist C>0 and \gamma>0 such that, for each n\in \mathbb{N} and each interval I with f^{j}(I) contained in one of the intervals I_{i} for each j=0,1,\dots,n, one has

\displaystyle \left| \frac{Df^{n}(x)}{Df^{n}(y)}-1 \right| \leq C\cdot |f^{n}(x)-f^{n}(y)|^{\gamma} \text{ for all } x,y\in I;

(b) if f(I_{k})\cap I_{j}\neq \emptyset, then f(I_{k})\supseteq I_{j};

(c) there exists r>0 such that |f(I_{i})|\geq r for each i.

As usual, let \lambda be the Lebesgue measure on N. We may assume that \lambda is a probability measure, i.e., \lambda(N)=1. Usually, we will denote the Lebesgue measure of a Borel set A by |A|.

Theorem.  Let f:N\rightarrow N be a Markov map and let \{I_{i}\} be the corresponding partition. Then there exists an f-invariant probability measure \mu on the Borel sets of N which is absolutely continuous with respect to the Lebesgue measure \lambda. This measure satisfies the following properties:

(a) its density \frac{d\mu}{d\lambda} is uniformly bounded and Hölder continuous. Moreover, for each i the density is either zero on I_{i} or uniformly bounded away from zero.

If for every i and j one has f^{n}(I_{j})\supseteq I_{i} for some n\geq 1 then

(b) the measure is unique and its density \frac{d\mu}{d\lambda} is strictly positive;

(c) f is exact with respect to \mu;

(d) \lim_{n\rightarrow \infty} |f^{-n}(A)|=\mu(A) for every Borel set A\subseteq N.

If f(I_{i})=N for each interval I_{i}, then

(e) the density of \mu is also uniformly bounded from below.

Notes on Shape of Inner Space

Shape of Inner Space


String Theory and the Geometry of the Universe’s Hidden Dimensions

Shing-Tung YAU and Steve NADIS

Chapter 3: P.39

My personal involvement in this area began in 1969, during my first semester of graduate studies at Berkeley. I needed a book to read during Christmas break. Rather than selecting Portnoy’s Complaint, The Godfather, The Love Machine, or The Andromeda Strain-four top-selling books of that year-I opted for a less popular title, Morse Theory, by the American mathematician John Milnor. I was especially intrigued by Milnor’s section on topology and curvature, which explored the notion that local curvature has a great influence on geometry and topology. This is a theme I’ve pursued ever since, because the local curvature of a surface is determined by taking the derivatives of that surface, which is another way of saying it is based on analysis. Studying how that curvature influences geometry, therefore, goes to the heart of geometric analysis.

Having no office, I practically lived in Berkeley’s math library in those days. Rumor has it that the first thing I did upon arriving in the United States was to visit that library, rather than, say, explore San Francisco as others might have done. While I can’t remember exactly what I did, forty years hence, I have no reason to doubt the veracity of that rumor. I wandered around the library, as was my habit, reading every journal I could get my hands on. In the course of rummaging through the reference section during winter break, I came across a 1968 article by Milnor, whose book I was still reading. That article, in turn, led me to Preissman’s theorem. Having little else to do at the time (with most people away for the holiday), I tried to see if I could prove something related to Preissman’s theorem.

Chapter 4: P.80

From this sprang the work I’ve become most famous for. One might say it was my calling. No matter what our station, we’d all like to find our true calling in life-that special thing we were put on this earth to do. For an actor, it might be playing Stanley Kowalski in A Streetcar Named Desire. Or the lead role in Hamlet. For a firefighter, it could mean putting out a ten-alarm blaze. For a crime-fighter, it could mean capturing Public Enemy Number One. And in mathematics, it might come down to finding that one problem you’re destined to work on. Or maybe destiny has nothing to do with it. Maybe it’s just a question of finding a problem you can get lucky with.

To be perfectly honest, I never think about “destiny” when choosing a problem to work on, as I tend to be a bit more pragmatic. I try to seek out a new direction that could bring to light new mathematical problems, some of which might prove interesting in themselves. Or I might pick an existing problem that offers the hope that in the course of trying to understand it better, we will be led to a new horizon.

The Calabi conjecture, having been around a couple of decades, fell into the latter category. I latched on to this problem during my first year of graduate school, though sometimes it seemed as if the problem latched on to me. It caught my interest in a way that no other problem had before or has since, as I sensed that it could open a door to a new branch of mathematics. While the conjecture was vaguely related to Poincaré’s classic problem, it struck me as more general because if Calabi’s hunch were true, it would lead to a large class of mathematical surfaces and spaces that we didn’t know anything about-and perhaps a new understanding of space-time. For me the conjecture was almost inescapable: Just about every road I pursued in my early investigations of curvature led to it.

Chapter 5: P.104

A mathematical proof is a bit like climbing a mountain. The first stage, of course, is discovering a mountain worth climbing. Imagine a remote wilderness area yet to be explored. It takes some wit just to find such an area, let alone to know whether something worthwhile might be found there. The mountaineer then devises a strategy for getting to the top-a plan that appears flawless, at least on paper. After acquiring the necessary tools and equipment, as well as mastering the necessary skills, the adventurer mounts an ascent, only to be stopped by unexpected difficulties. But others follow in their predecessor’s footsteps, using the successful strategies, while also pursuing different avenues-thereby reaching new heights in the process. Finally someone comes along who not only has a good plan of attack that avoids the pitfalls of the past but also has the fortitude and determination to reach the summit, perhaps planting a flag there to mark his or her presence. The risks to life and limb are not so great in math, and the adventure may not be so apparent to the outsider. And at the end of a long proof, the scholar does not plant a flag. He or she types in a period. Or a footnote. Or a technical appendix. Nevertheless, in our field there are thrills as well as perils to be had in the pursuit, and success still rewards those of us who’ve gained new views into nature’s hidden recesses.