<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.0">Jekyll</generator><link href="http://blog.chordowl.net/feed.xml" rel="self" type="application/atom+xml" /><link href="http://blog.chordowl.net/" rel="alternate" type="text/html" /><updated>2021-01-27T11:27:18+00:00</updated><id>http://blog.chordowl.net/feed.xml</id><title type="html">mostly math?</title><subtitle>correctness not guaranteed</subtitle><entry><title type="html">conjuring primes out of the riemann zeta function</title><link href="http://blog.chordowl.net/math/2017/09/02/conjuring-primes-out-of-zeta/" rel="alternate" type="text/html" title="conjuring primes out of the riemann zeta function" /><published>2017-09-02T14:58:00+00:00</published><updated>2017-09-02T14:58:00+00:00</updated><id>http://blog.chordowl.net/math/2017/09/02/conjuring-primes-out-of-zeta</id><content type="html" xml:base="http://blog.chordowl.net/math/2017/09/02/conjuring-primes-out-of-zeta/">&lt;p&gt;Maybe you’ve heard of the &lt;strong&gt;Riemann zeta function&lt;/strong&gt; before:
it’s defined as&lt;/p&gt;

\[\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \sum_{n=1}^\infty n^{-s}.\]

&lt;p&gt;This function has deep connections in number theory, and it’s at the heart of the &lt;em&gt;Riemann hypothesis&lt;/em&gt;, one of the &lt;a href=&quot;https://en.wikipedia.org/wiki/Millennium_Prize_Problems&quot;&gt;Millennium Prize Problems&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.
Back in the 18th century (which was quite a bit before Riemann’s time), good old &lt;strong&gt;Leonhard Euler&lt;/strong&gt; proved a quite beautiful formula for it:&lt;/p&gt;

\[\zeta(s) = \prod_p \frac{1}{1 - p^{-s}} \quad (s &amp;gt; 1),\]

&lt;p&gt;where \(\prod_p\) iterates over all prime numbers \(p\).
At least for me, appreciating this beauty took some time working through the proof, but I hope I can accelerate this process for you!&lt;/p&gt;

&lt;!--more--&gt;
&lt;hr /&gt;

&lt;p&gt;So, \(\zeta(s)\) is just the value of a giant sum: for all natural numbers \(n\) starting from 1, we add up all the \(n^{-s}\).
Now, if we use \(n\)’s &lt;em&gt;prime factorization&lt;/em&gt;, we can express each \(n^{-s}\) as&lt;/p&gt;

\[\begin{align}
    n^{-s} &amp;amp;= (2^{a_2} \cdot 3^{a_3} \cdot 5^{a_5} \cdots P^{a_P})^{-s} \\\\
           &amp;amp;= 2^{-a_2s} \cdot 3^{-a_3s} \cdot 5^{-a_5s} \cdots P^{-a_Ps}
\end{align}\]

&lt;p&gt;where all the bases are prime numbers, \(P\) is the greatest prime factor of \(n\), and each \(a_p\) is a natural number.
Now, if we look at the whole sum \(\zeta(s) = \sum_{n=1}^\infty n^{-s}\) this way, we can factor out \(2^{-a_2s}\) for each \(n^{-s}\), and we get&lt;/p&gt;

\[\begin{align}
    \sum_{n=1}^\infty n^{-s} &amp;amp;= 1^{-s} + 2^{-s} + 3^{-s} + 4^{-s} + 5^{-s} + \cdots \\\\
                             &amp;amp;= 1 + 2^{-s} + 3^{-s} + 2^{-2s} + 5^{-s} + 2^{-s} \cdot 3^{-s} + \cdots \\\\
                             &amp;amp;= (1 + 3^{-s} + \cdots) + 2^{-s} (1 + 3^{-s} + \cdots) + \cdots \\\\
                             &amp;amp;= (2^{-0s} + 2^{-1s} + \cdots)(1 + 3^{-s} + 5^{-s} + \cdots) \\\\
                             &amp;amp;= \left( \sum_{n=0}^\infty 2^{-ns} \right) (1 + 3^{-s} + 5^{-s} + 7^{-s} + \cdots).
\end{align}\]

&lt;p&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; Now, there are two important things to note here:
When you look at the &lt;em&gt;first&lt;/em&gt; factor, it’s similar to &lt;em&gt;the sum of all powers of two&lt;/em&gt;, but each exponent is multiplied by \(-s\).
The &lt;em&gt;second&lt;/em&gt; factor is the sum of \(n^{-s}\) over &lt;em&gt;all numbers \(n\) that aren’t divisible by 2&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Now, we can factor out all powers of 3 from this second factor, and then all powers of 5 from the remaining factor, and so on, infinitely often, until we’ve done this factoring for all the primes!&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;
We obtain&lt;/p&gt;

\[\sum_{n=1}^\infty n^{-s} = \prod_p \sum_{n=0}^\infty p^{-ns},\]

&lt;p&gt;which is the product of, for each prime \(p\), the sum of powers \(p^{-ns}\).&lt;/p&gt;

&lt;p&gt;If we imagine setting \(s = -1\), this simplifies to&lt;/p&gt;

\[\begin{align}
    \sum_{n=1}^\infty n &amp;amp;= \prod_p \sum_{n=0}^\infty p^n, \\\\
    1 + 2 + 3 + 4 + \cdots &amp;amp;= \prod_p \left( 1 + p + p^2 + p^3 + \cdots \right),
\end{align}\]

&lt;p&gt;which in one way doesn’t really make sense, since the sum of all natural numbers doesn’t converge, but in another way can be grasped quite intuitively:
if we were to evaluate the product on the right side, we basically pick a power of each prime (\(1 = p^0\) included), and then multiply them together.
Each combination occurs only once, and by the fundamental theorem of arithmetic, each natural number is &lt;em&gt;uniquely factored&lt;/em&gt; by one of these combinations:
for each term on the left, we can pick exactly one combination of factors on the right;
for each combination of factors we pick on the right, there’s exactly one term on the left!&lt;/p&gt;
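&lt;p&gt;To see this correspondence in action, here’s a small Python sketch (my own illustration; all the helper names are made up for this post) that maps each \(n\) to its combination of prime-power factors and checks that the mapping really is one-to-one:&lt;/p&gt;

```python
from collections import Counter

def prime_factorization(n):
    """Return the exponent of each prime factor of n, e.g. 12 -> {2: 2, 3: 1}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

# each n picks exactly one combination of prime powers,
# and no two n pick the same combination
seen = {}
for n in range(1, 10001):
    choice = tuple(sorted(prime_factorization(n).items()))
    assert choice not in seen            # uniqueness of the factorization
    seen[choice] = n
    product = 1
    for p, a in choice:
        product *= p ** a
    assert product == n                  # the combination multiplies back to n
```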

&lt;p&gt;In brief, the formula we’ve shown so far is really just a more complicated way of expressing the fundamental theorem of arithmetic (one might see it as an &lt;em&gt;analytical formulation&lt;/em&gt;)!&lt;/p&gt;

&lt;p&gt;To get from here to Euler’s formula, we first observe that \(\sum_{n=0}^\infty p^{-ns} = \sum_{n=0}^\infty (p^{-s})^n\) is a &lt;em&gt;geometric series&lt;/em&gt;.
Since \(s &amp;gt; 1\) and all primes are at least 2, we have \(\lvert p^{-s} \rvert &amp;lt; 1\), so we can apply the formula for geometric series&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; to get&lt;/p&gt;

\[\sum_{n=0}^\infty p^{-ns} = \frac{1}{1 - p^{-s}}.\]

&lt;p&gt;Putting that into the formula from above, we get what Euler proved:&lt;/p&gt;

\[\zeta(s) = \sum_{n=1}^\infty n^{-s} = \prod_p \sum_{n=0}^\infty p^{-ns} = \prod_p \frac{1}{1 - p^{-s}}.\]
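&lt;p&gt;Euler’s formula is also easy to poke at numerically. Here’s a quick Python sanity check (my own sketch; the helper names are invented) at \(s = 2\), where it’s known that \(\zeta(2) = \frac{\pi^2}{6}\): a truncated version of the sum and a truncated version of the product should both land close to that value.&lt;/p&gt;

```python
import math

def zeta_partial(s, terms):
    """Truncated zeta sum: 1^-s + 2^-s + ... + terms^-s."""
    return sum(n ** -s for n in range(1, terms + 1))

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p in range(2, limit + 1) if sieve[p]]

def euler_product(s, prime_limit):
    """Truncated Euler product over all primes up to prime_limit."""
    result = 1.0
    for p in primes_up_to(prime_limit):
        result *= 1.0 / (1.0 - p ** -s)
    return result

s = 2
print(zeta_partial(s, 10**6))    # truncated sum, close to 1.6449...
print(euler_product(s, 10**4))   # truncated product, also close to 1.6449...
print(math.pi ** 2 / 6)          # the known value of zeta(2)
```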

&lt;h2 id=&quot;not-so-fast-here-be-limits&quot;&gt;not so fast (here be limits)&lt;/h2&gt;

&lt;p&gt;Now, so far we’ve been playing around with infinities fairly lightly, so maybe you’re not convinced that what we’ve purported to show has &lt;em&gt;actually&lt;/em&gt; been shown.
We can fix this though:
instead of multiplying the series&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; \(\sum_{n=0}^\infty p^{-ns}\) for all \(p\), we only multiply the series&lt;sup id=&quot;fnref:40:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; for \(p \leq P\) for some arbitrary \(P\).
What we get is the sum of \(n^{-s}\) for all \(n\) that don’t have prime factors greater than \(P\), which we’ll write as&lt;/p&gt;

\[\prod_{p \leq P} \sum_{n=0}^\infty p^{-ns} = \sum_{n \in (P)} n^{-s}.\]

&lt;p&gt;Those \(n \in (P)\) include all numbers up to \(P\), so that&lt;/p&gt;

\[0 &amp;lt; \sum_{n=1}^\infty n^{-s} - \sum_{n \in (P)} n^{-s} &amp;lt; \sum_{n = P + 1}^\infty n^{-s}.\]

&lt;p&gt;Now, as \(P \rightarrow \infty\), that last sum approaches 0.
We conclude:&lt;/p&gt;

\[\begin{align}
    \sum_{n=1}^\infty n^{-s} &amp;amp;= \lim_{P \rightarrow \infty} \sum_{n \in (P)} n^{-s} \\\\
                             &amp;amp;= \lim_{P \rightarrow \infty} \prod_{p \leq P} \sum_{n=0}^\infty p^{-ns} \\\\
                             &amp;amp;= \prod_p \sum_{n=0}^\infty p^{-ns}
\end{align}\]

&lt;p&gt;and Euler’s formula follows as before.&lt;/p&gt;
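&lt;p&gt;The truncated product \(\prod_{p \leq P} \sum_{n=0}^\infty p^{-ns}\) and the sum over \(n \in (P)\) can also be compared directly. Here’s a small Python sketch (mine; the helpers are invented for illustration) with \(s = 2\) and \(P = 7\), enumerating the \(n \in (P)\) up to a big cutoff:&lt;/p&gt;

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p in range(2, limit + 1) if sieve[p]]

def in_P(n, primes):
    """True if n has no prime factor outside the given list."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

s, P, cutoff = 2.0, 7, 10**6
primes = primes_up_to(P)                  # [2, 3, 5, 7]

product = 1.0
for p in primes:
    product *= 1.0 / (1.0 - p ** -s)      # closed form of each geometric series

smooth_sum = sum(n ** -s for n in range(1, cutoff + 1) if in_P(n, primes))

print(product)     # the truncated product over p <= 7
print(smooth_sum)  # the sum over n in (P), cut off at 10^6; nearly identical
```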

&lt;hr /&gt;
&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We won’t go into these deeper connections here: as it’s written, the definition doesn’t really make sense for \(s \leq 1\) (since the series doesn’t converge), so its so-called &lt;em&gt;analytic continuation in the complex plane&lt;/em&gt; is used. We don’t need to worry about that for what I wanna show here though (also, I don’t really understand it). If you’re interested, you should check out &lt;a href=&quot;https://www.youtube.com/watch?v=sD0NjbwqlYw&quot;&gt;3Blue1Brown’s excellent video about this&lt;/a&gt;, if you haven’t seen it already. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The factoring might become clearer if you try to factor out the powers of two for more numbers. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We’re playing a bit fast and loose here, but we’ll make this more precise later. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This formula states that \(\sum_{n=0}^\infty x^n = \frac{1}{1 - x}\) whenever \(\lvert x \rvert &amp;lt; 1\). &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Plural! &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:40:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name></name></author><category term="math" /><category term="math" /><category term="number-theory" /><category term="riemann-zeta-function" /><summary type="html">Maybe you’ve heard of the Riemann zeta function before: it’s defined as \[\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \sum_{n=1}^\infty n^{-s}.\] This function has deep connections in number theory, and it’s at the heart of the Riemann zeta hypothesis, one of the Millenium Prize Problems1. Back in the 18th century (which was quite a bit before Riemann’s time), good old Leonhard Euler proved a quite beautiful formula for it: \[\zeta(s) = \prod_p \frac{1}{1 - p^{-s}} \quad (s &amp;gt; 1),\] where \(\prod_p\) iterates over all prime numbers \(p\). At least for me, appreciating this beauty took some time working through the proof, but I hope I can accelerate this process for you! We won’t go into these deeper connections here: as it’s written, the definition doesn’t really make sense for \(s \leq 1\) (since the series doesn’t converge), so it’s so-called analytic continuation in the complex plane is used. We don’t need to worry about that for what I wanna show here though (also, I don’t really understand it). If you’re interested, you should check out 3Blue1Brown’s excellent video about this, if you haven’t seen it already. 
&amp;#8617;</summary></entry><entry><title type="html">proving terrible lower bounds for the prime-counting function</title><link href="http://blog.chordowl.net/math/2017/08/31/terrible-bounds-for-prime-counting-function/" rel="alternate" type="text/html" title="proving terrible lower bounds for the prime-counting function" /><published>2017-08-31T21:40:00+00:00</published><updated>2017-08-31T21:40:00+00:00</updated><id>http://blog.chordowl.net/math/2017/08/31/terrible-bounds-for-prime-counting-function</id><content type="html" xml:base="http://blog.chordowl.net/math/2017/08/31/terrible-bounds-for-prime-counting-function/">&lt;p&gt;Even if you don’t know it by its name, you’re probably familiar with &lt;strong&gt;Euclid’s theorem&lt;/strong&gt; (and you’ve maybe heard its proof too at some point):&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;There are infinitely many prime numbers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, fewer people are familiar with the remarkable &lt;strong&gt;prime number theorem&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Let \(\pi(N)\) be the number of primes \(p\) with \(p \leq N\).
Then \(\pi(N)\) approaches \(\frac{N}{\log{N}}\) as \(N \rightarrow \infty\)&lt;sup id=&quot;fnref:00&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:00&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:05&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:05&quot; class=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This tells us a lot more about how many primes there are: not only are there infinitely many primes, they are actually not too rare.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;
While this is significantly harder to prove (it took until 1896), we &lt;em&gt;can&lt;/em&gt; however get some (much worse) &lt;strong&gt;lower bounds for the number of primes&lt;/strong&gt; up to \(N\) (in other words: upper bounds on the \(n\)th prime) without too much hassle.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;!--more--&gt;

&lt;h2 id=&quot;baby-steps-with-euclid&quot;&gt;baby steps with euclid&lt;/h2&gt;

&lt;p&gt;But first, let’s recap Euclid’s proof of his theorem:
If the primes were finite, you could list them all as \(2, 3, 5, \ldots, p\), where \(p\) is the greatest prime.
Now, let \(q = 2 \cdot 3 \cdot 5 \cdots p + 1\).
We can see that \(q\) is not divisible by any prime number in our list, so either \(q\) is prime, or there’s another prime between \(p\) and \(q\): a contradiction.
In terms of \(\pi(N)\), this shows that \(\pi(N) \rightarrow \infty\).&lt;/p&gt;

&lt;p&gt;There’s actually a lower bound hidden in Euclid’s argument:
if \(p_n\) is the \(n\)th prime, then the next prime \(p_{n+1}\) is less than or equal to \(q = p_1 p_2 \cdots p_n + 1\).
Since each of the \(n\) factors in that product is at most \(p_n\), we can tell that \[p_{n+1} \leq p_n^n + 1,\] and that equality only holds for \(n = 1\) (so that \(p_2 = p_1^1 + 1 = 2^1 + 1 = 3\)).
So, we get&lt;/p&gt;

\[\begin{align}
p_{n+1} &amp;amp;\leq p_n^n \leq (p_{n-1}^{n-1})^n = p_{n-1}^{(n-1)n} \leq p_{n-2}^{(n-2)(n-1)n} \\\\
        &amp;amp;\leq \ldots \leq p_4^{4 \cdot 5 \cdots n} = p_4^{n!/3!} &amp;lt; (2^{3!})^{n!/3!} = 2^{n!}
\end{align}\]

&lt;p&gt;for \(n &amp;gt; 3\), where we stop the chain at \(p_4 = 7 &amp;lt; 2^6 = 2^{3!}\) (we can’t descend all the way down to \(p_1\), since \(p_2 = 3 &amp;gt; p_1^1 = 2\)), and thus \[p_n &amp;lt; 2^{(n-1)!} \quad (n &amp;gt; 3).\]
As you might be able to tell, this lower bound is &lt;em&gt;pretty terrible&lt;/em&gt;: while \(p_7 = 17\), we have \(2^{6!} \approx 5.52 \cdot 10^{216}\) (and it only gets worse…).&lt;/p&gt;

&lt;p&gt;We can actually do a little better yet: we can prove \(p_n &amp;lt; 2^{2^n}\) by induction:
For \(n = 1\), we have \(p_n = p_1 = 2 &amp;lt; 2^{2^1} = 2^{2^n}\).
Now, let’s assume \(p_n &amp;lt; 2^{2^n}\) for all \(n \leq N\).
By Euclid’s argument, we get&lt;/p&gt;

\[\begin{align}
p_{N+1} &amp;amp;\leq p_1 p_2 p_3 \cdots p_N + 1 \\\\
        &amp;amp;&amp;lt; 2^2 \cdot 2^4 \cdot 2^8 \cdots 2^{2^N} + 1 = 2^{2 + 4 + \cdots + 2^N} + 1 \\\\
        &amp;amp;= 2^{2^{N+1} - 2} + 1 &amp;lt; 2^{2^{N+1}}.
\end{align}\]

&lt;p&gt;So, we can conclude that \[p_n &amp;lt; 2^{2^n}.\]
While this bound is still pretty terrible, it is &lt;em&gt;miles better&lt;/em&gt;: while before we had \(p_7 \leq 2^{6!} \approx 5.52 \cdot 10^{216}\), we now get \(p_7 &amp;lt; 2^{2^7} = 2^{128} \approx 3.40 \cdot 10^{38}\).&lt;/p&gt;
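&lt;p&gt;This bound is easy to check by machine for the first few primes; a tiny Python sketch (my own, names invented):&lt;/p&gt;

```python
def first_primes(count):
    """Return the first `count` primes, by trial division."""
    primes = []
    candidate = 2
    while len(primes) < count:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# p_n < 2^(2^n) for the first ten primes
for n, p in enumerate(first_primes(10), start=1):
    assert p < 2 ** (2 ** n)

print(first_primes(7)[-1])  # p_7 = 17, a long way below 2^128
```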

&lt;p&gt;If we wanna think in terms of \(\pi(N)\), we have \(\pi(2^{2^n}) \geq n\), which after solving \(N = 2^{2^n}\) for \(n\) gets us \[\pi(N) &amp;gt; \log_2\log_2{N}.\]&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;
To compare again: while \(\pi(10^{10}) = 455,052,511\), this gets us \(\pi(10^{10}) &amp;gt; \log_2\log_2{10^{10}} \approx 5.05\).
&lt;em&gt;Getting there!&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;one-1-upping-ourselves-using-fermat-numbers&quot;&gt;one-(1)-upping ourselves: using fermat numbers&lt;/h2&gt;

&lt;p&gt;Since we’re spending time with terrible bounds anyway, we might as well just try to add one (\(1\)) to our last lower bound for \(\pi(n)\).&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;For that, we need Fermat numbers: the \(n\)th Fermat number is defined as \(F_n = 2^{2^n} + 1\).
Now, we need a small theorem which &lt;a href=&quot;https://en.wikipedia.org/wiki/Fermat_number#Basic_properties&quot;&gt;Wikipedia assures me&lt;/a&gt; is called &lt;em&gt;Goldbach’s theorem&lt;/em&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;No two Fermat numbers have a common divisor greater than 1.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The proof isn’t too hard: let’s say we have two Fermat numbers, \(F_n\) and \(F_{n+k}\), that have a common divisor \(m\).
Let’s first check if \(F_n\) divides \(F_{n+k} - 2\) (this helps us, I swear!):&lt;/p&gt;

\[\begin{align}
    \frac{F_{n+k} - 2}{F_n} &amp;amp;= \frac{2^{2^{n+k}} + 1 - 2}{2^{2^n} + 1} = \frac{(2^{2^n})^{2^k} - 1}{2^{2^n} + 1} \\\\
                            &amp;amp;= \frac{x^{2^k} - 1}{x  + 1} \quad (\mathrm{with}\ x = 2^{2^n}) \\\\
                            &amp;amp;= x^{2^k - 1} - x^{2^k - 2} + \cdots - 1.
\end{align}\]

&lt;p&gt;Now, since \(m \mid F_n\) and \(F_n \mid F_{n+k} - 2\), we also get \(m \mid F_{n+k} - 2\), and since we also have \(m \mid F_{n+k}\), we get \(m \mid 2\)!
But from their definition, we can see that all Fermat numbers are odd, so \(m\) must be odd, too:
so, \(m\) must be 1, and we have proved what we set out to prove.&lt;/p&gt;
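&lt;p&gt;Both Goldbach’s theorem and the divisibility step \(F_n \mid F_{n+k} - 2\) from the proof are easy to spot-check in Python (my own sketch):&lt;/p&gt;

```python
from math import gcd

def fermat(n):
    """The nth Fermat number, 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

# Goldbach's theorem: any two distinct Fermat numbers are coprime
for i in range(8):
    for j in range(i + 1, 8):
        assert gcd(fermat(i), fermat(j)) == 1

# the helper step of the proof: F_n divides F_(n+k) - 2
for n in range(6):
    for k in range(1, 6):
        assert (fermat(n + k) - 2) % fermat(n) == 0

print(fermat(0), fermat(1), fermat(2), fermat(3))  # 3 5 17 257
```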

&lt;p&gt;Now, since no two Fermat numbers share a common factor, there must be at least \(n\) odd prime numbers less than or equal to \(F_n\).
With our one even prime, 2, we get \[p_n \leq F_{n-1} = 2^{2^{n-1}} + 1,\] which is a square root better than our previous upper bound:&lt;sup id=&quot;fnref:50&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;
when we had \(p_7 \lessapprox 3.40 \cdot 10^{38}\) before, we now have \(p_7 \lessapprox 1.84 \cdot 10^{19}\).
Still pretty terrible!&lt;/p&gt;

&lt;p&gt;In terms of \(\pi(N)\), this didn’t help much, though: solving \(N = 2^{2^{n-1}} + 1\) for \(n\) gets us \[\pi(N) \gtrapprox \log_2\log_2{N} + 1,\] which is the added one (1) I promised earlier&lt;sup id=&quot;fnref:60&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:60&quot; class=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&quot;from-loglog-to-log-erdős-to-the-rescue&quot;&gt;from loglog to log: erdős to the rescue!&lt;/h2&gt;

&lt;p&gt;As the heading suggests, we’re able to drop from \(\log\log\) to \(\log\) with an argument due to Erdős.&lt;sup id=&quot;fnref:70&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:70&quot; class=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;We let \(N_j(x)\) be the function counting all \(n \leq x\) that aren’t divisible by a prime greater than \(p_j\).
We can express any such \(n\) as \(n_1^2 m\) where \(m\) is a &lt;em&gt;square-free number&lt;/em&gt;, which means that it’s not divisible by any perfect square other than 1, so that we can write \[m = 2^{b_1} \cdot 3^{b_2} \cdots p_j^{b_j}\] where all the \(b\)s are either 0 or 1.&lt;sup id=&quot;fnref:80&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:80&quot; class=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;
Because each number only has one prime factorization, there are only \(2^j\) possible \(m\)!
Now, since \(n_1 \leq \sqrt{n}\) and \(n \leq x\), we have \(n_1 \leq \sqrt{n} \leq \sqrt{x}\), so there are only up to \(\sqrt{x}\) possible values of \(n_1\)!&lt;/p&gt;

&lt;p&gt;We can thus conclude that there can only be up to \(2^j \sqrt{x}\) such \(n\), so that \[N_j(x) \leq 2^j \sqrt{x}.\]
Now comes the trick: we let \(j = \pi(x)\), so that \(p_{j+1} &amp;gt; x\) and thus \[N_j(x) = x\] by definition.
This gets us \[x = N_j(x) \leq 2^{\pi(x)} \sqrt{x}.\]
Now we solve for \(\pi(x)\):&lt;/p&gt;

\[\begin{align}
    x &amp;amp;\leq 2^{\pi(x)} \sqrt{x} \\\\
    \sqrt{x} &amp;amp;\leq 2^{\pi(x)} \\\\
    \frac{\log{x}}{2} &amp;amp;\leq \pi(x)\log{2} \\\\
    \pi(x) &amp;amp;\geq \frac{\log{x}}{2\log{2}}.
\end{align}\]
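&lt;p&gt;The counting inequality \(N_j(x) \leq 2^j \sqrt{x}\) can be brute-forced in Python (my sketch; \(N_j\) is implemented naively):&lt;/p&gt;

```python
def count_N_j(x, primes_j):
    """N_j(x): count the n <= x with no prime factor outside primes_j."""
    count = 0
    for n in range(1, x + 1):
        m = n
        for p in primes_j:
            while m % p == 0:
                m //= p
        if m == 1:
            count += 1
    return count

# with j = 4 (primes 2, 3, 5, 7), check N_j(x) <= 2^j * sqrt(x)
for x in [100, 1000, 10000]:
    assert count_N_j(x, [2, 3, 5, 7]) <= 2 ** 4 * x ** 0.5

print(count_N_j(50, [2, 3]))  # 15 numbers up to 50 built only from 2s and 3s
```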

&lt;p&gt;When we were at \(\pi(10^{10}) &amp;gt; \log_2\log_2{10^{10}} + 1 \approx 6.05\), we now have a whopping bound of \(\pi(10^{10}) \gtrapprox 16.61\)! Still ways to go to \(455,052,511\) though…&lt;/p&gt;

&lt;p&gt;Substituting \(x = p_n\) so that \(\pi(x) = n\), we get \[p_n \leq 4^n\] (I’ll spare you the details), which improves our previous upper bound \(p_7 \leq 1.84 \cdot 10^{19}\) to \(p_7 \leq 16384\), which is a number you can actually hold in your head!&lt;/p&gt;
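&lt;p&gt;And to close the loop, here’s a quick Python check (mine; the sieve helper is invented) of both \(\pi(x) \geq \frac{\log{x}}{2\log{2}}\) and \(p_n \leq 4^n\) against an honest sieve:&lt;/p&gt;

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p in range(2, limit + 1) if sieve[p]]

primes = primes_up_to(10**6)

# the Erdős bound pi(x) >= log(x) / (2 log 2), at a few sample points
for x in [10, 100, 10**4, 10**6]:
    pi_x = sum(1 for p in primes if p <= x)
    assert pi_x >= math.log(x) / (2 * math.log(2))

# p_n <= 4^n, checked for the first twenty primes
for n, p in enumerate(primes[:20], start=1):
    assert p <= 4 ** n

print(len(primes))  # pi(10^6) = 78498
```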

&lt;h2 id=&quot;conclusions&quot;&gt;conclusions&lt;/h2&gt;

&lt;p&gt;Well, while our bounds are still quite terrible, they at least have improved significantly from the ones we got from Euclid’s argument. Technology!&lt;/p&gt;

&lt;p&gt;Jokes aside, I hope you learned something, and even if you didn’t, I hope you still had fun! Until next time~~&lt;/p&gt;

&lt;hr /&gt;
&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:00&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Unless indicated otherwise, all \(\log\)s are always natural logarithms. &lt;a href=&quot;#fnref:00&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:05&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is also often written as \(\pi(N) \sim \frac{N}{\log{N}}\). &lt;a href=&quot;#fnref:05&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There is a much better approximation of the number of primes up to \(N\): the so-called &lt;em&gt;logarithmic integral&lt;/em&gt; \[\operatorname{Li}(x) = \int_2^x \frac{dt}{\log(t)}.\] As far as I understand and/or remember, assuming the Riemann hypothesis, it has been proven that the error of this approximation is as if the primes were distributed truly pseudorandomly! (I could be wrong though!) &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;So what we’re basically doing is improving Euclid’s theorem over and over! &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;As far as I can tell, number theorists don’t really like having non-natural logarithms, but we can fix that: for all \(N \geq 2\), we have \(\log_2\log_2{N} &amp;gt; \log\log{N}\), and these two terms only differ by a constant factor, so the bound \(\pi(N) &amp;gt; \log\log{N}\) is almost as good. &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This argument is due to Pólya, as far as I know. &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:50&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Since \(\sqrt{2^{2^n}} = 2^{\frac{2^n}{2}} = 2^{2^{n-1}}\). &lt;a href=&quot;#fnref:50&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:60&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The \(\gtrapprox\) is due to me not wanting to be bothered with the \(1\) in \(2^{2^{n-1}} + 1\). &lt;a href=&quot;#fnref:60&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:70&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;What’s a math post without Erdős, anyway? &lt;a href=&quot;#fnref:70&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:80&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There are no prime factors bigger than \(p_j\) because we only look at \(n\) that are not divisible by any larger primes. &lt;a href=&quot;#fnref:80&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;</content><author><name></name></author><category term="math" /><category term="math" /><category term="number-theory" /><summary type="html">Even if you don’t know it by its name, you’re probably familiar with Euclid’s theorem (and you’ve maybe heard its proof too at some point): There are infinitely many prime numbers. Now, less people are familiar with the remarkable prime number theorem: Let \(\pi(N)\) be the number of primes \(p\) with \(p \leq N\). Then \(\pi(N)\) approaches \(\frac{N}{\log{N}}\) as \(N \rightarrow \infty\)12. This tells us a lot more about how many primes there are: not only are there infinitely many primes, they are actually not too rare.3 While this is significantly harder to prove (it took until 1896), we can however get some (much worse) lower bounds for the number of primes up to \(N\) (in other words: upper bounds on the \(n\)th prime) without too much hassle.4 Unless indicated otherwise, all \(\log\)s are always natural logarithms. &amp;#8617; This is also often written as \(\pi(N) \sim \frac{N}{\log{N}}\). &amp;#8617; There is a much better approximation of the number of primes up til \(N\): the so-called logrithmic integral \[\operatorname{Li}(x) = \int_2^x \frac{dt}{\log(t)}.\] As far as I understand and/or remember, assuming the Riemann hypothesis, it has been proven that the error of this approximation is as if the primes were distributed truly pseudorandomly! (I could be wrong though!) &amp;#8617; So what we’re basically doing is improving Euclid’s theorem over and over! &amp;#8617;</summary></entry></feed>