Euclid’s Algorithm and Continued Fractions
Algorithmic-Number-theory
Number-theory
I am following Professor Swastik’s Algorithmic Number Theory course notes, offered at Rutgers in 2014.
Euclid’s Algorithm and Continued Fractions
We start with Lectures 1 and 2!
Greatest common divisor and recognizing rational numbers
Complexity of elementary arithmetic operations
Note: We know the computer processes and stores information in binary digits, so our input for any natural number $(a)_{10}$ will be given in binary, and hence will have $\lfloor\log_2 a\rfloor+1$ many digits.
- Addition/Subtraction The naive algorithm (simply add digit-wise, with carries) takes $O(\log a +\log b)$ time.
- Multiplication The naive algorithm (multiplying the way we did in high school) takes $O(\log a\cdot\log b)$ time.
- Factoring We essentially try to find the prime factors of the number by trial division. If the number is an $n$-bit number, then it can be as large as $2^n$. The number of trial divisions needed is bounded above by the number of primes up to the square root of the number, which is counted by the $\pi$ function. By the prime number theorem, $\pi(x)\approx \frac{x}{\log_e x}$, so this is around $\frac{2^{n/2}}{(n/2)\log_e 2}$ divisions. Accounting for the cost of each division, we get $O\left(n^2\cdot \frac{2^{n/2}}{(n/2)\log_e 2}\right)$. In any case, this is exponential in $n$ (the number of digits in base $2$).
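To make the trial-division cost concrete, here is a minimal Python sketch (the function name `trial_division_factor` is my own choice). The outer loop runs up to $\sqrt{n}$, i.e. roughly $2^{n/2}$ iterations for an $n$-bit input, which is exactly the exponential behavior described above:

```python
def trial_division_factor(n):
    """Factor n by trial division up to sqrt(n).

    The number of candidate divisors tried is on the order of
    sqrt(n) = 2^(bits/2), i.e. exponential in the bit length of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # peel off every copy of the prime d
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```

Trying composite divisors $d$ as well as primes is wasteful but harmless: by the time the loop reaches a composite $d$, all of its prime factors have already been divided out.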
GCD
We will give a polynomial-time algorithm for computing the GCD of two positive integers: given $a,b$, the algorithm will have time complexity $O(\mathrm{poly}(\log a,\log b))$.
Euclid’s GCD algorithm
We begin with the following claim: if $a=qb+r$ with $0\le r<b$, then $\gcd(a,b)=\gcd(b,r)$.
So we get the following algorithm to compute $\gcd(a,b)$:
- Find $q,r$ such that $a=qb+r,0\le r< b$
- If $r=0$ then $b$ is the answer
- Else compute $\gcd(b,r)$
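The three steps above translate directly into code; here is a minimal sketch (iterative rather than recursive, which is equivalent, and the function name is my own):

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, r),
    where a = qb + r and 0 <= r < b, until the remainder is 0."""
    while b != 0:
        a, b = b, a % b   # a % b is the remainder r
    return a
```

For example, `euclid_gcd(252, 198)` walks through $252 = 1\cdot 198 + 54$, $198 = 3\cdot 54 + 36$, $54 = 1\cdot 36 + 18$, $36 = 2\cdot 18 + 0$ and returns $18$.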
Runtime of the algorithm
We begin with the following lemma (Lamé): if Euclid’s algorithm on $a>b$ takes $n$ division steps, then $a\ge F_{n+2}$ and $b\ge F_{n+1}$, where $F_n$ is the $n$-th Fibonacci number.
Note that $F_n=\Theta\left(\left(\frac{1+\sqrt{5}}{2}\right)^n\right)$. So, if $a,b$ are at most $n$ bits long, then $\gcd(a,b)$ will halt in $O(n)$ steps (the number of steps is logarithmic in the inputs). Accounting for the divisions, we get $O(n^3)$.
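A small experiment illustrates the Fibonacci worst case (the names `gcd_steps` and `fibs` are my own): on consecutive Fibonacci numbers every quotient is $1$, so the remainders shrink as slowly as possible and the step count is maximal for their size.

```python
def gcd_steps(a, b):
    """Count the division steps Euclid's algorithm performs on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers: F_{m+1} = 1 * F_m + F_{m-1},
# so every quotient along the way is 1 (except the last).
fibs = [1, 1]
for _ in range(20):
    fibs.append(fibs[-1] + fibs[-2])
```

Here `gcd_steps(fibs[k], fibs[k-1])` comes out to $k-1$, linear in the index but logarithmic in the size of the numbers themselves, matching the lemma.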
Finding a rational approximation
So, essentially, we are trying to find the fraction closest to $\alpha$ among those with bounded denominator.
Note that we could simply brute-force over all denominators up to the bound, but that takes time polynomial in the bound itself, i.e., exponential in its bit length.
Continued Fractions
We consider Euclid’s GCD algorithm on $a>b$. Then: \(a=bq_0+r_0\)
\[b=r_0q_1+r_1\] \[r_0=r_1q_2+r_2\] \[\vdots\] \[r_{n-2}=r_{n-1}q_n+0\]So \(\frac{a}{b}=q_0+\frac{r_0}{b}\)
\[\frac{b}{r_0}=q_1+\frac{r_1}{r_0}\] \[\vdots\] \[\frac{r_{n-2}}{r_{n-1}}=q_n\]We can also represent it this way:
\[\frac{a}{b}= q_0+\cfrac{1}{q_1+\cfrac{1}{q_2+\cfrac{1}{q_3+\dots \frac{1}{q_n}}}}\]Note that this fraction terminates, since we are dealing with rational numbers. In fact, we can do the same for any $\alpha\in \Bbb{R}^{+}$.
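Reading the quotients $q_0, q_1, \dots, q_n$ off Euclid’s algorithm can be sketched as follows (the function name is my own):

```python
def continued_fraction(a, b):
    """Partial quotients [q0, q1, ..., qn] of a/b, read off the
    successive quotients of Euclid's algorithm on (a, b)."""
    quotients = []
    while b != 0:
        quotients.append(a // b)   # the quotient q_i of this step
        a, b = b, a % b            # descend to (b, r_i)
    return quotients
```

For example, `continued_fraction(45, 16)` returns `[2, 1, 4, 3]`, i.e. $\frac{45}{16}=2+\cfrac{1}{1+\cfrac{1}{4+\frac{1}{3}}}$.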
Given $\alpha\in \Bbb{R}^{+}$, write $\alpha=[\alpha]+\frac{1}{\alpha_1}$, $\alpha_1=[\alpha_1]+\frac{1}{\alpha_2}$, $\alpha_2=[\alpha_2]+\frac{1}{\alpha_3}$, and so on (stopping whenever some $\alpha_i$ is an integer).
Notation: $[\alpha_i]=a_i,[\alpha]=a_0$.
We get that \(\alpha= a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\dots \vphantom{\cfrac{1}{1}}}}}\)
Note that, the fraction terminates $\iff \alpha$ is rational.
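For a general real $\alpha$ we can only compute this numerically; the following is a floating-point sketch (name and interface are my own), and because of rounding error only the first several partial quotients are trustworthy:

```python
import math

def cf_real(alpha, terms):
    """First `terms` partial quotients a_0, a_1, ... of a real alpha.
    Floating-point arithmetic: later terms degrade in accuracy."""
    a = []
    for _ in range(terms):
        a0 = math.floor(alpha)
        a.append(a0)
        frac = alpha - a0
        if frac == 0:        # alpha_i was an integer: expansion terminates
            break
        alpha = 1 / frac     # alpha_{i+1} = 1 / (alpha_i - [alpha_i])
    return a
```

For instance, $\sqrt{2}$ gives the (non-terminating) expansion $[1; 2, 2, 2, \dots]$, consistent with $\sqrt 2$ being irrational.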
We define $\frac{p_0}{q_0}=a_0, \frac{p_1}{q_1}=a_0+\frac{1}{a_1},\dots $
From the recurrences $p_n=a_np_{n-1}+p_{n-2}$ and $q_n=a_nq_{n-1}+q_{n-2}$, one checks by induction that $p_nq_{n-1}-p_{n-1}q_n=(-1)^{n-1}$. This implies $p_n,p_{n-1}$ are relatively prime; similarly $q_n,q_{n-1}$.
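The convergents can be computed from the partial quotients via the standard recurrences $p_n = a_n p_{n-1} + p_{n-2}$, $q_n = a_n q_{n-1} + q_{n-2}$ (with $p_{-1}=1$, $q_{-1}=0$); `convergents` below is my own sketch:

```python
def convergents(quotients):
    """All convergents (p_n, q_n) of the continued fraction
    [a_0; a_1, a_2, ...] given by `quotients`."""
    p_prev, p = 1, quotients[0]        # p_{-1} = 1, p_0 = a_0
    q_prev, q = 0, 1                   # q_{-1} = 0, q_0 = 1
    convs = [(p, q)]
    for a in quotients[1:]:
        p_prev, p = p, a * p + p_prev  # p_n = a_n p_{n-1} + p_{n-2}
        q_prev, q = q, a * q + q_prev  # q_n = a_n q_{n-1} + q_{n-2}
        convs.append((p, q))
    return convs
```

On `[2, 1, 4, 3]` this yields $\frac{2}{1}, \frac{3}{1}, \frac{14}{5}, \frac{45}{16}$, and one can check the identity $p_nq_{n-1}-p_{n-1}q_n=\pm 1$ on consecutive pairs.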
\(\frac{a}{b}= a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\dots \frac{1}{a_n+\frac{1}{z}}}}}\) where $z$ is a variable.
Note that the above is of the form \(\frac{rz+s}{tz+u},\ r,s,t,u\in \Bbb{Z}\) (in fact $r=p_n$, $s=p_{n-1}$, $t=q_n$, $u=q_{n-1}$).
How good a rational approximation can one get to a given real number $\alpha$? One trivial rational approximation to $\alpha$ is a fraction $\frac{a}{q}$ with \(\left|\alpha - \frac{a}{q}\right| \leq \frac{1}{2q}:\) for any $q$ we can simply choose $a \in \Bbb{Z}$ at distance at most $1/2$ from $q\alpha$.
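The trivial approximation is literally one line, rounding $q\alpha$ to the nearest integer (a sketch; the function name is my own):

```python
def trivial_approx(alpha, q):
    """Nearest fraction a/q with the given denominator q: choosing
    a = round(q * alpha) guarantees |alpha - a/q| <= 1/(2q)."""
    a = round(q * alpha)
    return a, q
```

For example, with $\alpha \approx 3.14159$ and $q = 7$ this gives $\frac{22}{7}$, which in fact does much better than the guaranteed $\frac{1}{14}$.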
We prove the above theorem by proving the following stronger theorem:
We claim that Theorem 2.2 proves Theorem 2.1.
Throughout the argument, keep the convergents $\frac{p_n}{q_n}$ in mind.
Hence, $L$ must be between $\frac{p_{n+1}}{q_{n+1}}$ and $\frac{p_{n-1}}{q_{n-1}}$, both of which are at least $\alpha$; hence $L$ is at least $\alpha$. If $\frac{r}{s}$ is strictly closer to $\alpha$ than both $c_n$ and $L$, then $\frac{r}{s} \in I = (c_n, L)$. For contradiction, suppose $0 < s \leq Q$.
We will ask the readers to figure out the ending. ( The full proof is in the notes).
As we noted before, $q_n$ increases monotonically until $n = N$. Since $\frac{a}{b}$ is the best approximation with denominator at most $Q$ for any $Q \geq b$, we cannot have $q_n > b$ for any $n$: if we did, then $q_{n+1} > b$ as well, and either $L$ or $\frac{p_{n+1}}{q_{n+1}}$ would be at least as good an approximation as $\frac{a}{b}$ with denominator at most $q_{n+1}$. For any $t > 0$, $tq_{n+1} + q_{n} > q_{n+1}$, so $t=0$, and then $L = \frac{p_{n}}{q_{n}}$. This is impossible, because $\frac{p_n}{q_n}$ and $\frac{p_{n+1}}{q_{n+1}}$ are reduced and therefore neither equals $\frac{a}{b}$. This also implies that $N$ exists for $\alpha$ if and only if $\alpha$ is rational, i.e. the continued fraction representation of $\alpha$ terminates iff $\alpha$ is rational.
Running time
It will be of use to bound $p_n$ and $q_n$. We already know that $q_n \leq b$. From the remark above, we know that $|p_n - q_n\frac{a}{b}| \leq \frac{1}{q_{n-1}}$. Hence $|p_n| \leq \frac{q_n}{b} |a| + \frac{1}{q_{n-1}} = O(a)$.
- We know that the $n$ from Theorem is $O(\log(Q))$ because the $q_n$ grow at least as fast as the $n^{th}$ entry of the Fibonacci sequence provided $n \leq N$.
- At each step in the computation of the continued fraction, we compute the floor of a rational number $\alpha_i = \frac{r_i}{s_i}$, which is simply the number of times the denominator of $\alpha_i$ goes into the numerator. This is a division with remainder, which we know to take $O(\log(r_i)\log(s_i))$ operations.
- We know $\alpha_i = \frac{1}{\alpha_{i-1} - \lfloor\alpha_{i - 1}\rfloor}$, which has denominator the remainder of $r_{i - 1}$ under division by $s_{i - 1}$ (which is strictly smaller than $s_{i-1}$) and numerator $s_{i-1}$. As $\alpha_0 =\alpha = \frac{a}{b}$, we can conclude by induction that $r_i, s_i \leq \max(a, b)$ and so the number of operations per step is certainly at most $O((\log(a) + \log(b))^2)$.
- Finding $t$ can take at most $O(\log(Q)^2)$ steps, as we find it by division with the remainder of $Q - q_{n-1}$ by $q_{n-1}$. $t$ is at most $a_{n+1}$, which is at most $q_{n+1} \leq b$. $p_n, p_{n-1} = O(a)$.
- Taken together, finding $L$ takes $O(\mathrm{poly}(\log a, \log b, \log Q))$ time.
- Finding whether $L$ or $c_n$ is closer to $\alpha$ also takes at most $O(\mathrm{poly}(\log a, \log b, \log Q))$, and finding the continued fraction is also $O(\mathrm{poly}(\log a, \log b, \log Q))$.
Hence the entire running time is $O(\mathrm{poly}(\log a, \log b, \log Q))$.
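Putting the pieces together, the whole procedure, continued fraction, convergents, and the final semiconvergent $L$, can be sketched in exact arithmetic. This is my own sketch, not the notes’ code: the function name, the use of Python’s `Fraction`, and the tie-breaking rule (prefer the convergent) are assumptions, and it takes a rational $\alpha$ so everything stays exact.

```python
from fractions import Fraction

def best_approx(alpha, Q):
    """Best rational approximation to a rational alpha with denominator
    at most Q (Q >= 1), via convergents and one final semiconvergent."""
    a, b = alpha.numerator, alpha.denominator
    p_prev, q_prev = 1, 0          # p_{-1}, q_{-1}
    p, q = a // b, 1               # p_0 = a_0, q_0 = 1
    a, b = b, a % b                # consume a_0
    while b != 0:
        an = a // b                # next partial quotient a_{n+1}
        if an * q + q_prev > Q:    # next convergent's denominator exceeds Q
            t = (Q - q_prev) // q  # largest t with t*q + q_prev <= Q
            cands = [Fraction(p, q)]
            if t > 0:              # the semiconvergent L
                cands.append(Fraction(t * p + p_prev, t * q + q_prev))
            return min(cands, key=lambda c: abs(alpha - c))
        p_prev, p = p, an * p + p_prev
        q_prev, q = q, an * q + q_prev
        a, b = b, a % b
    return Fraction(p, q)          # alpha itself has denominator <= Q
```

For example, on $\alpha = \frac{314159}{100000}$ with $Q = 150$ the convergents are $\frac{3}{1}, \frac{22}{7}, \frac{333}{106}, \frac{355}{113}$, the next denominator overshoots, and the procedure returns $\frac{355}{113}$.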