
PIM notes, Chapter 2: Exercises

These exercises exclude the coding projects, which I will write about later and post on GitHub.

Answering these has been very tedious. The first few were easy, but they quickly ballooned in difficulty and time required, so they got put on the back burner for a while. I hope this book becomes more enjoyable.

2.1

  1. The highest-degree term in \(f\) is \(x^2\); the highest-degree term in \(g\) is \(x\). The highest possible power of \(x\) that can occur in \(f \cdot g\) is \(x^2 \cdot x\), or \(x^3\).

  2. The highest-degree term in \(f\) is \(x^n\); the highest-degree term in \(g\) is \(x^m\). The highest possible power of \(x\) that can occur in \(f \cdot g\) is \(x^n \cdot x^m = x^{n + m}\), resulting in a polynomial of degree \(n+m\). (A small concrete example follows this list.)

  3. This does not hold when \(f\) or \(g\) is the zero polynomial. The generalization could be changed to say that \(f\) and \(g\) are polynomials with degrees \(n \geq 0\) and \(m \geq 0\), which rules out the zero polynomial.
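Here is that concrete instance (my own example, not from the book): take \(f(x) = x^2 + 1\) and \(g(x) = x - 3\), so \(n = 2\) and \(m = 1\).

$$ f(x) \cdot g(x) = (x^2 + 1)(x - 3) = x^3 - 3x^2 + x - 3 $$

The product has degree \(3 = 2 + 1\), and the \(x^3\) term can only come from multiplying the two leading terms, so nothing else can cancel it.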

2.2

  • If \(n = 24\), then the numbers from \(1\) to \(n\) that are relatively prime to \(n\) are \(1, 5, 7, 11, 13, 17, 19, 23\), which means \(\phi(n) = 8\). (This is checked in the snippet after this list.)

  • One example of a monic polynomial is \(f(x) = x^5 + 3x^4 + 12x^3 + x + 7\).

  • If \(f(x) = 6x^3 + 18x^2 - 3x\), a \(3x\) can be factored out of each term, leaving the factors \(g(x) = 2x^2 + 6x - 1\) and \(h(x) = 3x\).

  • \(f(x) = 17x^3 - 2\) and \(g(x) = 6x^2\) are relatively prime polynomials. \(f(x)\) is also irreducible. For the functions \(f(x) = 2x^2 + 4\) and \(g(x) = 4x^2 + 8\), their greatest common divisor, since it must be monic, is \(j(x) = x^2 + 2\).
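Here is the promised check of the \(\varphi(24)\) count in Clojure (a sketch; the helper names are mine, not from the book):

(defn gcd [a b]
  (if (zero? b) a (recur b (mod a b))))

(defn totient [n]
  (count (filter #(= 1 (gcd % n)) (range 1 (inc n)))))

(filter #(= 1 (gcd % 24)) (range 1 24))  ; => (1 5 7 11 13 17 19 23)
(totient 24)                             ; => 8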

2.3

According to Euler’s theorem, if \(a\) and \(n\) are relatively prime, then \(a^{\varphi(n)}\) divided by \(n\) has remainder \(1\) (that is, \(a^{\varphi(n)} \bmod n = 1\)). This means that

$$\begin{aligned} a^{\varphi(n)} \bmod n &= 1 \\ \frac{a^{\varphi(n)}}{n} &= c + \frac{1}{n} \\ a^{\varphi(n)} &= cn + 1 \\ \end{aligned}$$

\(c\) must be a nonnegative integer, which means that \(\frac{a^{\varphi(n)} - 1}{n}\) must be a nonnegative integer. Using the numbers from the previous example (\(n = 24\), \(\varphi(n) = 8\)), we can let \(a = 5\), which is relatively prime to \(24\) as the theorem requires.

$$\begin{aligned} \frac{5^{8}}{24} &= c + \frac{1}{24} \\ 5^{8} &= 24c + 1 \\ 390625 - 1 &= 24c \\ c &= 16276 \end{aligned}$$

Indeed, \(\frac{390625 - 1}{24}\) is a nonnegative integer.
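The same check in Clojure, reusing the numbers above (a sketch):

(def a-to-phi (reduce * (repeat 8 5)))  ; 5^8

a-to-phi               ; => 390625
(mod a-to-phi 24)      ; => 1
(/ (dec a-to-phi) 24)  ; => 16276, a nonnegative integer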

2.4

Based on the definition, a number \(z\) is algebraic if there is some polynomial function \(f(x) = a_0 + a_1x + \cdots + a_nx^n\), where all \(a_i\) are rational, such that \(f(z) = 0\). \(\sqrt 2\) is algebraic because it is the root of \(x^2 - 2\), which has the rational coefficients \(a_0 = -2\), \(a_1 = 0\), and \(a_2 = 1\).

You can find such a polynomial by writing down how to build \(z\) starting from \(0\), then undoing those operations in reverse order, starting from \(x\). I find this easier to visualize by writing it in Clojure (the comments show the reversed operations):

(Math/sqrt    ; x^2
  (+ 2 0))    ; - 2

What about \(\phi = \frac{1 + \sqrt 5}{2}\)?

(/              ; 2x
  (+ 1          ; - 1
    (Math/sqrt  ; all of that ^2
      (+ 5 0))) ; - 5
  2)

Thus,

$$ \begin{aligned} f(x) &= (2x - 1)^2 - 5 \\ &= 4x^2 - 4x + 1 - 5 \\ &= 4x^2 - 4x - 4 \end{aligned} $$

So, we have a polynomial. Since we started from 0 and worked in reverse to come up with this polynomial, that should mean \(\phi\) is algebraic. Let’s check:

$$ \begin{aligned} f(\phi) &= \Big(\cancel{2}(\frac{1 + \sqrt 5}{\cancel{2}}) - 1\Big)^2 - 5 \\ &= (\cancel{1} + \sqrt 5 - \cancel{1})^2 - 5 \\ &= (\sqrt 5)^2 - 5 \\ &= 0 \end{aligned} $$

And \(\phi\) is its root, so yes, \(\phi\) is an algebraic number. Now, how about \(\sqrt 2 + \sqrt 3\)? At first I tried to work it out with Clojure and got the wrong answer. But there is definitely a polynomial with rational coefficients that sends \(\sqrt 2 + \sqrt 3\) to \(0\). I did it the old-fashioned way, filling up an entire sheet of paper with algebra.

You have to square the term to get rid of the square roots, but because \((a + b)^2 = a^2 + 2ab + b^2\), you’ll also have to get rid of the \(2\sqrt 2\sqrt 3\) that remains when you do \((\sqrt 2 + \sqrt 3)^2\).
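Here is a condensed version of that sheet of paper (my own write-up): let \(x = \sqrt 2 + \sqrt 3\) and peel off the square roots one at a time.

$$ \begin{aligned} x^2 &= 2 + 2\sqrt 2 \sqrt 3 + 3 = 5 + 2\sqrt 6 \\ \frac{x^2 - 5}{2} &= \sqrt 6 \\ \Big(\frac{x^2 - 5}{2}\Big)^2 - 6 &= 0 \end{aligned} $$

Expanding that last expression as a polynomial in \(x\):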

$$ \begin{aligned} f(x) &= \Big(\frac{x^2 - 5}{2}\Big)^2 - 6 \\ &= \frac{1}{4}x^4 - \frac{5}{2}x^2 + \frac{25}{4} - 6 \\ &= \frac{1}{4}x^4 - \frac{5}{2}x^2 + \frac{1}{4} \end{aligned} $$
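A quick floating-point sanity check of all three polynomials at their supposed roots (a sketch; the function names are mine):

(defn f-sqrt2  [x] (- (* x x) 2))
(defn f-phi    [x] (- (Math/pow (- (* 2 x) 1) 2) 5))
(defn f-sqrt23 [x] (- (Math/pow (/ (- (* x x) 5) 2) 2) 6))

(f-sqrt2  (Math/sqrt 2))                    ; ≈ 0.0 (rounding error only)
(f-phi    (/ (+ 1 (Math/sqrt 5)) 2))        ; ≈ 0.0
(f-sqrt23 (+ (Math/sqrt 2) (Math/sqrt 3)))  ; ≈ 0.0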

2.5

Product and sum of algebraic numbers

A polynomial encompasses the operations of addition, multiplication, and exponentiation. To find the root of a polynomial is to perform the “opposite” of these operations.

The opposite of adding a rational number is just adding another rational number (its negative), and the opposite of multiplying by a nonzero rational is multiplying by another rational (its reciprocal), so those opposites stay within addition and multiplication; the opposite of exponentiation is to take a root.

It would seem from the above work that any number made from combinations of these “opposite” operations performed on rational numbers is the root of some polynomial and is therefore algebraic.

Therefore, for any two algebraic numbers, their sum and their product are both algebraic as well.

Proof regarding \(\pi+e\) and \(\pi e\) 

For this part, I was stuck, so I got some help from the author himself:

For the \(\pi+e\) and \(\pi e\) part, there are two steps: (1) prove that a number which is the root of a polynomial whose coefficients are algebraic is also algebraic, and (2) construct a polynomial whose roots are \(\pi\) and \(e\), and whose coefficients can be expressed in terms of \(\pi+e\) and \(\pi e\).

The author said the second part was easy. It is, but it’s not obvious. Apparently you can create a polynomial from a series of roots by just subtracting them from \(x\) and multiplying them together:

$$ (x-r_1)(x-r_2) $$

Since we want a polynomial with roots \(\pi\) and \(e\),

$$ \begin{aligned} f(x) &= (x-\pi)(x-e) \\ &= x^2 - ex - \pi x + \pi e \\ &= x^2 - (\textcolor{teal}{\pi + e})x + \textcolor{orange}{\pi e} \end{aligned} $$

Done!
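A quick numeric check in Clojure that this expanded form really vanishes at both roots (a sketch):

(defn f [x]
  (+ (* x x)
     (* -1 (+ Math/PI Math/E) x)
     (* Math/PI Math/E)))

(f Math/PI)  ; ≈ 0.0 (rounding error only)
(f Math/E)   ; ≈ 0.0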

Now, back to the first part. In a more general form, the statement about the roots of a polynomial would look like

$$ f(x) = \prod_{i=1}^n (x-r_i) $$

for a polynomial with \(n\) roots.

That means that all that can ever happen to any \(r_i\), once the product is expanded, is an accumulation of multiplication and addition operations.

As discussed above, for any two algebraic numbers, their sum and their product are both algebraic as well. So for a set of roots \(r_1, \cdots, r_n\) that are all algebraic, the coefficients of \(f(x)\) will also be algebraic.

\(\pi\) and \(e\) are known to not be algebraic, and adding an algebraic number to one of them (or multiplying one of them by a nonzero algebraic number) never produces an algebraic result. So as long as only one of \(\pi\) or \(e\) shows up in a coefficient alongside algebraic numbers, that coefficient will not be algebraic.

Let’s try making \(\pi + e\) a root of a polynomial that has an algebraic root as well.

$$ \begin{aligned} g(x) &= \Big(x - (\pi + e)\Big)(x + 2) \\ &= x^2 + 2x - (\pi + e)x - 2(\pi + e) \\ &= x^2 + (2 - \pi - e)x - 2\pi - 2e \end{aligned} $$

Nope! How about \(\pi e\)?

$$ \begin{aligned} h(x) &= (x - \pi e)(x + 2) \\ &= x^2 + 2x - \pi e x - 2 \pi e \\ &= x^2 + (2 - \pi e)x - 2 \pi e \end{aligned} $$

Here, we cannot know for sure: if \(\pi e\) is algebraic, then the coefficients of \(h(x)\) are algebraic too, but nobody knows whether \(\pi e\) is algebraic. What we can conclude comes from \(f(x) = x^2 - (\pi + e)x + \pi e\) above: if \(\pi + e\) and \(\pi e\) were both algebraic, then \(f\) would be a polynomial with algebraic coefficients whose roots are \(\pi\) and \(e\), and by step (1) those roots would themselves be algebraic. Since \(\pi\) and \(e\) are not algebraic, \(\pi + e\) and \(\pi e\) cannot both be algebraic.

2.6

I don’t know how to prove Vieta’s formulas other than working through them. The product is easier than the sum and can be done without substituting actual numbers. Let’s try a polynomial of degree 3 (\(n=3\)):

$$ \begin{gathered} (x - r_3)(x - r_2)(x - r_1) \\ (x^2 - r_2x - r_3x + \textcolor{red}{r_2r_3})(x - r_1) \\ x^3 - r_2x^2 - r_3x^2 + r_2r_3x - r_1(x^2 - r_2x - r_3x + \textcolor{red}{r_2r_3}) \\ x^3 \textcolor{teal}{- r_2x^2 - r_3x^2} + \textcolor{orange}{r_2r_3x} \textcolor{teal}{- r_1x^2} + \textcolor{orange}{r_1r_2x + r_1r_3x} - \textcolor{red}{r_1r_2r_3} \\ {\underbrace{\textcolor{lightgray}{1}}_{a_3}} x^3 + {\underbrace{(\textcolor{teal}{-r_1 - r_2 - r_3})}_{a_2}} x^2 + {\underbrace{(\textcolor{orange}{r_1r_2 + r_1r_3 + r_2r_3})}_{a_1}} x + {\underbrace{(\textcolor{red}{-r_1r_2r_3})}_{a_0}} \end{gathered} $$

According to Vieta’s formula, the following should be true:

$$ \begin{aligned} \prod_{i=1}^n r_i &= (-1)^n \frac{a_0}{a_n} \\ r_1r_2r_3 &\overset{?}{=} (-1)^3 \frac{-r_1r_2r_3}{1} \\ &= r_1r_2r_3 \end{aligned} $$

Indeed it is! And if you look at the term I went back and highlighted in red above, you can see why:

As you keep multiplying binomials, the final coefficient (the one without a variable) is always going to be the cumulative product of the roots (the term in the binomials without a variable), with the sign switching for each binomial you multiply.

But what about the sum of the roots?

$$ \begin{aligned} \sum_{i=1}^n r_i &= -\frac{a_{n-1}}{a_n} \\ r_1 + r_2 + r_3 &\overset{?}{=} -\frac{a_2}{a_3} \\ &\overset{?}{=} -\frac{-r_1 - r_2 - r_3}{1} \\ &= r_1 + r_2 + r_3 \end{aligned} $$

I can’t explain why this is the case, but I can see that it does work.
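One way to see why it works (my own aside, not from the book): when \(a_n \prod_{i=1}^n (x - r_i)\) is expanded, an \(x^{n-1}\) term appears only when \(-r_i\) is chosen from exactly one binomial and \(x\) from all the others, so

$$ a_{n-1} = a_n(-r_1 - r_2 - \cdots - r_n) = -a_n \sum_{i=1}^n r_i, $$

which rearranges to the formula above.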

So Vieta’s formulas are basically saying that for any degree \(n\) polynomial \(a_n \prod_{i=1}^n (x - r_i)\) with roots \(r_1, ..., r_n\), where \(a_n\) acts to “scale” the polynomial (a numeric spot-check in Clojure follows this list):

  • \(a_0\) (the coefficient without a variable) is the product of all the roots, scaled by \(a_n\), with its sign flipping once for each root, and
  • \(a_{n-1}\), the coefficient of the second-highest term, is the sum of all the roots, scaled by \(a_n\), with its sign flipped.
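To convince myself further, here is the promised numeric spot-check for a cubic with roots \(1, 2, 3\) (the polynomial-multiplication helper is my own, not from the book):

; Polynomials are vectors of coefficients, lowest degree first.
(defn poly-mul [p q]
  (reduce (fn [acc [i ci]]
            (reduce (fn [a [j cj]]
                      (update a (+ i j) + (* ci cj)))
                    acc
                    (map-indexed vector q)))
          (vec (repeat (dec (+ (count p) (count q))) 0))
          (map-indexed vector p)))

(def roots [1 2 3])
(def poly (reduce poly-mul [1] (map (fn [r] [(- r) 1]) roots)))
; poly => [-6 11 -6 1], i.e. x^3 - 6x^2 + 11x - 6

(let [n    (dec (count poly))  ; degree
      a0   (first poly)
      an   (peek poly)
      an-1 (poly (dec n))]     ; coefficient of x^(n-1)
  [(= (reduce * roots) (* (if (odd? n) -1 1) (/ a0 an)))
   (= (reduce + roots) (- (/ an-1 an)))])
; => [true true]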

2.9

Wilkinson’s polynomial has a very precise shape because it was created by multiplying very simple binomials, \((x-1)(x-2)\cdots(x-20)\). It is “infinitely” steep near its roots (almost a straight line there), and because the terms are of such a high order, altering the coefficients even slightly turns what was a very “simple” function (the product of simple binomials) into an extremely complicated one, with drastically different roots.

2.12

As far as I found, fields of math involved in different proofs of the Fundamental Theorem of Algebra include complex analysis, real analysis, topology, and Riemannian differential geometry. Pretty scary stuff.
