These exercises exclude the coding projects, which I will write about later and post on GitHub.
Answering these has been very tedious. The first few were easy, but they quickly ballooned in difficulty and time required, so they got put on the back burner for a bit. I hope this book becomes more enjoyable.
2.1
- The highest-degree term in \(f\) is \(x^2\); the highest-degree term in \(g\) is \(x\). The highest possible power of \(x\) that can occur in \(f \cdot g\) is \(x^2 \cdot x\), or \(x^3\).
- The highest-degree term in \(f\) is \(x^n\); the highest-degree term in \(g\) is \(x^m\). The highest possible power of \(x\) that can occur in \(f \cdot g\) is \(x^n \cdot x^m = x^{n + m}\), resulting in a polynomial of degree \(n + m\) (there is a quick Clojure check of this after the list).
- This does not hold when \(f\) or \(g\) is the zero polynomial. The generalization could be changed to say that \(f\) and \(g\) are polynomials with degrees \(n\) and \(m\) where \(n \geq 0\) and \(m \geq 0\).
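Here is a quick Clojure sketch of the same fact. The coefficient-vector representation and the helper names are my own, just for checking: a polynomial is a vector of coefficients, lowest degree first.

(defn degree [p] (dec (count p)))

(defn poly-mul
  [p q]
  (vec (for [k (range (+ (degree p) (degree q) 1))]
         (reduce + (for [i (range (count p))
                         :let [j (- k i)]
                         :when (<= 0 j (degree q))]
                     (* (nth p i) (nth q j)))))))

(def f [4 0 1])  ; x^2 + 4, degree 2
(def g [-3 2])   ; 2x - 3, degree 1

(poly-mul f g)           ; => [-12 8 -3 2], i.e. 2x^3 - 3x^2 + 8x - 12
(degree (poly-mul f g))  ; => 3, which is (+ (degree f) (degree g))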
2.2
- If \(n = 24\), then the numbers less than \(n\) and relatively prime to it are \(1, 5, 7, 11, 13, 17, 19, 23\), which means \(\phi(n) = 8\) (counted with the Clojure snippet after this list).
- One example of a monic polynomial is \(f(x) = x^5 + 3x^4 + 12x^3 + x + 7\).
- If \(f(x) = 6x^3 + 18x^2 - 3x\), a \(3x\) can be factored out of each term, leaving the factors \(g(x) = 2x^2 + 6x - 1\) and \(h(x) = 3x\).
- \(f(x) = 17x^3 - 2\) and \(g(x) = 6x^2\) are relatively prime polynomials. \(f(x)\) is also irreducible. For the functions \(f(x) = 2x^2 + 4\) and \(g(x) = 4x^2 + 8\), their greatest common divisor, since it must be monic, is \(j(x) = x^2 + 2\).
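And the quick Clojure check of the \(\phi(24)\) count mentioned above; the gcd and phi helpers here are just mine for this check.

(defn gcd [a b] (if (zero? b) a (recur b (mod a b))))
(defn phi [n] (count (filter #(= 1 (gcd % n)) (range 1 (inc n)))))

(filter #(= 1 (gcd % 24)) (range 1 25))  ; => (1 5 7 11 13 17 19 23)
(phi 24)                                 ; => 8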
2.3
According to Euler’s theorem, if \(a\) and \(n\) are relatively prime, then \(\frac{a^{\varphi(n)}}{n}\) has remainder \(1\) (or \(a^{\varphi(n)} \bmod n = 1\)). This means that \(a^{\varphi(n)} = cn + 1\), where \(c\) must be a nonnegative integer, which means that \(\frac{a^{\varphi(n)} - 1}{n}\) must be a nonnegative integer. Using the numbers from the previous example (\(n = 24\), \(\varphi(n) = 8\)), we can let \(a = 5\), so \(a^{\varphi(n)} = 5^8 = 390625\). Indeed, \(\frac{390625 - 1}{24} = 16276\) is a nonnegative integer.
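The arithmetic, checked in Clojure (nothing fancy, just repeated multiplication):

(def a 5)
(def n 24)
(def a-to-phi (reduce * (repeat 8 a)))  ; a^phi(n) = 5^8

a-to-phi              ; => 390625
(mod a-to-phi n)      ; => 1
(/ (- a-to-phi 1) n)  ; => 16276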
2.4
Based on the definition, a number \(z\) is algebraic if there is some polynomial function \(f(x) = a_0 + a_1x + \cdots + a_nx^n\), where all \(a_i\) are rational, such that \(f(z) = 0\). \(\sqrt 2\) is algebraic because it is the root of \(x^2 - 2\), which has the rational coefficients \(a_0 = -2\), \(a_1 = 0\), and \(a_2 = 1\).
You can find such a polynomial by working out how to get to \(z\) from 0, then going backwards through that order of operations starting with \(x\). I find this easier to visualize by writing it in Clojure:
(Math/sqrt  ; x^2
 (+ 2 0))   ; - 2
What about \(\phi = \frac{1 + \sqrt 5}{2}\)?
(/            ; 2x
 (+ 1         ; - 1
  (Math/sqrt  ; all of that ^2
   (+ 5 0)))  ; - 5
 2)
Thus, reading the comments from top to bottom, the polynomial is \(f(x) = (2x - 1)^2 - 5 = 4x^2 - 4x - 4\). So, we have a polynomial. Since we started from 0 and worked in reverse to come up with this polynomial, that should mean \(\phi\) is algebraic. Let’s check: \(f(\phi) = \left(2 \cdot \frac{1 + \sqrt 5}{2} - 1\right)^2 - 5 = \left(\sqrt 5\right)^2 - 5 = 0\).
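A quick floating-point check of the same thing in Clojure (only approximate, since \(\sqrt 5\) is irrational):

(let [phi (/ (+ 1 (Math/sqrt 5)) 2)]
  (- (* 4 phi phi) (* 4 phi) 4))
; => 0.0, give or take floating-point error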
And \(\phi\) is its root, so yes, \(\phi\) is an algebraic number. Now, how about \(\sqrt 2 + \sqrt 3\)? At first I tried to work it out with Clojure and got the wrong answer. But there is definitely a polynomial with rational coefficients that \(\sqrt 2 + \sqrt 3\) makes equal to 0. I did it the old-fashioned way, filling up an entire sheet of paper with algebra.
You have to square the term to get rid of the square roots, but because \((a + b)^2 = a^2 + 2ab + b^2\), you’ll also have to get rid of the \(2\sqrt 2\sqrt 3\) that remains when you do \((\sqrt 2 + \sqrt 3)^2\).
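Condensed, the algebra goes something like this: let \(x = \sqrt 2 + \sqrt 3\). Then \(x^2 = 2 + 2\sqrt 2 \sqrt 3 + 3 = 5 + 2\sqrt 6\), so \(x^2 - 5 = 2\sqrt 6\). Squaring again to clear the remaining square root gives \((x^2 - 5)^2 = 24\), which expands to \(x^4 - 10x^2 + 25 = 24\), i.e. \(x^4 - 10x^2 + 1 = 0\). So \(\sqrt 2 + \sqrt 3\) is a root of \(x^4 - 10x^2 + 1\), which has rational coefficients, and is therefore algebraic.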
2.5
Product and sum of algebraic numbers
A polynomial encompasses the operations of addition, multiplication, and exponentiation. To find the root of a polynomial is to perform the “opposite” of these operations.
The opposite of addition is just adding a negative, and the opposite of multiplication is just multiplying by a reciprocal, so those two are effectively their own opposites; the opposite of exponentiation is to take a root.
It would seem from the above work that any number made from combinations of these “opposite” operations performed on rational numbers is the root of some polynomial and is therefore algebraic.
Therefore, for any two algebraic numbers, their sum and their product are both algebraic as well.
Proof regarding \(\pi+e\) and \(\pi e\)
For this part, I was stuck, so I got some help from the author himself:
For the \(\pi+e\) and \(\pi e\) part, there are two steps: (1) prove that a number which is the root of a polynomial whose coefficients are algebraic is also algebraic, and (2) construct a polynomial whose roots are \(\pi\) and \(e\), and whose coefficients can be expressed in terms of \(\pi+e\) and \(\pi e\).
The author said the second part was easy. It is, but it’s not obvious. Apparently you can create a polynomial from a series of roots by just subtracting each root from \(x\) and multiplying the results together. Since we want a polynomial with roots \(\pi\) and \(e\), that gives \((x - \pi)(x - e) = x^2 - (\pi + e)x + \pi e\), whose coefficients can be expressed in terms of \(\pi + e\) and \(\pi e\). Done!
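A floating-point sanity check in Clojure that \(\pi\) and \(e\) really are roots of that polynomial:

(let [pi Math/PI
      e  Math/E
      f  (fn [x] (+ (* x x)
                    (- (* (+ pi e) x))
                    (* pi e)))]
  [(f pi) (f e)])
; => two values within floating-point error of 0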
Now, back to the first part. In a more general form, the statement about the roots of a polynomial would look like \(f(x) = (x - r_1)(x - r_2)\cdots(x - r_n)\) for a polynomial with \(n\) roots.
When you expand that product, all that can ever happen to any \(r_i\) is the accumulation of multiplication and addition operations: every coefficient ends up being a sum of products of the roots.
As discussed above, for any two algebraic numbers, their sum and their product are both algebraic as well. So for a set of roots \(r_1, \cdots, r_n\) that are all algebraic, the coefficients of \(f(x)\) will also be algebraic.
\(\pi\) and \(e\) are known to not be algebraic, and accumulating multiplication or addition with algebraic numbers will never make either of them algebraic. But the coefficients here, \(-(\pi + e)\) and \(\pi e\), combine \(\pi\) and \(e\) with each other, so it is not immediately obvious whether they are algebraic.
Let’s try making \(\pi + e\) a root of a polynomial that has an algebraic root as well.
Nope! How about \(\pi e\)?
Here, we cannot know for sure. If \(\pi e\) is algebraic, then \(h(x)\) will have algebraic coefficients, but we don’t know whether it is. So while it is true that \(\pi\) and \(e\) are not algebraic, what the two steps together show is that \(\pi + e\) and \(\pi e\) cannot both be algebraic: if they were, the polynomial \(x^2 - (\pi + e)x + \pi e\) would have algebraic coefficients, and by step (1) its roots \(\pi\) and \(e\) would be algebraic too.
2.6
I don’t know how to prove Vieta’s formulas other than working through them. The product is easier than the sum and can be done without substituting actual numbers. Let’s try a monic polynomial of degree 3 (\(n = 3\)): \((x - r_1)(x - r_2)(x - r_3) = x^3 - (r_1 + r_2 + r_3)x^2 + (r_1 r_2 + r_1 r_3 + r_2 r_3)x - r_1 r_2 r_3\).
According to Vieta’s formula, the following should be true: the product of the roots equals \((-1)^n \frac{a_0}{a_n}\), which here is \((-1)^3 \cdot \frac{-r_1 r_2 r_3}{1} = r_1 r_2 r_3\). Indeed it is! And if you look at the constant term of the expansion above, you can see why:
As you keep multiplying binomials, the final coefficient (the one without a variable) is always going to be the cumulative product of the roots (the terms in the binomials without a variable), with the sign switching once for each binomial you multiply in.
But what about the sum of the roots? Vieta’s formula says it should be \(-\frac{a_{n-1}}{a_n}\), and in the expansion above the coefficient of \(x^2\) is indeed \(-(r_1 + r_2 + r_3)\) (with \(a_3 = 1\)). I can’t explain why this is the case in general, but I can see that it does work.
So Vieta’s formulas are basically saying that for any degree-\(n\) polynomial \(a_n \prod_{i=1}^n (x - r_i)\) with roots \(r_1, ..., r_n\), where \(a_n\) acts to “scale” the polynomial (there is a quick Clojure check after this list):
- \(a_0\) (the coefficient without a variable), divided by \(a_n\), is the product of all the roots with the sign flipping once for each root: \(\prod_{i=1}^n r_i = (-1)^n \frac{a_0}{a_n}\), and
- \(a_{n-1}\), the coefficient of the second-highest-degree term, similarly gives the flipped-sign sum of all the roots: \(\sum_{i=1}^n r_i = -\frac{a_{n-1}}{a_n}\).
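A small Clojure check of both formulas, building a monic cubic from roots I picked arbitrarily (the mul-binomial helper is mine):

; Multiply a polynomial (coefficient vector, lowest degree first) by (x - r).
(defn mul-binomial [coeffs r]
  (mapv -
        (cons 0 coeffs)                   ; coefficients shifted up by one power of x
        (conj (mapv #(* r %) coeffs) 0))) ; r times the original coefficients

(def roots [2 3 5])
(def coeffs (reduce mul-binomial [1] roots))
; => [-30 31 -10 1], i.e. x^3 - 10x^2 + 31x - 30

; Product of the roots vs. (-1)^n * a0 / an (here n = 3 and an = 1):
(reduce * roots)       ; => 30
(* -1 (first coeffs))  ; => 30

; Sum of the roots vs. -a_(n-1) / an:
(reduce + roots)       ; => 10
(- (nth coeffs 2))     ; => 10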
2.9
Wilkinson’s polynomial has a very precise shape because it was created by multiplying very simple binomials: \((x - 1)(x - 2)\cdots(x - 20)\). It is extremely steep near its roots (locally it looks almost like a straight line), and because the expanded terms are of such a high degree, altering the coefficients even slightly turns what was a very “simple” function (the product of simple binomials) into one whose roots land in very different places.
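A rough Clojure sketch of that sensitivity, using Wilkinson’s classic perturbation of subtracting \(2^{-23}\) from the \(x^{19}\) coefficient (the helper functions are mine; exact rationals and big integers keep the arithmetic precise):

; Wilkinson's polynomial in factored form: w(x) = (x - 1)(x - 2) ... (x - 20).
(defn w [x]
  (reduce *' (for [k (range 1 21)] (- x k))))

; The same polynomial with 2^-23 (= 1/8388608) subtracted from its x^19 coefficient.
(defn w-perturbed [x]
  (-' (w x) (*' 1/8388608 (reduce *' (repeat 19 x)))))

(w 20)            ; => 0
(w-perturbed 20)  ; => about -6.25e17

A change of roughly one part in a billion in a single coefficient moves the polynomial’s value at \(x = 20\) from \(0\) to about \(-6.25 \times 10^{17}\), which is why the roots end up somewhere completely different.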
2.12
As far as I could find, the fields of math involved in different proofs of the Fundamental Theorem of Algebra include complex analysis, real analysis, topology, and Riemannian differential geometry. Pretty scary stuff.