The Discriminant

Last, and least, I'd like to say a bit more about the discriminant. Let's start with a quick review of what we learned in Group Elements and Matrices. Given a repeating expansion, we can run the second algorithm on one iteration to get a matrix, a group element, and a quadratic equation. The positive solution of that quadratic equation is the value of the expansion, and the discriminant D of that quadratic equation, traditionally b^2 − 4ac, is the discriminant that I'm talking about. We can write the discriminant in terms of the matrix, …
D = (tr G)^2 − 4 det G … and when the expansion is expansion(u), the discriminant has a familiar value.
D = 9u^2 − 4 What more is there to say? Well, back in The Markov Constant, the discriminant appeared as part of the Markov constant of expansion(u).
M = (root D)/u = root(9 − 4/u^2) At the time, I was trying to present results, not derive them, but now it bugs me that I didn't establish that there's a connection between that discriminant and the one above. So, let's see what we can do! Again, it will help to have an example, fortunately only a small one this time. Here's the result of running the second algorithm on one iteration of [1,2,3,4].
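Here's a minimal Python sketch of that computation, on the assumption that the second algorithm amounts to the standard convergent recurrence p_k = a_k p_(k−1) + p_(k−2), q_k = a_k q_(k−1) + q_(k−2); the name convergents is mine, not the essay's.

def convergents(terms):
    # standard recurrence: p[k] = a[k] p[k-1] + p[k-2],
    #                      q[k] = a[k] q[k-1] + q[k-2],
    # seeded with (p, q) = (0, 1) and (1, 0)
    p0, q0 = 0, 1
    p1, q1 = 1, 0
    for a in terms:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield p1, q1

for k, (p, q) in enumerate(convergents([1, 2, 3, 4])):
    print(f"p_{k}/q_{k} = {p}/{q} = {p / q:.6f}")
# prints 1/1, 3/2, 10/7, 43/30; the last two convergents supply
# exactly the entries of the matrix G below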
And here's the matrix G that we get, with rows (43 10) and (30 7), along with two different sets of names for the entries: reading across the rows, they're called a, b, c, and d, or equivalently p_n, p_(n−1), q_n, and q_(n−1).
We can use the trace and determinant of that matrix to compute the discriminant.
D = 50^2 − 4 = 2496 = 2^6 × 3 × 13 Let's also define K = root D for short since we're going to be seeing a lot of that soon. Really soon. Behold!
[1,2,3,4] = (K + 36)/60
[2,3,4,1] = (K + 24)/32
[3,4,1,2] = (K + 40)/28
[4,1,2,3] = (K + 44)/20

K is approximately 50, or less approximately 50 − 0.040016 = 49.959984, so that for example [1,2,3,4] is almost exactly 86/60 = 43/30. Convergents are good approximations, remember. We only need to solve one quadratic equation to compute that whole table. Once we know [1,2,3,4], for example, we can subtract 1 to get [0,2,3,4,1], take the reciprocal to get [2,3,4,1], and so on until we come back around to [1,2,3,4]. Then we can check our work by confirming that the value matches. And, of course you can run through the cycle in the other direction if you like that better.

Let me say a few words here about reciprocals. For rational r and s, numbers of the form r + sK constitute a field, meaning that among other things they're closed under addition, subtraction, multiplication, and division. The first three are easy to understand, but what about division? Well, the procedure's the same as for complex numbers. First, we observe that since division is just multiplication by a reciprocal, we really only need to understand how to take reciprocals. Second, we define the conjugate of r + sK to be r − sK. Third, we use the conjugate to move the irrational part from the denominator to the numerator.
1 / (r + sK) = (r − sK) / (r + sK)(r − sK) = (r − sK) / (r^2 − s^2 D) Now let's go back to the example above and see how the procedure works there (with minor adjustments). The reciprocal of (K − 24)/60 is 60/(K − 24). If we multiply the numerator and denominator by K + 24, the denominator becomes
(K − 24)(K + 24) = K^2 − 24^2 = 2496 − 576 = 1920 = 60 × 32. Then the factors of 60 cancel, leaving (K + 24)/32. If we were presented with the number (K + 36)/60 and told to run the first algorithm on it, we'd find ourselves doing exactly the same calculation—subtract 1, take the reciprocal, and so on. Compare that to the calculation of the expansion of root 3 in Continued Fractions (“you can take reciprocals with just a little algebra”).
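Here's a hedged sketch of the whole cycle in Python, doing the arithmetic exactly with numbers of the form r + sK; the name reciprocal is mine.

from fractions import Fraction as F

D = 2496   # the discriminant of [1,2,3,4], so K = root D

def reciprocal(r, s):
    # 1/(r + sK) = (r - sK)/(r^2 - s^2 D), via the conjugate
    d = r * r - s * s * D
    return r / d, -s / d

r, s = F(36, 60), F(1, 60)   # [1,2,3,4] = (K + 36)/60
for a in [1, 2, 3, 4]:
    n = 1 / s                      # here s is always 1/n for an integer n
    print(f"(K + {r * n}) / {n}")
    r, s = reciprocal(r - a, s)    # subtract a, then take the reciprocal
assert (r, s) == (F(36, 60), F(1, 60))   # back where we started
# prints (K + 36)/60, (K + 24)/32, (K + 40)/28, (K + 44)/20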
Now, in The Recurrence Relation we saw that the matrices for the rotations of an expansion all have the same determinant and the same trace. Therefore, they also all have the same discriminant, so it makes sense that the same value K is present throughout the table of rotations above. But why is it present in this table of reflections?

[4,3,2,1] = (K + 36)/20
[3,2,1,4] = (K + 44)/28
[2,1,4,3] = (K + 40)/32
[1,4,3,2] = (K + 24)/60

Let me give a two-part answer to that question. First, the matrix for the reversed expansion is the transpose of the matrix for the original expansion, and the transpose of a matrix has the same determinant and the same trace.
Second, there's a reason that the matrix is the transpose. We can express the matrix for the original expansion in terms of the prefix function from The Random Thoughts.
From there, we just make use of the fact that taking the transpose, like taking the inverse, is an order-reversing operation (antihomomorphism) and the fact that the one-argument prefix function returns symmetric matrices, and voilà.
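To spell that out in symbols (the notation M(a) = [[a, 1], [1, 0]] for the symmetric one-argument prefix matrices is my shorthand, not the essay's): transposing reverses products, and each factor is symmetric, so

G^T = (M(a_0) M(a_1) … M(a_n))^T = M(a_n)^T … M(a_1)^T M(a_0)^T = M(a_n) … M(a_1) M(a_0)

and the product on the right is exactly the matrix for the reversed expansion.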
By the way, here's a fun connection. We just saw that G^T is the matrix for the reversed expansion, but of course it's also the transpose of G.
Therefore, the last convergent of the reversed expansion is equal to the ratio of the last two numerators of the original expansion, and the next-to-last convergent of the reversed expansion is equal to the ratio of the last two denominators of the original expansion, like so.
p_n / p_(n−1) = [a_n, a_(n−1), a_(n−2), … , a_3, a_2, a_1, a_0]
q_n / q_(n−1) = [a_n, a_(n−1), a_(n−2), … , a_3, a_2, a_1]

That gives us a fancy new proof of the result from Triangles about the ratio of denominators! It also suggests that there ought to be a non-fancy proof of the above result about the ratio of numerators, and indeed there is.
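Both ratios are easy to check numerically; here's a quick sketch (cf_value is my name for the helper) using the values p_3 = 43, p_2 = 10, q_3 = 30, q_2 = 7 computed earlier.

from fractions import Fraction as F

def cf_value(terms):
    # evaluate a finite continued fraction exactly, from the inside out
    x = F(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

# for [1,2,3,4]: last two numerators 43, 10 and denominators 30, 7
assert cf_value([4, 3, 2, 1]) == F(43, 10)   # ratio of numerators
assert cf_value([4, 3, 2]) == F(30, 7)       # ratio of denominators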
Anyway, let's get back on track. The tables of rotations and reflections contain everything we need to compute the Markov constant of [1,2,3,4]. The approximation qualities converge to the following four values, …

[1,2,3,4] + [0,4,3,2,1] = K / 30
[2,3,4,1] + [0,1,4,3,2] = K / 16
[3,4,1,2] + [0,2,1,4,3] = K / 14
[4,1,2,3] + [0,3,2,1,4] = K / 10

… so the Markov constant is the supremum, K/10, slightly less than 5. The other rotations are equivalent, and therefore have the same four limiting qualities and the same Markov constant. Of course that's also easy to see directly. The reflections are not equivalent, but for some reason they still have the same four limiting qualities and the same Markov constant.
Here's one example of the reason.

[4,3,2,1] + [0,1,2,3,4] = K / 10
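We can also watch the mechanism in code. Here's a hedged sketch (matrix is my name for the helper) that computes c for every rotation and every reflection of [1,2,3,4]; because transposing swaps the entries b and c, the reflections yield the same four denominators in a different order, and therefore the same four qualities.

import math

def matrix(terms):
    # product of the matrices [[t,1],[1,0]], one per term;
    # the entries are (a, b, c, d) = (p_n, p_(n-1), q_n, q_(n-1))
    a, b, c, d = 1, 0, 0, 1
    for t in terms:
        a, b, c, d = t * a + b, a, t * c + d, c
    return a, b, c, d

cycle = [1, 2, 3, 4]
for i in range(len(cycle)):
    rot = cycle[i:] + cycle[:i]   # a rotation of [1,2,3,4]
    refl = rot[::-1]              # the matching reflection
    print(rot, "Q = K/%d" % matrix(rot)[2],
          refl, "Q = K/%d" % matrix(refl)[2])
# rotations give K/30, K/16, K/14, K/10; reflections give
# K/10, K/30, K/16, K/14, the same four values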
(A medium-sized digression starts here; I'd save it for later.) I can also tell you the reason that the qualities are always equal to K divided by an integer, but for that we'll need to go all the way back to the quadratic equation.
cz^2 − (a − d)z − b = 0 We'll also need to know the solutions.
z = (1/2c) [ (a − d) ± K ] The K here really is root D, because (a − d)^2 + 4bc = (a + d)^2 − 4(ad − bc) = (tr G)^2 − 4 det G = D. Let's call the one with the plus sign z+ and the one with the minus sign z−. Note that they're conjugate. As I mentioned in The Random Thoughts, the numbers a, b, c, and d are all positive integers except that d is zero when the expansion has length 1. Now, if we divide the quadratic equation by c and compare it to (z − z+)(z − z−) = 0, we see that the product of the solutions is −b/c. Since b and c are positive, the product is negative, so one solution is positive and one negative. And, since c is positive, z+ has to be the positive solution. If instead we divide the quadratic equation by −z^2, we get another quadratic equation.
b(−1/z)^2 − (a − d)(−1/z) − c = 0 The division swapped the coefficients b and c, so that's the equation for the transposed matrix G^T and therefore also for the reversed expansion. However, the division didn't change the values of z that satisfy the equation, so the solutions are −1/z+ and −1/z−. Due to the minus signs, the latter is now the positive solution. The presence of the function S(z) = −1/z probably means something, but I don't know what. Of course it's also possible to solve the equation for the reversed expansion in the usual way.
z = (1/2b) [ (a − d) ± K ] I'm continually amazed that these are the negative reciprocals of the other solutions. Anyway, at last we've reached the point. Each approximation quality is the sum of an expansion and the reciprocal of the reversed expansion.
Q = z+ + 1/(−1/z−) = z+ − z− = K/c So, yes, the quality is always K divided by an integer, and that integer is c, a.k.a. the denominator q_n. Please take a moment to admire that nice result.

Where does that leave us? For any repeating expansion, the approximation qualities converge to a finite number of values of the form K/c. The smallest value of c produces the largest value of K/c, and that's the Markov constant of the expansion. So, the connection is established: the discriminant that appears in the Markov constant really is the one from the quadratic equation.

That result is completely general, but when the expansion is expansion(u) we can say a bit more. To compute c = q_n, we run the second algorithm on one iteration of the expansion. The first denominator that we get is q_0, so the last, for an expansion of length L, is q_(L−1). But, from Chains and Denominators we know that the rotations that contain the upper and lower chains also contain at position L(u) − 1 the value u. Therefore the smallest value of c is less than or equal to u, and therefore M ≥ K/u. Sadly, I don't know how to prove the stronger statement that the smallest value of c is in fact equal to u.

The material in the essay Continued Fractions has a small amount of practical value, but the material in the thread Approximation Quality is as far as I know completely impractical. The material in the subthread Chains and Denominators started out as a digression from an impractical subject, so I've been thinking of it as doubly impractical. In fact, back in July I nearly made it into a separate thread in order to keep the length of the main thread down. But, now that we've found a use for it, I guess it's only singly impractical after all.
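As a closing numerical aside, the formula M = root(9 − 4/u^2) is easy to tabulate. Here's a tiny sketch; the list of u values (the first few Markov numbers) is my assumption based on the earlier essays in the thread.

import math

for u in [1, 2, 5, 13, 29]:        # the first few Markov numbers
    D = 9 * u * u - 4              # the discriminant of expansion(u)
    print(u, D, math.sqrt(D) / u)  # M = (root D)/u, increasing toward 3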
See Also

An Interesting Detail
A Nice Correspondence
Properties of Rotations
Six Operations

o July (2023)
@ January (2024)