Some Definitions

I'd like to present some definitions and simple results that will give us a solid foundation for the rest of the essays. First, a bit of history. In The Pattern I called the expansions of the original pattern expansion(u), and in Group Elements and Matrices I called the associated matrices E(u). In A Nice Correspondence, though, I established that the two are essentially the same, so from now on I'm just going to call them both E(u). In the same way, let's call the expansions and matrices of the true pattern T(u). As an expansion, we know from Mutations that T(u) starts with 22 and ends with 11. In other words, it looks like 22ABCD11, where ABCD represents zero or more pairs of 1s and 2s in some specific order. Since T(u) starts with 22, its numerical value must lie between [2,2] and [2,3].
2 1/3 = [2,3] < value(T(u)) < [2,2] = 2 1/2 Here I want the expression “value(T(u))” to mean the rational value of T(u) as a finite continued fraction expansion, not the irrational value that we get when we let the expansion repeat. Two notes:
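As a quick sanity check, here's a small Python sketch (the helper name is mine, not from the essays) that evaluates a finite expansion exactly and confirms that one expansion of the 22…11 shape, [2,2,1,1], falls between the two bounds:

```python
from fractions import Fraction

def cf_value(coeffs):
    """Rational value of a finite continued fraction [a0, a1, ..., an]."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + 1 / value
    return value

# the bounds from the text
assert cf_value([2, 3]) == Fraction(7, 3)   # 2 1/3
assert cf_value([2, 2]) == Fraction(5, 2)   # 2 1/2

# a sample expansion of the form 22...11
sample = cf_value([2, 2, 1, 1])
assert Fraction(7, 3) < sample < Fraction(5, 2)
print(sample)   # 12/5
```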
As a matrix, we know from The True Pattern that T(u) is a Cohn matrix, which means that the trace is 3u and the entry in the lower left corner is u. And, since the expansion length is even, we also know that the determinant is 1. (I should have included that as part of the definition of a Cohn matrix.) That gives us three constraints on the four-dimensional space of 2×2 matrices, not enough that we can just solve for T(u), but enough that we can reduce the possibilities to some kind of one-dimensional structure. But what? It's not a subspace, since it isn't flat and doesn't include the origin. It seems like it ought to be a manifold, but technically it's a variety because it's defined by polynomial constraints. In general, a variety can have singular points, but this one doesn't. It has only one connected component, and that component is an open curve, not a closed loop. In fact, it's a parabola. We can parametrize it with a parameter t, like so.

T(u) = | 2u + t   b     |
       | u        u − t |
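To make the parametrization concrete, here's a quick check in Python (my own sketch, using exact rational arithmetic): a matrix with diagonal entries 2u + t and u − t, lower-left entry u, and upper-right entry b forced by the determinant satisfies all three constraints. The sample (u, t) pairs are mine, chosen so that b comes out an integer:

```python
from fractions import Fraction

def T_matrix(u, t):
    """Candidate matrix on the parabola: trace 3u, lower-left entry u, det 1.
    The upper-right entry b is forced by the determinant constraint."""
    b = 2 * Fraction(u) - t - Fraction(t * t + 1, u)
    return ((2 * u + t, b), (Fraction(u), u - t))

def trace(m):
    return m[0][0] + m[1][1]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

for u, t in [(5, 2), (13, 5), (29, 12)]:
    m = T_matrix(u, t)
    assert trace(m) == 3 * u   # trace constraint
    assert m[1][0] == u        # lower-left constraint
    assert det(m) == 1         # determinant constraint
```

For (u, t) = (5, 2) this yields the matrix with rows (12, 7) and (5, 3), which (if I've matched the conventions of the earlier essays correctly) is the Cohn matrix for u = 5.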
The exact value of b isn't important, but you can compute it via the determinant if you want to see the parabola.
b = 2u − t − (t² + 1)/u About the constants 2u in the upper left corner and u in the lower right corner … the sum has to be 3u, but the difference is arbitrary, because changing the constants is equivalent to changing the origin of t. At least, right now it seems arbitrary, but later (in the next essay) we'll see that there's a very good reason for this particular choice. Remember how the first column of the matrix is also the last convergent?
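That correspondence is easy to spot-check in Python. Here I take the elementary matrix for a coefficient a to be [[a, 1], [1, 0]], which I believe matches the convention of the earlier essays, and [2, 2, 1, 1] is just a sample expansion:

```python
from fractions import Fraction

def cf_value(coeffs):
    """Rational value of a finite continued fraction [a0, a1, ..., an]."""
    v = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        v = a + 1 / v
    return v

def cf_matrix(coeffs):
    """Product of the elementary matrices [[a, 1], [1, 0]], one per coefficient."""
    m = ((1, 0), (0, 1))
    for a in coeffs:
        m = ((m[0][0] * a + m[0][1], m[0][0]),
             (m[1][0] * a + m[1][1], m[1][0]))
    return m

e = [2, 2, 1, 1]   # sample expansion
m = cf_matrix(e)   # ((12, 7), (5, 3))
# first column = numerator and denominator of the value, 12/5
assert Fraction(m[0][0], m[1][0]) == cf_value(e)
```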
value(T(u)) = (2u + t)/u = 2 + t/u When we combine that with the bounds on value(T(u)) from earlier, we get a nice result.
u/3 < t < u/2 What about the special cases u = 1 and u = 2? Well, the matrix parametrization still works, but we have to disregard the “22ABCD11” paragraph, the bounds on value(T(u)), and the nice result that depends on the bounds. Here's what we have instead.
Now let's do the same exercise for a different set of expansions and matrices. In Going Backward I introduced the idea of reading an expansion backward starting from the first coefficient. It turns out to be more important than I thought! T(u) looks like 22ABCD11, so when we read it backward, we get 211DCBA2. Let's call that B(u). Clearly, B(u) starts with 211 and ends with 2. Since it starts with 211, its numerical value must lie between [2,1,1] and [2,1,2].
2 1/2 = [2,1,1] < value(B(u)) < [2,1,2] = 2 2/3 About reading backward in general, let's compare and contrast:
Although reading backward doesn't correspond to any standard matrix operation, we can still say a couple of things about the result. First, the determinant and the trace are the same. That too is covered by the arguments in the middle of The Discriminant. Second, the entry in the lower left corner is the same. To prove that, we'll need to remember (again) that the first column of the matrix is also the last convergent. In other words, the entry in the upper left corner is the numerator of the value and the entry in the lower left corner is the denominator. The proof is a proof by example, and the example might as well be T(u).
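Here's that example worked in Python, using the expansion 2211, whose matrix is a Cohn matrix for u = 5 (the helper names and the specific example are mine):

```python
def cf_matrix(coeffs):
    """Product of the elementary matrices [[a, 1], [1, 0]], one per coefficient."""
    m = ((1, 0), (0, 1))
    for a in coeffs:
        m = ((m[0][0] * a + m[0][1], m[0][0]),
             (m[1][0] * a + m[1][1], m[1][0]))
    return m

def read_backward(coeffs):
    """Read an expansion backward starting from the first coefficient:
    22ABCD11 becomes 211DCBA2."""
    return [coeffs[0]] + coeffs[1:][::-1]

t5 = [2, 2, 1, 1]
b5 = read_backward(t5)                 # [2, 1, 1, 2]
mt, mb = cf_matrix(t5), cf_matrix(b5)  # ((12, 7), (5, 3)) and ((13, 5), (5, 2))
# determinant, trace, and lower-left entry all survive reading backward
assert mt[0][0] * mt[1][1] - mt[0][1] * mt[1][0] == 1
assert mb[0][0] * mb[1][1] - mb[0][1] * mb[1][0] == 1
assert mt[0][0] + mt[1][1] == mb[0][0] + mb[1][1] == 15   # trace 3u for u = 5
assert mt[1][0] == mb[1][0] == 5                          # lower-left entry u
```

Incidentally, the first columns give values 12/5 and 13/5 for the two expansions, which also illustrates the sum that appears at the end of the essay.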
So, in the general case, reading backward produces a matrix with the same determinant, the same trace, and the same entry in the lower left corner. Therefore, in the specific case of T(u), reading backward produces … another Cohn matrix! In fact, in a future essay we'll see that the matrix B(u) looks like this, …

B(u) = | 3u − t   ? |
       | u        t |

… so that where T(u) has parameter t, B(u) has parameter u − t. Here the question mark represents a value that we don't care about. As before, we can obtain the value from the first column of the matrix.
value(B(u)) = (3u − t)/u = 3 − t/u If we combine that with the bounds on value(B(u)), we don't learn anything new, but if we combine it with the equation for value(T(u)), we get a very pleasing result.
value(T(u)) + value(B(u)) = 5 So, although the values can vary within their respective bounds, the variations aren't independent. What about the special cases u = 1 and u = 2? The short version: When we read T(1) = 11 and T(2) = 22 backward, we get the same expansions, and therefore also the same matrices. The end. The long version:
May 2025