The Random Thoughts

Once I'd gotten my notes in order, I realized that I had way too many random thoughts for one essay. Fortunately, it was easy to divide them up. There were three large clusters of thoughts, so I moved those into the next three essays and left the rest here. With that said, let's get started!

First let's set up some new notation. We know about group elements that add prefixes, and we know about associated matrices, so we're fully equipped to define a prefix function P that takes any number of arguments and yields the matrix that adds the arguments as a prefix. So, P(a) is the matrix that adds the prefix “a”, P(a,b) is the matrix that adds the prefix “a,b”, and so on. To bring that down to earth, here are the values of P(a), P(b), and P(a,b).

P(a) = | a 1 |    P(b) = | b 1 |    P(a,b) = | ab+1 a |
       | 1 0 |           | 1 0 |             |  b   1 |

How do we know those values? P(a) is easy, it's basically just the group element [a,z] = a + 1/z = (az + 1)/(1z + 0). P(a,b) is pretty easy too. We can think of it as the group element [a,b,z] and run the second algorithm, or we can think of it as the product P(a)P(b) and do matrix multiplication. That's why I included P(b), because I can't do matrix multiplication unless the matrices are written out side by side.
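For readers who like to check these things mechanically, here's a minimal sketch of the prefix function in code. The name P and the use of Python are my own conveniences, not part of the essay's notation; the function just multiplies the basic matrices together, one per argument.

```python
from functools import reduce

def P(*args):
    """Matrix that adds the arguments as a prefix: the product of
    the basic matrices [[n, 1], [1, 0]], one per argument."""
    def basic(n):
        return [[n, 1], [1, 0]]
    def mul(A, B):
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]
    return reduce(mul, (basic(n) for n in args))

# the product P(a)P(b) does come out to [[ab+1, a], [b, 1]]
a, b = 5, 7
assert P(a, b) == [[a*b + 1, a], [b, 1]]
```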

Now we can say a bit more about why it's correct for expansion(1) and expansion(2) to have length 2 instead of length 1. What if they did have length 1? We already know that the pattern wouldn't work. The matrix E(1) would be P(1) instead of P(1,1), and similarly for E(2), so the matrix version of the pattern wouldn't work either. The determinants of the matrices would be wrong, not 1 like the other matrices E(u). Consequently, the matrices wouldn't be elements of the modular group. The traces of the matrices would be wrong too, not 3u like the other matrices E(u), and not even divisible by 3.

         det   tr    D
P(1)     −1    1     5
P(2)     −1    2     8
P(1,1)    1    3     5
P(2,2)    1    6     32
E(u)      1    3u    9u² − 4
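The numeric rows of the table are easy to verify. Here's a throwaway sketch (Python, purely for checking; the matrices are written out explicitly):

```python
def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def tr2(M):
    return M[0][0] + M[1][1]

P1  = [[1, 1], [1, 0]]   # P(1)
P2  = [[2, 1], [1, 0]]   # P(2)
P11 = [[2, 1], [1, 1]]   # P(1,1) = P(1)P(1)
P22 = [[5, 2], [2, 1]]   # P(2,2) = P(2)P(2)

# each row: (matrix, expected det, expected tr)
for M, d, t in [(P1, -1, 1), (P2, -1, 2), (P11, 1, 3), (P22, 1, 6)]:
    assert det2(M) == d and tr2(M) == t
```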

Finally, when we compute the discriminants with the following general-purpose formula, …

D = (tr G)² − 4 det G

… we find that the discriminant of P(2) would be wrong. That may be the strongest argument against length 1, because (root D)/u is the Markov constant, and the Markov constant represents a verifiable fact about how well an irrational number, here 1 + root 2, can be approximated by rational numbers.
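Here's a quick numerical check of that formula against the table values (Python again, with names of my own choosing). For the actual length-2 matrix P(2,2) = E(2), with u = 2, the quantity √D / u comes out to √32 / 2 = 2√2, matching the Markov constant of 1 + √2; for the hypothetical length-1 matrix P(2) it doesn't.

```python
import math

def discriminant(M):
    tr  = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return tr*tr - 4*det

P2  = [[2, 1], [1, 0]]   # P(2): the hypothetical length-1 expansion
P22 = [[5, 2], [2, 1]]   # P(2,2) = E(2): the actual length-2 expansion

assert discriminant(P2)  == 8    # would give sqrt(8)/2, the wrong value
assert discriminant(P22) == 32   # gives sqrt(32)/2 = 2*sqrt(2)
print(math.sqrt(32) / 2)         # Markov constant of 1 + sqrt(2)
```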

I'd also like to look more closely at the phrase “the associated matrix”. The way I've used it, it sounds like we can take any element of the continued fraction group and somehow produce the one matrix that goes with it. But, we already know that's not right. The function f that maps matrices to group elements is two-to-one, so for every element, there are two matrices that go with it, not just one. How do we know which one is the associated matrix?

The easy answer is that for group elements that add prefixes, the associated matrix is right there in the output of the second algorithm. If anything, it's logically prior to the group element, not posterior. To put it another way, the associated matrix is the one with four positive entries, not the one with four negative entries. (One entry can be zero if the prefix has length 1, as in P(a) above.)

The hard answer is that in general we don't know which one is the associated matrix. It doesn't work to count positive and negative entries, because there are plenty of matrices in GL(2,Z) with two of each. We could make up some rule, but that would just sweep the problem under the rug. I think the best plan is to accept that there are two matrices and try to understand what they have in common.

To that end, let's start with a very general statement. Let d be the number of dimensions, let G be an arbitrary d×d matrix of complex numbers, and let λ be a scale factor instead of an eigenvalue. Then, here's how the determinant and the trace respond to scaling.

det λG = λ^d det G
tr λG = λ tr G
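Those two identities are easy to spot-check numerically for d = 2. A throwaway sketch, with an arbitrary matrix and scale factor of my own choosing:

```python
def scale(lam, M):
    return [[lam*x for x in row] for row in M]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def tr2(M):
    return M[0][0] + M[1][1]

G, lam, d = [[3, 1], [4, 2]], 5, 2
assert det2(scale(lam, G)) == lam**d * det2(G)   # det λG = λ^d det G
assert tr2(scale(lam, G))  == lam * tr2(G)       # tr λG = λ tr G
```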

That's fun, but a bit too general, so let's require that d = 2 and det G ≠ 0. Then we can define the function f, and it has the property that f(λG) = f(G) because the scale factors in the numerator and denominator cancel. We can also define the discriminant D, and it has the following property.

D(λG) = (tr λG)² − 4 det λG
      = (λ tr G)² − 4λ² det G
      = λ² [ (tr G)² − 4 det G ]
      = λ² D(G)

Now, here's the point. If we have one of a pair of associated matrices, we can get the other by scaling by λ = −1. That means that λ² = 1, and that means that the two matrices have the same determinant and discriminant. So, it makes sense to say that the group element also has the same determinant and discriminant … which is fortunate because I've done that several times already. However, that also means that the two matrices have opposite traces, so it doesn't make sense to say that the group element has any particular trace. So don't do that!
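To make that concrete, here's a sketch checking one pair of associated matrices, taking P(2,2) and its negative as the example pair (the helper names are mine):

```python
def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def tr2(M):
    return M[0][0] + M[1][1]

def disc(M):
    return tr2(M)**2 - 4*det2(M)

G = [[5, 2], [2, 1]]                      # one matrix of the pair, here P(2,2)
negG = [[-x for x in row] for row in G]   # its partner, G scaled by λ = −1

assert det2(negG) == det2(G)   # same determinant, since (−1)² = 1
assert disc(negG) == disc(G)   # same discriminant too
assert tr2(negG) == -tr2(G)    # but opposite traces
```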



@ July (2023)