A Nice Correspondence

In Group Elements and Matrices I introduced the idea that there's a one-to-one correspondence between expansions and matrices, and in The Random Thoughts and later essays I used the idea and developed it further. Now, though, it bugs me that the information is so scattered and disorganized. To remedy that, I'll bring all the pieces together here and make a coherent picture out of them.

Let's start with a slight redefinition of the prefix function P from The Random Thoughts. First we define a helper function P1 that takes a single positive integer as argument and returns the same matrix as the old one-argument function, …

P1(a) = [ a  1 ]
        [ 1  0 ]

… then we redefine P as a function that takes a finite array of positive integers as argument, maps P1 onto it, and then multiplies the matrices together.

P(A) = product ( P1(a_i) )

That function P is the function that implements the correspondence.
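
To make the definitions concrete, here's a minimal sketch in Python; the tuple-of-tuples matrix representation and the helper mat_mul are my choices, not anything from the earlier essays.

from functools import reduce

def mat_mul(M, N):
    # product of two 2x2 matrices, each a tuple of row tuples
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    return ((a*e + b*g, a*f + b*h),
            (c*e + d*g, c*f + d*h))

IDENTITY = ((1, 0), (0, 1))

def P1(a):
    # the helper function: one positive integer in, one matrix out
    return ((a, 1), (1, 0))

def P(A):
    # map P1 onto the array, then multiply everything together;
    # the empty array yields the identity matrix
    return reduce(mat_mul, map(P1, A), IDENTITY)

For example, P([2, 3]) returns ((7, 2), (3, 1)), and P([]) returns the identity matrix.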

If we throw the switch and make P into a varargs function, then it has exactly the same behavior as the old function. What, then, was the point of the redefinition?

  • To implement the correspondence, we need a one-argument function.
  • The new definition is nice and crisp.
  • The new definition doesn't involve the second algorithm. Don't get me wrong, the second algorithm is great. You can easily run it with pencil and paper, and it's faster than matrix multiplication—two multiplications per coefficient versus eight for matrix multiplication (or seven with Strassen's algorithm); there's a sketch of it just after this list. However, it doesn't provide good intuition for how the results for different expansions are related.
  • It's technically nice that calling the new function with the empty array produces the identity matrix. Calling the old function with zero arguments is possible but much less natural.
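
The second algorithm isn't restated in this essay, but assuming it's the usual two-term recurrence for continued fraction numerators and denominators (which is exactly what multiplying the P1-factors works out to), here's a sketch; the name P_fast is mine.

def P_fast(A):
    # run two rows of the recurrence x_i = a_i * x_(i-1) + x_(i-2);
    # the columns of the result are consecutive (x_i, x_(i-1)) pairs
    p, p_prev = 1, 0   # top row, seeded from the identity matrix
    q, q_prev = 0, 1   # bottom row
    for a in A:
        p, p_prev = a*p + p_prev, p   # one multiplication here ...
        q, q_prev = a*q + q_prev, q   # ... and one here, per coefficient
    return ((p, p_prev), (q, q_prev))

P_fast([2, 3]) produces the same matrix ((7, 2), (3, 1)) as the product definition above.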

I claimed that the function P represents a one-to-one correspondence between expansions and matrices, but you shouldn't believe me until I've answered a lot of questions. What's the domain? In particular, why did I say the domain is expansions when really it's finite arrays of positive integers? What's the range? Is the function really one-to-one and onto? I'll tackle the questions in order.

What's the domain? To answer that question, let's step back and get some perspective.

We know that numbers have continued fraction expansions. Every rational number has two finite expansions, one proper and one improper, while every irrational number has a unique infinite expansion. Some irrational numbers have expansions that eventually repeat, …

… the expansion repeats if and only if the original number was the root of a quadratic equation … technically, an irreducible quadratic with rational coefficients.

… and some of those irrational numbers have expansions that are purely repeating, with no non-repeating prefix. That's the class of numbers and expansions that we're interested in.

If we take all the purely repeating expansions and convert them into arrays, we get nearly all the finite arrays of positive integers, but there are a few little gaps. So, we're very close, but we're not done yet.

At the start of Symmetries we saw that we can write a repeating expansion in an infinite number of ways by unrolling it like a roll of toilet paper. We're not going to do that here, but we can also write a repeating expansion in an infinite number of ways by unrolling it in a different way, a way that computer science folks call loop unrolling. All we have to do is write out some copies of the repeating part. For example, we can write the golden mean not just as [1] but also as [1,1], [1,1,1], [1,1,1,1], and so on. (For the previous discussion of why we should write expansion(1) as [1,1] instead of [1], see The Random Thoughts.)
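
Unrolling has a tidy matrix-side picture, too: by the concatenation property discussed later in the essay, writing the repeating part k times raises the corresponding matrix to the k-th power, so every way of writing gets its own distinct matrix. A quick check on the golden mean, reusing P from the sketch above:

# [1], [1,1], [1,1,1], ... map to successive powers of P1(1)
for k in range(1, 5):
    print(P([1] * k))
# ((1, 1), (1, 0))
# ((2, 1), (1, 1))
# ((3, 2), (2, 1))
# ((5, 3), (3, 2))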

As it happens, that takes care of the gaps. If we take all the ways of writing purely repeating expansions and convert them into arrays, we get all the finite arrays of positive integers except the empty array. So, to answer the question, the domain is in fact finite arrays of positive integers, but that's also essentially the same as ways of writing purely repeating expansions.

Did you notice the strange thing? Somehow the purely repeating expansions are simultaneously so large that one copy is enough to cover nearly all the finite arrays and so small that an infinite number of additional copies are needed to fill in the gaps. Let's look briefly at how that works. If we take the arrays of length 2 and regard the array elements as coordinates, we get a discrete two-dimensional space with a one-dimensional gap along the diagonal. In the same way, if we take the arrays of length n and regard the array elements as coordinates, we get a discrete n-dimensional space with a d-dimensional gap for every d that's a proper divisor of n. So, there's one gap for every pair (n,d). The first gap (2,1) is one dimension smaller than the ambient space, the next two gaps (3,1) and (4,2) are two dimensions smaller, the next two gaps (4,1) and (6,3) are three dimensions smaller, and so on, with the gaps constantly getting smaller. Then the second copy of the purely repeating expansions, the one where we write the repeating part twice, fills in all the gaps of the form (2d,d), the third copy fills in all the gaps of the form (3d,d), and so on. It just goes to show that infinity is strange.
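
The gap structure is easy to tabulate by brute force within a finite box (the spaces themselves are infinite; the cap of 3 on the entries here is arbitrary). An array of length n lies in the (n,d) gap exactly when its minimal period is d < n.

from itertools import product as box

def minimal_period(A):
    # smallest d such that A is just n/d copies of its first d entries
    n = len(A)
    return next(d for d in range(1, n + 1)
                if n % d == 0 and A == A[:d] * (n // d))

# classify the arrays of length 4 with entries in 1..3 by minimal period
counts = {}
for A in box(range(1, 4), repeat=4):
    d = minimal_period(A)
    counts[d] = counts.get(d, 0) + 1
print(sorted(counts.items()))   # [(1, 3), (2, 6), (4, 72)]

The 3 constant arrays are the (4,1) gap, the 6 arrays of minimal period 2 are the (4,2) gap, and the other 72 arrays are covered by the first copy of the purely repeating expansions.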

What's the range? Well, we know from various places that P(A) has entries that are integers, that for an array of length L, det P(A) = (−1)^L, and that (therefore) P(A) belongs to the group GL(2,Z). (For the definitions of the matrix groups, see Equivalence.) We also know from The Random Thoughts that the entries are positive, except that one can be zero if the array has length 1, and now two can be zero if the array has length 0. Now, given a matrix P(A), we can construct other matrices in the group by permuting the entries and flipping an even number of signs, and as far as I can tell, we can construct the whole group in that way. In the general case, when the entries are nonzero and distinct, there are eight ways of permuting and eight ways of flipping, but one of the constructed matrices is the transpose of P(A), also part of the range. Therefore, the range is about 1/32nd of GL(2,Z), a small subset.
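
Those facts about the range are easy to spot-check with P from above (a randomized sanity check, not a proof):

import random

for _ in range(1000):
    L = random.randint(0, 8)
    A = [random.randint(1, 9) for _ in range(L)]
    (a, b), (c, d) = P(A)
    assert a*d - b*c == (-1)**L        # det P(A) = (-1)^L
    assert min(a, b, c, d) >= 0        # entries are never negative
    if L >= 2:
        assert min(a, b, c, d) >= 1    # and all positive from length 2 on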

Is the function P one-to-one? Yes, because when mapped to an element of the continued fraction group, the matrix P(A) adds the prefix A, and it adds that one prefix and not any other. We can even compute the prefix from the matrix, as we saw near the end of Group Elements and Matrices. In fact, that gives us the inverse function P^(−1)!
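
Here's a sketch of one way to compute the inverse, not necessarily the procedure from Group Elements and Matrices: the first column of P(A) holds the final numerator and denominator of the recurrence, so the Euclidean algorithm recovers an expansion, and the determinant condition det P(A) = (−1)^L then picks between the proper and improper forms.

def P_inverse(M):
    # recover the array A from M = P(A); assumes M really is in the range
    (p, _), (q, _) = M
    A = []
    while q:                           # Euclidean algorithm on the first column
        A.append(p // q)
        p, q = q, p % q
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    if det != (-1)**len(A):
        # wrong length parity: switch forms, [..., a] -> [..., a-1, 1]
        # (a is at least 2 for any matrix actually in the range)
        A[-1] -= 1
        A.append(1)
    return A

For example, P_inverse(((2, 1), (1, 1))) returns [1, 1], the once-unrolled golden mean.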

How does that square with the fact that the function f that maps matrices to elements of the continued fraction group is two-to-one? Well, it's true that if we negate P(A), we get another matrix that maps to the same element and so adds the same prefix. However, that other matrix has at least two entries that are negative, so it doesn't lie within the range of P, so no problem. (For the previous discussions of the function f, see Equivalence and The Random Thoughts.)

Is the function P onto? Yes, because any function is onto its range.

So, yes, the function P really does represent a one-to-one correspondence between expansions (finite arrays of positive integers a.k.a. ways of writing purely repeating expansions) and matrices (a certain small subset of GL(2,Z)). That's not the end of the essay, though, because I also want to talk about two properties that the function P has.

The first property is what makes the correspondence into a nice correspondence. With the new definition of P, it's obvious that if we concatenate two arrays and then apply P, we get the same P1-factors as if we apply P to the arrays separately and then multiply the matrices.

P(A & B) = P(A)P(B)

That property is perhaps as close as we're going to get to a new result here. In various places we've seen matrices exploded into factors, but now we'd call those P1-factors, so we weren't using the first property, we were just using the new definition of P. However, we have seen one special case of the property. If we make one little definition, …

E(u) = P(expansion(u))

… then the result at the end of Group Elements and Matrices follows immediately from the pattern and the first property.

E(u) = P(expansion(u))
     = P(expansion(v) & expansion(w))
     = P(expansion(v)) P(expansion(w))
     = E(v) E(w)

Here's another fun fact about the first property: if we read it backward, it says that the product of any two matrices in the range of P is also in the range of P. In other words, the range is closed under multiplication. The range also contains the identity matrix. So, if it contained inverses, it would be a group, a subgroup of GL(2,Z). Sadly, it doesn't, and isn't, but it is an example of an algebraic structure called a monoid.

The domain of P is another example of a monoid, because the set of finite arrays of positive integers is closed under concatenation and has the empty array as an identity element.

Now we can really understand what the first property says. The function P turns concatenation in the domain into matrix multiplication in the range, so it's a homomorphism. A homomorphism that's also a one-to-one correspondence is an isomorphism, so the domain and range are essentially the same not just as sets but also as monoids!

The second property is about order-reversing operations. In the domain, we can reverse the elements in an array. In the range, we can take the transpose of a matrix. And, once again, the function P turns one operation into the other.

P(reverse(A)) = P(A)^T

The equation looks more profound if we adopt the notation that I picked in Reflection Symmetry.

P(A^T) = P(A)^T

I could write down a proof, but there's already a perfectly good proof by example right in the middle of The Discriminant.
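
Short of a proof, here's a numeric check of the second property, with a small transpose helper of my own:

def transpose(M):
    (a, b), (c, d) = M
    return ((a, c), (b, d))

A = [1, 2, 3, 4]
assert P(list(reversed(A))) == transpose(P(A))   # reversal becomes transposition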


  See Also

  One More Discriminant
  Reflection Symmetry

@ January (2024)