Ally Skills Tutorial at ICFP

I just signed up for the Ally Skills Tutorial at ICFP, and if you are (1) a man who (2) will be at ICFP in Vancouver, I encourage you to sign up, too! From the website:

The Ally Skills Tutorial teaches men simple, everyday ways to support women in their workplaces and communities. Participants learn techniques that work at the office, in classrooms, at conferences, and online. The skills we teach are relevant everywhere, including skills particularly relevant to open technology and culture communities. At the end of the tutorial, participants will feel more confident in speaking up to support women, be more aware of the challenges facing women in their workplaces and communities, and have closer relationships with the other participants.

This sounds super helpful—I suspect there is often a large gap between the extent to which I want to support women and the extent to which I actually know, practically, how to do so. The workshop will be taught by Valerie Aurora, Linux filesystem developer and Ada Initiative co-founder; I expect it will be high quality!


Crystal Ball Connection Patterns via Species and Generating Functions

A couple weeks ago, Denise posted Puzzle: Crystal Ball Connection Patterns on her blog, Let’s Play Math. I had fun playing with it and thought I would demonstrate how to apply some high-powered combinatorial techniques to it (probably not what Denise had in mind!).

The setup is that there are n (distinct) friends who can talk to each other on the phone. Only two people can talk at a time (no conference calls). The question is to determine how many different “configurations” there are. Not everyone has to talk, so a configuration consists of some subset of the friends arranged in (unordered) conversational pairs.

Warning: spoilers ahead! If you’d like to play around with this yourself (and it is indeed a nice, accessible combinatorics problem to play with), stop reading now. My goal in this post is to have fun applying some advanced tools to this (relatively) simple problem.

Telephone numbers

Let’s start by visualizing some configurations. In her post, Denise illustrated the complete set of configurations for n = 4, which I will visualize like this:

Notice how I’ve arranged them: in the first row is the unique configuration where no one is talking (yes, that counts). In the second row are the six possible configurations with just a single conversation. The last row has the three possible configurations with two conversations.

One good approach at this point would be to derive some recurrences. This problem does indeed admit a nice recurrence, but I will let you ponder it. Instead, let’s see if we can just “brute-force” our way to a general formula, using our combinatorial wits. Later I will demonstrate a much more principled, mechanical way to derive a general formula.

Let’s start by coming up with a formula for T_{n,k}, the number of configurations with n people and k conversations. The number of ways of choosing k pairs out of a total of n is the multinomial coefficient \displaystyle \binom{n}{2,2,\dots,2,n-2k} = \frac{n!}{(2!)^k(n-2k)!}. However, that overcounts things: it actually distinguishes the first pair, second pair, and so on, but we don’t want to have any ordering on the pairs. So we have to divide by k!, the number of distinct orderings of the pairs. Thus,

\displaystyle T_{n,k} = \frac{n!}{2^k (n-2k)! k!}.

Let’s do a few sanity checks. First, when k=0, we have T_{n,0} = \frac{n!}{n!} = 1. We can also try some other small numbers we’ve already enumerated by hand: for example, T_{4,1} = \frac{4!}{2 \cdot 2 \cdot 1} = 6, and T_{4,2} = \frac{4!}{4 \cdot 1 \cdot 2} = 3. So this seems to work.

For n people, there can be at most \lfloor n/2 \rfloor conversations. So, the total number of configurations is going to be

\displaystyle T_n = \sum_{k=0}^{\lfloor n/2 \rfloor} T_{n,k}.

We can use this to compute T_n for the first few values of n:

\begin{array}{rcl}T_0 &=& 1\\T_1 &=& 1 \\ T_2 &=& 1 + 1 = 2 \\ T_3 &=& 1 + 3!/2 = 4 \\ T_4 &=& 1 + 6 + 3 = 10 \\ T_5 &=& 1 + 5!/(2 \cdot 3!) + 5!/(4 \cdot 2) = 1 + 10 + 15 = 26 \\ T_6 &=& 1 + 6!/(2 \cdot 4!) + 6!/(4 \cdot 2 \cdot 2) + 6!/(8 \cdot 3!) = 1 + 15 + 45 + 15 = 76 \end{array}
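
As a further check, here is a direct Haskell transcription of these formulas (a quick throwaway sketch; tnk and tn are ad hoc names, not from any library):

tnk :: Integer -> Integer -> Integer
tnk n k = factorial n `div` (2^k * factorial (n - 2*k) * factorial k)
  where factorial m = product [1..m]

tn :: Integer -> Integer
tn n = sum [ tnk n k | k <- [0 .. n `div` 2] ]

ghci> map tn [0..6]
  [1,1,2,4,10,26,76]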

At this point we could look up the sequence 1,1,2,4,10,26,76 on the OEIS and find out all sorts of fun things: e.g. that we are also counting self-inverse permutations, i.e. involutions, that these numbers are also called “restricted Stirling numbers of the second kind”, some recurrence relations, etc., as well as enough references to keep us busy reading for a whole year.

Species

We can describe configurations as elements of the combinatorial species C = E \circ (E_2 + X). That is, a configuration is an unordered set (E) of (\circ) things (E_2 + X), where each thing can either be an unordered pair (E_2) of people talking on the phone, or (+) a single person (X) who is not talking.

We can now use the Haskell species library to automatically generate some counts and see whether they agree with our manual enumerations. First, some boilerplate setup:

ghci> :set -XNoImplicitPrelude
ghci> :m +NumericPrelude
ghci> :m +Math.Combinatorics.Species

Now we define the species of configurations:

ghci> let configurations = set `o` (set `ofSizeExactly` 2 + singleton)

We can ask the library to count the number of configurations for different n:

ghci> take 10 (labelled configurations)
  [1,1,2,4,10,26,76,232,764,2620]

Oh good, those numbers look familiar! Now, I wonder how many configurations there are for n = 100?

ghci> labelled configurations !! 100
  24053347438333478953622433243028232812964119825419485684849162710512551427284402176

Yikes!

We can also use the library to generate exhaustive lists of configurations, and draw them using diagrams. For example, here are all 76 configurations for n=6. (If you want to see the code used to generate this diagram, you can find it here.)

And just for fun, let’s draw all 232 configurations for n = 7:

Whee!

A general formula, via generating functions

Finally, I want to show how to use the species definition given above and the theory of generating functions to (somewhat) mechanically derive a general formula for the number of configurations. (Hopefully it will end up being equivalent to the formula we came up with near the beginning of the post!) Of course, this is also what the species library is doing, but only numerically—we will do things symbolically.

First, note that we are counting labelled configurations (the friends are all distinct), so we want to consider exponential generating functions (egfs). Recall that the egf for a species F is given by

\displaystyle F(x) = \sum_{n \geq 0} |F[n]| \frac{x^n}{n!},

that is, a (possibly infinite) formal power series where the coefficient of x^n/n! is the number of distinct labelled F-structures of size n. In our case, we need

\displaystyle E(x) = \sum_{n \geq 0} 1 \cdot \frac{x^n}{n!} = e^x,

since there is exactly one set structure of any size, and

\displaystyle E_2(x) = \frac{x^2}{2},

which is just the restriction of E(x) to only the x^2 term. Of course, we also have X(x) = x. Putting this together, we calculate

\begin{array}{rcl}\displaystyle (E \circ (E_2 + X))(x) &=& e^{(x^2/2 + x)} \\[1em] &=& \displaystyle \sum_{n \geq 0} \frac{(x^2/2 + x)^n}{n!} \\[1em] &=& \displaystyle \sum_{n \geq 0} \sum_{k = 0}^n \frac{1}{n!} \binom{n}{k} \left(\frac{x^2}{2}\right)^k x^{n-k} \\[1em] &=& \displaystyle \sum_{n \geq 0} \sum_{k=0}^n \frac{1}{n!} \binom{n}{k} \frac{x^{n+k}}{2^k} \end{array}

Ultimately, we want something of the form \displaystyle \sum_{m \geq 0} f_m \frac{x^m}{m!}, so we’ll need to collect up like powers of x. To do that, we can do a bit of reindexing. Right now, the double sum is adding up a bunch of terms that can be thought of as making a triangle:

Each ordered pair in the triangle corresponds to a single term being added. Each column corresponds to a particular value of n, with n increasing to the right. Within each column, k goes from 0 up to n.

The powers of x in our double sum are given by n+k. If we draw in lines showing terms that have the same power of x, it looks like this:

So let’s choose a new variable m, defined by m = n + k. We can see that we will have terms for every m \geq 0. We will also keep the variable k for our other index, and substitute n = m - k to get rid of n. In other words, instead of adding up the triangle by columns, we are going to add it up by diagonals.

Previously we had k \leq n; substituting for n that now turns into k \leq m - k. Adding k to both sides and dividing by 2 yields k \leq \lfloor m/2 \rfloor (we can round down since k is an integer). Looking at the diagram above, this makes sense: the height of each diagonal line is indeed half its index. Rewriting our indices of summation and substituting m - k for n, we now have:

\begin{array}{rcl}\displaystyle \sum_{n \geq 0} \sum_{k=0}^n \frac{1}{n!} \binom{n}{k} \frac{x^{n+k}}{2^k} &=& \displaystyle \sum_{m \geq 0} \sum_{k=0}^{\lfloor m/2 \rfloor} \frac{1}{(m-k)!} \binom{m-k}{k} \frac{x^m}{2^k} \\[1em] &=& \displaystyle \sum_{m \geq 0} \sum_{k=0}^{\lfloor m/2 \rfloor} \frac{1}{k!(m-2k)!} \frac{x^m}{2^k} \\[1em] &=& \displaystyle \sum_{m \geq 0} \frac{x^m}{m!} \sum_{k=0}^{\lfloor m/2 \rfloor} \frac{m!}{k!(m-2k)!2^k} \end{array}

And hey, look at that! The coefficient of x^m/m! is exactly what we previously came up with for T_m. Math works!
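
As a coda, we can even get Haskell to expand the egf for us numerically, using the classic trick of representing power series as infinite lazy lists of coefficients. This is a self-contained sketch (ordinary Prelude, not the NumericPrelude setup from before, and not the species library):

type PS = [Rational]   -- (ps !! n) is the coefficient of x^n

padd :: PS -> PS -> PS
padd as [] = as
padd [] bs = bs
padd (a:as) (b:bs) = a + b : padd as bs

pmul :: PS -> PS -> PS
pmul [] _ = []
pmul _ [] = []
pmul (a:as) bs'@(b:bs) = a*b : padd (map (a*) bs) (pmul as bs')

deriv :: PS -> PS
deriv []     = []
deriv (_:as) = zipWith (*) [1..] as

-- exp of a series f with zero constant term, via the recurrence e' = f' * e
pexp :: PS -> PS
pexp f = e where e = 1 : zipWith (/) (pmul (deriv f) e) [1..]

-- egf coefficients of exp(x^2/2 + x), scaled by n! to recover the counts
configCounts :: [Rational]
configCounts = zipWith (*) (scanl (*) 1 [1..]) (pexp (0 : 1 : 1/2 : repeat 0))

ghci> take 10 (map truncate configCounts) :: [Integer]
  [1,1,2,4,10,26,76,232,764,2620]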


BlogLiterately 0.8.1, now with HTTPS!

I’ve just released version 0.8.1 of BlogLiterately, a tool for formatting and posting stuff (especially Haskelly stuff) to blogs. This is in conjunction with the release of haxr-3000.11. After much blood, sweat, and tears, I was able to rip the HTTP package out of the guts of haxr, replace it with http-streams, and carefully sew everything back together around the edges. The result is that haxr now finally supports making XML-RPC calls via HTTPS, which in turn means that BlogLiterately once again works with WordPress, which no longer supports XML-RPC over HTTP. Happy blogging!


Thoughts on grade inflation, part I: is grade inflation bad?

Grade inflation.  It’s terrible, horrible, no good, very bad, and ruining everything.  …right?

Well… I’m not so sure.  What I do know is that the typical conversation around grade inflation frustrates me. At best, it often leaves many important assumptions unstated and unquestioned.  Is grade inflation really bad? If so, why? What are the underlying assumptions and values that drive us to think of it in one way or another? At worst, the conversation is completely at the wrong level.  Grade inflation is actually a symptom pointing at a much deeper question, one that gets at the heart of education and pedagogy: what do grades mean? Or, put another way, what do grades measure?

This will be a two-part series.  In this first post, I consider the first question: is grade inflation bad?  In most conversations I have been a part of, this is taken as given, but I think it deserves more careful thought. I don’t know of any reasons to think that grade inflation is good, but I also don’t buy many of the common arguments (often implied rather than explicitly stated) as to why it is bad; in this post I consider three common ones.

What is grade inflation?

Just to make sure everyone is on the same page: by grade inflation I mean the phenomenon where average student grades are increasing over time, that is, the average student now receives higher grades than the average student of n years ago. (You could also think of it as the average value of a given grade going down over time.) This phenomenon is widespread in the US. I am only really familiar with the educational system in the US, so this post will of necessity be rather US-centric; I would be interested to hear about similarities and differences with other countries.

Let’s now consider some common arguments as to why grade inflation is bad.

The “Back in my day…” argument

This is not so much an “argument” as an attitude, and it goes something like this: “Back in MY day, a C really meant a C! These ungrateful, entitled young whippersnappers don’t understand the true value of grades…”

This is a caricature, of course, but I have definitely encountered variants of this attitude. This makes about as much sense to me as complaining “back in MY day, a dollar was really worth a dollar! And now my daughter is asking me for twenty dollars to go to a movie with her friends. TWENTY DOLLARS! These ungrateful, entitled young whippersnappers don’t understand the true value of money…” Nonsense, of course they do.  It just costs $20 to go to a movie these days. A dollar is worth what it is now worth; a C is worth what it is now worth. Get over it.  It’s not that students don’t understand “the true value of grades”, it’s just that the value of grades is different than it used to be.

There are a couple important caveats here: first, one can, of course, argue about what the value of a C ought to be, based on some ideas or assumptions about what grades (should) mean. I will talk about this at length in my next post. But you cannot blame students for not understanding your idea of what grades ought to mean! Second, it is certainly possible (even likely) that student attitudes towards grades have changed, and one can (and I do!) complain about those attitudes as compared to student attitudes in the past. But that is different than claiming that students don’t understand the value of grades.

If I may hazard a guess, I think what this often boils down to is that people blame grade inflation on student attitudes of entitlement. As a potential contributing factor to grade inflation (and insofar as we would like to teach students different attitudes), that is certainly worth thinking about. But grade inflation potentially being caused by something one dislikes is not an argument that grade inflation itself is bad.

The compression argument

Of course, there’s one important difference between money and grades: amounts of money have no upper limit, whereas grades are capped at A+.  This brings us to what I often hear put forth as the biggest argument against grade inflation, that it compresses grades into a narrower and narrower band, squeezed from above by that highest possible A+.  The problem with this, some argue, is that grade compression causes information to be lost.  The “signal” of grades becomes noisier, and it becomes harder for, say, employers and grad schools to be able to distinguish between different students.

My first, more cynical reaction is this: well, cry me a river for those poor, poor employers and grad schools, who will now have to assess students on real accomplishments, skills, and personal qualities, or (more likely) find some other arbitrary measurement to use. Do we really think grades are such a high-quality signal in the first place? Do they really measure something important and intrinsic about a student? (More on this in my next post.)  If the signal is noisy or arbitrary in the first place then compressing it really doesn’t matter that much.

Less cynically, let’s suppose the grade-signal really is that high-quality and important, and we are actually worried about the possibility of losing information. Consider the extreme situation, where grade inflation has progressed to such a degree that professors only give one of two possible grades: A (“outstandingly excellent”) or A+ (“superlatively superb”). An A- is so insultingly low that professors never give it (for fear of lawsuits, perhaps); for simplicity’s sake let’s suppose that no one ever fails, either. In this hypothetical scenario, at an institution like Williams where students take 32 courses, there are only 33 possible GPAs: you could get 32 A+’s, or one A and 31 A+’s, or two A’s and 30 A+’s… all the way down to getting all A’s (“straight-A student” means something rather different in this imaginary universe!).

But here’s the thing: I think 33 different GPAs would still be enough! I honestly don’t think companies or grad schools can meaningfully care about distinctions finer than having 33 different buckets of students. (If you think differently, I’d love to hear your argument.) If student GPAs are normally distributed, this even means that the top few buckets have much less than 1/33 of all the students. So if the top grad schools and companies want to only consider the top 1% of all students (or whatever), they can just look at the top bucket or two. You might say this is unfair for the students, but really, I can’t see how this would be any more or less fair than the current system.

Of course, under this hypothetical two-grade system, GPAs might not be normally distributed. For one thing, if grade inflation kept going, the distribution might become more and more skewed to the right, until, for example, half of all students were getting straight A+’s, or, in the theoretical limit, all students get only A+’s. But I really don’t think this would actually happen; I think you would see some regulating effects kick in far before this theoretical limit was reached. Professors would not actually be willing to give all A+’s (or even, for that matter, all A’s and A+’s).

The GPAs could also be very bimodal, if, for example, students are extremely consistent: a student who consistently scores in the top 40% of every class would get the same grades (all A+’s) as a student who consistently scores in the top 10%. However, I doubt this is how it would work (as any professor knows, “consistent” and “student” are a rare pairing). It would be interesting to actually work out what GPA distributions would result from various assumptions about student behavior.

The moving target argument

The final argument against grade inflation that I sometimes hear goes like this: the problem is not so much that the average GPA is going up but simply that it is moving at all, which makes it harder for grad schools and employers to know how to calibrate their interpretations. But I don’t really buy this one either. The value of money is moving too, and yes, in some grand sense I suppose that makes it slightly harder for people to figure out how much things are worth. But somehow, everyone seems to manage just fine. I think employers and grad schools will manage just fine too. I don’t think GPAs are changing anywhere near fast enough for it to make much difference. And in any case, most of the time, the only thing employers and grad schools really care about is comparing the GPAs of students who graduated around the same time, in which case the absolute average GPA doesn’t matter at all. (One can make an argument about the difficulties caused by different schools having different average GPAs, but that is always going to be an issue, grade inflation or no.)

In the end, then, I am not so sure that grade inflation per se is such a terrible thing. However, it is well worth pondering the causes of grade inflation, and the deeper questions it leads to: what are grades? Why do we give them? What purposes do they serve, and what do they measure? I’ll take up these questions in a subsequent post.


Pan-Galactic Division in Haskell

Summary: given an injective function A \times N \hookrightarrow B \times N, it is possible to constructively “divide by N” to obtain an injection A \hookrightarrow B, as shown recently by Peter Doyle and Cecil Qiu and expounded by Richard Schwartz. Their algorithm is nontrivial to come up with—this had been a longstanding open question—but it’s not too difficult to explain. I exhibit some Haskell code implementing the algorithm, and show some examples.

Introduction: division by two

Suppose someone hands you the following:

  • A Haskell function f :: (A, Bool) -> (B, Bool), where A and B are abstract types (i.e. their constructors are not exported, and you have no other functions whose types mention A or B).

  • A promise that the function f is injective, that is, no two values of (A, Bool) map to the same (B, Bool) value. (Thus (B, Bool) must contain at least as many inhabitants as (A, Bool).)

  • A list as :: [A], with a promise that it contains every value of type A exactly once, at a finite position.

Can you explicitly produce an injective function f' :: A -> B? Moreover, your answer should not depend on the order of elements in as.

It really seems like this ought to be possible. After all, if (B, Bool) has at least as many inhabitants as (A, Bool), then surely B must have at least as many inhabitants as A. But it is not enough to reason merely that some injection must exist; we have to actually construct one. This, it turns out, is tricky. As a first attempt, we might try f' a = fst (f (a, True)). That is certainly a function of type A -> B, but there is no guarantee that it is injective. There could be a1, a2 :: A which both map to the same b, that is, one maps to (b, False) and the other to (b, True). The picture below illustrates such a situation: (a1, True) and (a2, True) both map to b2. So the function f may be injective overall, but we can’t say much about f restricted to a particular Bool value.

[Diagram: the example injection f, in which (a1, True) and (a2, True) both map to b2]

The requirement that the answer not depend on the order of as also makes things difficult. (Over in math-land, depending on a particular ordering of the elements in as would amount to the well-ordering principle, which is equivalent to the axiom of choice, which in turn implies the law of excluded middle—and as we all know, every time someone uses the law of excluded middle, a puppy dies. …I feel like I’m in one of those DirecTV commercials. “Don’t let a puppy die. Ignore the order of elements in as.”) Anyway, making use of the order of values in as, we could do something like the following:

  • For each a :: A:
    • Look at the B values generated by f (a,True) and f (a,False). (Note that there might only be one distinct such B value).
    • If neither B value has been used so far, pick the one that corresponds to (a,True), and add the other one to a queue of available B values.
    • If one is used and one unused, pick the unused one.
    • If both are used, pick the next available B value from the queue.
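
In code, this strategy might look something like the following sketch (naiveDivide is a made-up name, and like the real algorithm later in the post it assumes we can at least test B values for equality; it returns the injection as an association list):

import Data.List (delete)

naiveDivide :: Eq b => ((a, Bool) -> (b, Bool)) -> [a] -> [(a, b)]
naiveDivide f = go [] []
  where
    go _    _     []     = []
    go used queue (a:as) = (a, b) : go (b : used) (delete b queue') as
      -- deleting b from the queue covers the case where the value we
      -- pick happens to have been enqueued earlier
      where
        bT = fst (f (a, True))
        bF = fst (f (a, False))
        (b, queue')
          | bT `notElem` used && bF `notElem` used
                              = (bT, if bF == bT then queue else queue ++ [bF])
          | bT `notElem` used = (bT, queue)
          | bF `notElem` used = (bF, queue)
          | otherwise         = (head queue, tail queue)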

It is not too hard to show that this will always successfully result in a total function A -> B, which is injective by construction. (One has to show that there will always be an available B value in the queue when you need it.) The only problem is that the particular function we get depends on the order in which we iterate through the A values. The above example illustrates this as well: if the A values are listed in the order [a_1, a_2], then we first choose a_1 \mapsto b_2, and then a_2 \mapsto b_3. If they are listed in the other order, we end up with a_2 \mapsto b_2 and a_1 \mapsto b_1. Whichever value comes first “steals” b_2, and then the other one takes whatever is left. We’d like to avoid this sort of dependence on order. That is, we want a well-defined algorithm which will yield a total, injective function A -> B, which is canonical in the sense that the algorithm yields the same function given any permutation of as.

It is possible—you might enjoy puzzling over this a bit before reading on!

Division by N

The above example is a somewhat special case. More generally, let N = \{0, \dots, n-1\} denote a canonical finite set of size n, and let A and B be arbitrary sets. Then, given an injection f : A \times N \hookrightarrow B \times N, is it possible to effectively (that is, without excluded middle or the axiom of choice) compute an injection A \hookrightarrow B?

Translating down to the world of numbers representing set cardinalities—natural numbers if A and B are finite, or cardinal numbers in general—this just says that if an \leq bn then a \leq b. This statement about numbers is obviously true, so it would be nice if we could say something similar about sets, so that this fact about numbers and inequalities can be seen as just a “shadow” of a more general theorem about sets and injections.

As hinted in the introduction, the interesting part of this problem is really the word “effectively”. Using the Axiom of Choice/Law of Excluded Middle makes the problem a lot easier, but either fails to yield an actual function that we can compute with, instead merely guaranteeing the existence of such a function, or gives us a function that depends on a particular ordering of A.

Apparently this has been a longstanding open question, recently answered in the affirmative by Peter Doyle and Cecil Qiu in their paper Division By Four. It’s a really great paper: they give some fascinating historical context for the problem, and explain their algorithm (which is conceptually not all that difficult) using an intuitive analogy to a card game with certain rules. (It is not a “game” in the usual sense of having winners and losers, but really just an algorithm implemented with “players” and “cards”. In fact, you could get some friends together and actually perform this algorithm in parallel (if you have sufficiently nerdy friends).) Richard Schwartz’s companion article is also great fun and easy to follow (you should read it first).

A Game of Thrones Cards

Here’s a quick introduction to the way Doyle, Qiu, and Schwartz use a card game to formulate their algorithm. (Porting this framework to use “thrones” and “claimants” instead of “spots” and “cards” is left as an exercise to the reader.)

The finite set N is to be thought of as a set of suits. The set A will correspond to a set of players, and B to a set of ranks or values (for example, Ace, 2, 3, …) In that case B \times N corresponds to a deck of cards, each card having a rank and a suit; and we can think of A \times N in terms of each player having in front of them a number of “spots” or “slots”, each labelled by a suit. An injection A \times N \hookrightarrow B \times N is then a particular “deal” where one card has been dealt into each of the spots in front of the players. (There may be some cards left over in the deck, but the fact that the function is total means every spot has a card, and the fact that it is injective is encoded in the common-sense idea that a given card cannot be in two spots at once.) For example, the example function from before:

[Diagram: the example injection f, repeated from above]

corresponds to the following deal:

[Diagram: the deal corresponding to the example injection]

Here each column corresponds to one player’s hand, and the rows correspond to suit spots (with the spade spots on top and the heart spots beneath). We have mapped \{b_1, b_2, b_3\} to the ranks A, 2, 3, and mapped T and F to Spades and Hearts respectively. The spades are also highlighted in green, since later we will want to pay particular attention to what is happening with them. You might want to take a moment to convince yourself that the deal above really does correspond to the example function from before.

A Haskell implementation

Of course, doing everything effectively means we are really talking about computation. Doyle and Qiu do talk a bit about computation, but it’s still pretty abstract, in the sort of way that mathematicians talk about computation, so I thought it would be interesting to actually implement the algorithm in Haskell.

The algorithm “works” for infinite sets, but only (as far as I understand) if you consider some notion of transfinite recursion. It still counts as “effective” in math-land, but over here in programming-land I’d like to stick to (finitely) terminating computations, so we will stick to finite sets A and B.

First, some extensions and imports. Nothing too controversial.

> {-# LANGUAGE DataKinds                  #-}
> {-# LANGUAGE GADTs                      #-}
> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> {-# LANGUAGE KindSignatures             #-}
> {-# LANGUAGE RankNTypes                 #-}
> {-# LANGUAGE ScopedTypeVariables        #-}
> {-# LANGUAGE StandaloneDeriving         #-}
> {-# LANGUAGE TypeOperators              #-}
> 
> module PanGalacticDivision where
> 
> import           Control.Arrow (second, (&&&), (***))
> import           Data.Char
> import           Data.List     (find, findIndex, transpose)
> import           Data.Maybe
> 
> import           Diagrams.Prelude hiding (universe, value)
> import           Diagrams.Backend.Rasterific.CmdLine
> import           Graphics.SVGFonts

We’ll need some standard machinery for type-level natural numbers. Probably all this stuff is in a library somewhere but I couldn’t be bothered to find out. Pointers welcome.

> -- Standard unary natural number type
> data Nat :: * where
>   Z :: Nat
>   Suc :: Nat -> Nat
> 
> type One = Suc Z
> type Two = Suc One
> type Three = Suc Two
> type Four = Suc Three
> type Six = Suc (Suc Four)
> type Eight = Suc (Suc Six)
> type Ten = Suc (Suc Eight)
> type Thirteen = Suc (Suc (Suc Ten))
> 
> -- Singleton Nat-indexed natural numbers, to connect value-level and
> -- type-level Nats
> data SNat :: Nat -> * where
>   SZ :: SNat Z
>   SS :: Natural n => SNat n -> SNat (Suc n)
> 
> -- A class for converting type-level nats to value-level ones
> class Natural n where
>   toSNat :: SNat n
> 
> instance Natural Z where
>   toSNat = SZ
> 
> instance Natural n => Natural (Suc n) where
>   toSNat = SS toSNat
> 
> -- A function for turning explicit nat evidence into implicit
> natty :: SNat n -> (Natural n => r) -> r
> natty SZ r     = r
> natty (SS n) r = natty n r
> 
> -- The usual canonical finite type.  Fin n has exactly n
> -- (non-bottom) values.
> data Fin :: Nat -> * where
>   FZ :: Fin (Suc n)
>   FS :: Fin n -> Fin (Suc n)
> 
> finToInt :: Fin n -> Int
> finToInt FZ     = 0
> finToInt (FS n) = 1 + finToInt n
> 
> deriving instance Eq (Fin n)

Finiteness

Next, a type class to represent finiteness. For our purposes, a type a is finite if we can explicitly list its elements. For convenience we throw in decidable equality as well, since we will usually need that in conjunction. Of course, we have to be careful: although we can get a list of elements for a finite type, we don’t want to depend on the ordering. We must ensure that the output of the algorithm is independent of the order of elements.¹ This is in fact true, although somewhat nontrivial to prove formally; I mention some of the intuitive ideas behind the proof below.

While we are at it, we give Finite instances for Fin n and for products of finite types.

> class Eq a => Finite a where
>   universe :: [a]
> 
> instance Natural n => Finite (Fin n) where
>   universe = fins toSNat
> 
> fins :: SNat n -> [Fin n]
> fins SZ     = []
> fins (SS n) = FZ : map FS (fins n)
> 
> -- The product of two finite types is finite.
> instance (Finite a, Finite b) => Finite (a,b) where
>   universe = [(a,b) | a <- universe, b <- universe]
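
For example, we can enumerate the universe of a product type in GHCi, using finToInt to display the Fin values (which have no Show instance); (***) is the Control.Arrow combinator imported above:

ghci> map (finToInt *** finToInt) (universe :: [(Fin Two, Fin Two)])
  [(0,0),(0,1),(1,0),(1,1)]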

Division, inductively

Now we come to the division algorithm proper. The idea is that panGalacticPred turns an injection A \times N \hookrightarrow B \times N into an injection A \times (N-1) \hookrightarrow B \times (N-1), and then we use induction on N to repeatedly apply panGalacticPred until we get an injection A \times 1 \hookrightarrow B \times 1.

> panGalacticDivision
>   :: forall a b n. (Finite a, Eq b)
>   => SNat n -> ((a, Fin (Suc n)) -> (b, Fin (Suc n))) -> (a -> b)

In the base case, we are given an injection A \times 1 \hookrightarrow B \times 1, so we just pass a unit value in along with the A and project out the B.

> panGalacticDivision SZ f = \a -> fst (f (a, FZ))

In the inductive case, we call panGalacticPred and recurse.

> panGalacticDivision (SS n') f = panGalacticDivision n' (panGalacticPred n' f)

Pan-Galactic Predecessor

And now for the real meat of the algorithm, the panGalacticPred function. The idea is that we swap outputs around until the function has the property that every output of the form (b,0) corresponds to an input also of the form (a,0). That is, using the card game analogy, every spade in play should be in the leftmost spot (the spades spot) of some player’s hand (some spades can also be in the deck). Then simply dropping the leftmost card in everyone’s hand (and all the spades in the deck) yields a game with no spades. That is, we will have an injection A \times \{1, \dots, n-1\} \hookrightarrow B \times \{1, \dots, n-1\}. Taking predecessors everywhere (i.e. “hearts are the new spades”) yields the desired injection A \times (N-1) \hookrightarrow B \times (N-1).

We need a Finite constraint on a so that we can enumerate all possible inputs to the function, and an Eq constraint on b so that we can compare functions for extensional equality (we iterate until reaching a fixed point). Note that whether two functions are extensionally equal does not depend on the order in which we enumerate their inputs, which is consistent with my claim that nothing depends on the order of elements returned by universe.

> panGalacticPred
>   :: (Finite a, Eq b, Natural n)
>   => SNat n
>   -> ((a, Fin (Suc (Suc n))) -> (b, Fin (Suc (Suc n))))
>   -> ((a, Fin (Suc n)) -> (b, Fin (Suc n)))

We construct a function f' which is related to f by a series of swaps, and has the property that it only outputs FZ when given FZ as an input. So given (a,i) we can call f' on (a, FS i) which is guaranteed to give us something of the form (b, FS j). Thus it is safe to strip off the FS and return (b, j) (though the Haskell type checker most certainly does not know this, so we just have to tell it to trust us).

> panGalacticPred n f = \(a,i) -> second unFS (f' (a, FS i))
>   where
>     unFS :: Fin (Suc n) -> Fin n
>     unFS FZ = error "impossible!"
>     unFS (FS i) = i

To construct f' we iterate a certain transformation until reaching a fixed point. For finite sets A and B this is guaranteed to terminate, though it is certainly not obvious from the Haskell code. (Encoding this in Agda so that it is accepted by the termination checker would be a fun (?) exercise.)

One round of the algorithm consists of two phases called “shape up” and “ship out” (to be described shortly).

>     oneRound = natty n $ shipOut . shapeUp
> 
>     -- iterate 'oneRound' beginning with the original function...
>     fs = iterate oneRound f
>     -- ... and stop when we reach a fixed point.
>     f' = fst . head . dropWhile (uncurry (=/=)) $ zip fs (tail fs)
>     f1 =/= f2 = any (\x -> f1 x /= f2 x) universe

Encoding Card Games

Recall that a “card” is a pair of a value and a suit; we think of B as the set of values and N as the set of suits.

> type Card v s = (v, s)
> 
> value :: Card v s -> v
> value = fst
> 
> suit :: Card v s -> s
> suit = snd

Again, there are a number of players (one for each element of A), each of which has a “hand” of cards. A hand has a number of “spots” for cards, each one labelled by a different suit (which may not have any relation to the actual suit of the card in that position).

> type PlayerSpot p s = (p, s)
> type Hand v s = s -> Card v s

A “game” is an injective function from player spots to cards. Of course, the type system is not enforcing injectivity here.

> type Game p v s = PlayerSpot p s -> Card v s
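
If we were feeling paranoid, we could at least check injectivity at runtime. Here is a possible sketch (a hypothetical helper, not used in the rest of the post; it needs nub from Data.List):

injectiveGame :: (Finite p, Finite s, Eq v) => Game p v s -> Bool
injectiveGame g = length cards == length (nub cards)
  where cards = map g universe   -- one card per player spot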

Some utility functions. First, a function to project out the hand of a given player.

> hand :: p -> Game p v s -> Hand v s
> hand p g = \s -> g (p, s)

A function to swap two cards, yielding a bijection on cards.

> swap :: (Eq s, Eq v) => Card v s -> Card v s -> (Card v s -> Card v s)
> swap c1 c2 = f
>   where
>     f c
>       | c == c1   = c2
>       | c == c2   = c1
>       | otherwise = c

leftmost finds the leftmost card in a player’s hand which has a given suit.

> leftmost :: Finite s => s -> Hand v s -> Maybe s
> leftmost targetSuit h = find (\s -> suit (h s) == targetSuit) universe

Playing Rounds

playRound abstracts out a pattern that is used by both shapeUp and shipOut. The first argument is a function which, given a hand, produces a function on cards; that is, based on looking at a single hand, it decides how to swap some cards around.² playRound then applies that function to every hand, and composes together all the resulting permutations.

Note that playRound has both Finite s and Finite p constraints, so we should think about whether the result depends on the order of elements returned by any call to universe—I claimed it does not. Finite s corresponds to suits/spots, which corresponds to N in the original problem formulation. N explicitly has a canonical ordering, so this is not a problem. The Finite p constraint, on the face of it, is more problematic. We will have to think carefully about each of the rounds implemented in terms of playRound and make sure they do not depend on the order of players. Put another way, it should be possible for all the players to take their turn simultaneously.

> playRound :: (Finite s, Finite p, Eq v) => (Hand v s -> Card v s -> Card v s) -> Game p v s -> Game p v s
> playRound withHand g = foldr (.) id swaps . g
>   where
>     swaps = map (withHand . flip hand g) players
>     players = universe

Shape Up and Ship Out

Finally, we can describe the “shape up” and “ship out” phases, beginning with “shape up”. A “bad” card is defined as one having the lowest suit; make sure every hand with any bad cards has one in the leftmost spot (by swapping the leftmost bad card with the card in the leftmost spot, if necessary).

> shapeUp :: (Finite s, Finite p, Eq v) => Game p v s -> Game p v s
> shapeUp = playRound shapeUp1
>   where
>     badSuit = head universe
>     shapeUp1 theHand =
>       case leftmost badSuit theHand of
>         Nothing      -> id
>         Just badSpot -> swap (theHand badSuit) (theHand badSpot)

And now for the “ship out” phase. Send any “bad” cards not in the leftmost spot somewhere else, by swapping with a replacement, namely, the card whose suit is the same as the suit of the spot, and whose value is the same as the value of the bad card in the leftmost spot. The point is that bad cards in the leftmost spot are OK, since we will eventually just ignore the leftmost spot. So we have to keep shipping out bad cards not in the leftmost spot until they all end up in the leftmost spot. For some intuition as to why this is guaranteed to terminate, consult Schwartz; note that columns tend to acquire more and more cards that have the same rank as a spade in the top spot (which never moves).

> shipOut :: (Finite s, Finite p, Eq v) => Game p v s -> Game p v s
> shipOut = playRound shipOutHand
>   where
>     badSuit = head universe
>     spots = universe
>     shipOutHand theHand = foldr (.) id swaps
>       where
>         swaps = map (shipOut1 . (theHand &&& id)) (drop 1 spots)
>         shipOut1 ((_,s), spot)
>           | s == badSuit = swap (theHand spot) (value (theHand badSuit), spot)
>           | otherwise    = id

And that’s it! Note that both shapeUp and shipOut are implemented by composing a bunch of swaps; in fact, in both cases, all the swaps commute, so the order in which they are composed does not matter. (For proof, see Schwartz.) Thus, the result is independent of the order of the players (i.e. the set A).

Enough code, let’s see an example! This example is taken directly from Doyle and Qiu’s paper, and the diagrams are being generated literally (literately?) by running the code in this blog post. Here’s the starting configuration:

[Diagram: the starting deal]

Again, the spades are all highlighted in green. Recall that our goal is to get them all to be in the first row, but we have to do it in a completely deterministic, canonical way. After shaping up, we have:

[Diagram: the deal after shaping up]

Notice how the 6, K, 5, A, and 8 of spades have all been swapped to the top of their column. However, there are still spades which are not at the top of their column (in particular the 10, 9, and J) so we are not done yet.

Now, we ship out. For example, the 10 of spades is in the diamonds position in the column with the Ace of spades, so we swap it with the Ace of diamonds. Similarly, we swap the 9 of spades with the Queen of diamonds, and the Jack of spades with the 4 of hearts.

[Diagram: the deal after shipping out]

Shaping up does nothing at this point so we ship out again, and then continue to alternate rounds.

[Diagram: the final deal, after alternating rounds of shaping up and shipping out]

In the final deal above, all the spades are at the top of a column, so there is an injection from the set of all non-spade spots to the deck of cards with all spades removed. This example was, I suspect, carefully constructed so that none of the spades get swapped out into the undealt portion of the deck, and so that we end up with only spades in the top row. In general, we might end up with some non-spades also in the top row, but that’s not a problem. The point is that ignoring the top row gets rid of all the spades.
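
If you want to replay this at home, the whole pipeline can be run in GHCi along these lines (a sketch; I haven’t reproduced the output here):

ghci> let q = panGalacticDivision (toSNat :: SNat Three) exampleGame
ghci> map (rankStr . q) (universe :: [Player Ten])

This computes the final injection from players to ranks, that is, from A to B.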

Anyway, I hope to write more about some “practical” examples and about what this has to do with combinatorial species, but this post is long enough already. Doyle and Qiu also describe a “short division” algorithm (the above is “long division”) that I hope to explore as well.

The rest of the code

For completeness, here’s the code I used to represent the example game above, and to render all the card diagrams (using diagrams 1.3).

> type Suit = Fin
> type Rank = Fin
> type Player = Fin
> 
> readRank :: SNat n -> Char -> Rank n
> readRank n c = fins n !! (fromJust $ findIndex (==c) "A23456789TJQK")
> 
> readSuit :: SNat n -> Char -> Suit n
> readSuit (SS _) 'S'                = FZ
> readSuit (SS (SS _)) 'H'           = FS FZ
> readSuit (SS (SS (SS _))) 'D'      = FS (FS FZ)
> readSuit (SS (SS (SS (SS _)))) 'C' = FS (FS (FS FZ))
> 
> readGame :: SNat a -> SNat b -> SNat n -> String -> Game (Player a) (Rank b) (Suit n)
> readGame a b n str = \(p, s) -> table !! finToInt p !! finToInt s
>   where
>     table = transpose . map (map readCard . words) . lines $ str
>     readCard [r,s] = (readRank b r, readSuit n s)
> 
> -- Example game from Doyle & Qiu
> exampleGameStr :: String
> exampleGameStr = unlines
>   [ "4D 6H QD 8D 9H QS 4C AD 6C 4S"
>   , "JH AH 9C 8H AS TC TD 5H QC JS"
>   , "KC 6S 4H 6D TS 9S JC KD 8S 8C"
>   , "5C 5D KS 5S TH JD AC QH 9D KH"
>   ]
> 
> exampleGame :: Game (Player Ten) (Rank Thirteen) (Suit Four)
> exampleGame = readGame toSNat toSNat toSNat exampleGameStr
> 
> suitSymbol :: Suit n -> String
> suitSymbol = (:[]) . ("♠♥♦♣"!!) . finToInt  -- Huzzah for Unicode
> 
> suitDia :: Suit n -> Diagram B
> suitDia = (suitDias!!) . finToInt
> 
> suitDias = map mkSuitDia (fins (toSNat :: SNat Four))
> mkSuitDia s = text' (suitSymbol s) # fc (suitColor s) # lw none
> 
> suitColor :: Suit n -> Colour Double
> suitColor n
>   | finToInt n `elem` [0,3] = black
>   | otherwise               = red
> 
> rankStr :: Rank n -> String
> rankStr n = rankStr' (finToInt n + 1)
>   where
>     rankStr' 1 = "A"
>     rankStr' i | i <= 10    = show i
>                | otherwise = ["JQK" !! (i - 11)]
> 
> text' t = stroke (textSVG' (TextOpts lin INSIDE_H KERN False 1 1) t)
> 
> renderCard :: (Rank b, Suit n) -> Diagram B
> renderCard (r, s) = mconcat
>   [ mirror label
>   , cardContent (finToInt r + 1)
>   , back
>   ]
>   where
>     cardWidth  = 2.25
>     cardHeight = 3.5
>     cardCorners = 0.1
>     mirror d = d <> d # rotateBy (1/2)
>     back  = roundedRect cardWidth cardHeight cardCorners # fc white
>           # lc (case s of { FZ -> green; _ -> black })
>     label = vsep 0.1 [text' (rankStr r), text' (suitSymbol s)]
>           # scale 0.6 # fc (suitColor s) # lw none
>           # translate ((-0.9) ^& 1.5)
>     cardContent n
>       | n <= 10   = pips n
>       | otherwise = face n # fc (suitColor s) # lw none
>                            # sized (mkWidth (cardWidth * 0.6))
>     pip = suitDia s # scale 1.1
>     pips 1 = pip # scale 2
>     pips 2 = mirror (pip # up 2)
>     pips 3 = pips 2 <> pip
>     pips 4 = mirror (pair pip # up 2)
>     pips 5 = pips 4 <> pip
>     pips 6 = mirror (pair pip # up 2) <> pair pip
>     pips 7 = pips 6 <> pip # up 1
>     pips 8 = pips 6 <> mirror (pip # up 1)
>     pips 9 = mirror (pair (pip # up (2/3) <> pip # up 2)) <> pip # up (case finToInt s of {1 -> -0.1; 3 -> 0; _ -> 0.1})
>     pips 10 = mirror (pair (pip # up (2/3) <> pip # up 2) <> pip # up (4/3))
>     pips _ = mempty
>     up n = translateY (0.5*n)
>     pair d = hsep 0.4 [d, d] # centerX
>     face 11 = squares # frame 0.1
>     face 12 = loopyStar
>     face 13 = burst # centerXY
>     squares
>       = strokeP (mirror (square 1 # translate (0.2 ^& 0.2)))
>       # fillRule EvenOdd
>     loopyStar
>       = regPoly 7 1
>       # star (StarSkip 3)
>       # pathVertices
>       # map (cubicSpline True)
>       # mconcat
>       # fillRule EvenOdd
>     burst
>       = [(1,5), (1,-5)] # map r2 # fromOffsets
>       # iterateN 13 (rotateBy (-1/13))
>       # mconcat # glueLine
>       # strokeLoop
> 
> renderGame :: (Natural n, Natural a) => Game (Player a) (Rank b) (Suit n) -> Diagram B
> renderGame g = hsep 0.5 $ map (\p -> renderHand p $ hand p g) universe
> 
> renderHand :: Natural n => Player a -> Hand (Rank b) (Suit n) -> Diagram B
> renderHand p h = vsep 0.2 $ map (renderCard . h) universe

  1. If we could program in Homotopy Type Theory, we could make this very formal by using the notion of cardinal-finiteness developed in my dissertation (see section 2.4).

  2. In practice this function on cards will always be a permutation, though the Haskell type system is not enforcing that at all. An early version of this code used the Iso type from lens, but it wasn’t really paying its way.


Polynomial Functors Constrained by Regular Expressions

I’ve now finished revising the paper that Dan Piponi and I had accepted to MPC 2015; you can find a PDF here:

Polynomial Functors Constrained by Regular Expressions

Here’s the 2-minute version: certain operations or restrictions on functors can be described by regular expressions, where the elements of the alphabet correspond to type arguments. The idea is to restrict to only those structures for which an inorder traversal yields a sequence of types matching the regular expression. For example, (aa)^* gives you even-size things; a^*ha^* gives you the derivative (the structure has a bunch of values of type a, a single hole of type h, and then more values of type a), and b^*ha^* the dissection.

[Diagram: a dissected tree]
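
To make the first of those examples concrete: for lists constrained by (aa)^*, the construction in the paper yields (the equivalent of) this pair of mutually recursive types, which I have written out by hand here:

data EvenL a = NilE | ConsE a (OddL a)  -- lists of even length
data OddL  a = ConsO a (EvenL a)        -- lists of odd length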

The punchline is that we show how to use the machinery of semirings, finite automata, and some basic matrix algebra to automatically derive an algebraic description of any functor constrained by any regular expression. This gives a nice unified way to view differentiation and dissection; we also draw some connections to the theory of divided differences.

I’m still open to discussion, suggestions, typo fixes, etc., though at this point they won’t make it into the proceedings. There’s certainly a lot more that could be said or ways this could be extended further.


Blogging again, & some major life events

It’s been a long time since I’ve written anything here; the blog was on hold while I was finishing my PhD and on the academic job market. Now that things have settled down a bit I plan to get back to blogging.

For starters, here are a few of the major events that have happened in the meantime, that readers of this blog might care about:

  • I successfully defended my PhD dissertation in October, officially graduated in December, and got an actual diploma in the mail a few weeks ago. I’ll be back in Philadelphia for the official graduation ceremony in May.
  • I accepted a tenure-track position at Hendrix College in Conway, Arkansas, and will be moving there this summer.
  • Dan Piponi and I had a paper accepted to MPC 2015. Here’s the github repo, and I plan to post a PDF copy here soon (once I get around to incorporating feedback from the reviewers). I look forward to seeing a bunch of folks (Volk?) in Königswinter this summer; I already have my plane tickets (CIU -> DTW -> AMS -> CGN, it’s a long story).
  • Work on diagrams continues strong (no thanks to me!), and we are aiming for a big new release soon—I will certainly post about that here as well.