I prepared three questions for the exam. The first was fairly simple (“explain algorithm X and analyze its time complexity”) and I actually told the students ahead of time what it would be—to help them feel more comfortable and prepared. The other questions were a bit more open-ended:
The second question was of the form “I want to store X information and do operations Y and Z on it. What sorts of data structure(s) might you use, and what would be the tradeoffs?” There were then a couple rounds of “OK, now I want to add another operation W. How does that change your analysis?” In answering this I expected them to deploy metrics like code complexity, time and memory usage etc. to compare different data structures. I wanted to see them think about a lot of the different data structures we had discussed over the course of the semester and their advantages and disadvantages at a high level.
The final question was of the form “Here is some code. What does it do? What is its time complexity? Now please design a more efficient version that does the same thing.” With some students there was enough time to have them actually write code, with other students I just had them talk through the design of an algorithm. This question got more at their ability to design and analyze appropriate algorithms on data structures. The algorithm I asked them to develop was not something they had seen before, but it was similar to other things they had seen, put together in a new way.
Overall I was happy with the questions and the quality of the responses they elicited. If I do this again I would use similar sorts of questions.
You might well be wondering how long all of this took. I had about 30 students. I planned for the exam to take 30 minutes, and blocked out 45-minute chunks of time (to allow time for transitioning and for the exam to go a bit over 30 minutes if necessary; in practice the exams always went at least 40 minutes and I was scrambling at the end to jot down final thoughts before the next students showed up). I allowed them to choose whether to come in by themselves or with a partner (more on this later). As seems typical, about 1/3 of them chose to come by themselves, and the other 2/3 in pairs, for a total of about 20 exam slots. 20 slots at 45 minutes per slot comes out to 15 hours, or 3 hours per day for a week. This might sound like a lot, but if you compare it to the time required for a traditional written exam it compares quite favorably. First of all, I spent only two or three hours preparing the exam, whereas I estimate I would easily spend 5 or 10 hours preparing a written exam—a written exam has to be very precise in explaining what is wanted and in trying to anticipate potential questions and confusions. When you are asking the questions in person, it is easy to just clear up these confusions as they arise. Second, I was mostly grading students during their exam (more on this in the next section) so that by about five minutes after the end of their slot I had their exam completely graded. With a written exam, I could easily have spent at least 15 hours just grading all the exams.
So overall, the oral exam took up less of my time, and I can tell you, hands down, that my time was spent much more enjoyably than it would have been with a written exam. It was really fun to have each student come into my office, to get a chance to talk with them individually (or as a pair) and see what they had learned. It felt like a fitting end to the semester.
In order to assess the students, I prepared a detailed rubric beforehand, which was really critical. With a written exam you can just give the exam and then later come up with a rubric when you go to grade them (although I think even written exams are usually improved by coming up with a rubric beforehand, as part of the exam design process—it helps you to analyze whether your exam is really assessing the things you want it to). For an oral exam, this is impossible: there is no way to remember all of the responses that each student gives, and even if you write down a bunch of notes during or after each exam, you would probably find later that you didn’t write down everything that you should have.
In any case, it worked pretty well to have a rubric in front of me, where I could check things off or jot down quick notes in real time.
People are often surprised when I say that I allowed the students to come in pairs. My reasons were as follows:
Overall I was really happy with the result. Many of the students had been working with a particular partner on their labs for the whole semester and came to the exam with that same partner. For quite a few pairs this obviously worked well for them: it was really fun to watch the back-and-forth between them as they suggested different ideas, debated, corrected each other, and occasionally even seemed to forget that I was in the room.
One might worry about mismatched pairs, where one person does all of the talking and the other is just along for the ride. I only had this happen to some extent with one or two pairs. I told all the students up front that I would take points off in this sort of situation (I ended up taking off 10%). In the end this almost certainly meant that one member of the pair still ended up with a higher grade than they would have had they taken the exam individually. I decided I just didn’t care. I imagine I might rethink this for an individual class where there were many of these sorts of pairings going on during the semester—but in that case I would also try to do something about it before the final exam.
Another interesting social aspect of the process was figuring out what to do when students were floundering. One explicit thing one can do is to offer a hint in exchange for a certain number of points off, but I only ended up using this explicit option a few times. More often, after the right amount of time, I simply guided them on to the next part, either by suggesting that we move on in the interest of time, or by giving them whatever part of the answer they needed to move on to the next part of the question. I then took off points appropriately in my grading.
It was difficult figuring out how to verbally respond to students: on the one hand, stony-faced silence would be unnatural and unnerving; on the other hand, responding enthusiastically when they said something correct would give too much away (i.e. by the absence of such a response when they said something incorrect). As the exams went on I got better (I think) at giving interested-yet-non-committal sorts of responses that encouraged the students but didn’t give too much away. But I still found this to be one of the most perplexing challenges of the whole process.
One might wonder how much of the material from an entire semester can really be covered in a 30-minute conversation. Of course, you most certainly cannot cover every single detail. But you can actually cover quite a lot of the important ideas, along with enough details to get a sense for whether a student understands the details or not. In the end, after all, I don’t care whether a student remembers all the details from my course. Heck, I don’t even remember all the details from my course. But I care a great deal about whether they remember the big ideas, how the details fit, and how to re-derive or look up the details that they have forgotten. Overall, I am happy with the way the exam was able to cover the high points of the syllabus and to test students’ grasp of its breadth.
My one regret, content-wise, is that with only 30 minutes, it’s not really possible to put truly difficult questions on the exam—the sorts of questions that students might have to wrestle with for ten or twenty minutes before getting a handle on them.
Would I do this again? Absolutely, given the right circumstances. But there are probably a few things I would change or experiment with. Here are a few off the top of my head:
Again, I’m happy to answer questions in the comments or by email. If you are inspired to also try giving an oral exam, let me know how it goes!
As far as possible, I have tried to arrange the order so that each video only depends on concepts from earlier ones. (If you have any suggestions for improving the ordering, I would love to hear them!) Along with each video you can also find my cryptic notes; I make no guarantee that they will be useful to anyone (even me!), but hopefully they will at least give you an idea of what is in each video.
If and when they post any new videos (pretty please?) I will try to keep it updated.
The species package now has support for bracelets, i.e. equivalence classes of lists up to rotation and reversal. I show some examples of their use and then explain the (very interesting!) mathematics behind their implementation.
I recently released a new version of my species
package which adds support for the species of bracelets. A bracelet is a (nonempty) sequence of items which is considered equivalent up to rotation and reversal. For example, the two structures illustrated below are considered equivalent as bracelets, since you can transform one into the other by a rotation and a flip:
In other words, a bracelet has the same symmetry as a regular polygon—that is, its symmetry group is the dihedral group $D_n$. (Actually, this is only true for $n \ge 3$—I’ll say more about this later.)
Bracelets came up for me recently in relation to a fun side project (more on that soon), and I am told they also show up in applications in biology and chemistry (for example, bracelet symmetry shows up in molecules with cycles, which are common in organic chemistry). There was no way to derive the species of bracelets from what was already in the library, so I added them as a new primitive.
Let’s see some examples (later I discuss how they work). First, we set some options and imports.
ghci> :set -XNoImplicitPrelude
ghci> :m +NumericPrelude
ghci> :m +Math.Combinatorics.Species
Unlabelled bracelets, by themselves, are completely uninteresting: there is only a single unlabelled bracelet shape of any positive size. (Unlabelled species built using bracelets can be interesting, however; we’ll see an example in just a bit.) We can ask the library to tell us how many distinct size-$n$ unlabelled bracelets there are for $0 \le n \le 9$:
ghci> take 10 $ unlabelled bracelets
[0,1,1,1,1,1,1,1,1,1]
Labelled bracelets are a bit more interesting. For $n \ge 3$ there are $n!/(2n)$ labelled bracelets of size $n$: there are $(n-1)!$ cycles of size $n$ (there are $n!$ lists, which counts each cycle $n$ times, once for each rotation), and counting cycles exactly double counts bracelets, since each bracelet can be flipped in one of two ways. For example, there are $4!/(2 \cdot 4) = 3$ labelled bracelets of size $4$.
ghci> take 10 $ labelled bracelets
[0,1,1,1,3,12,60,360,2520,20160]
In addition to counting these, we can exhaustively generate them (this is a bit annoying with the current API; I hope to improve it):
ghci> enumerate bracelets [0,1] :: [Bracelet Int]
[<<0,1>>]
ghci> enumerate bracelets [0..2] :: [Bracelet Int]
[<<0,1,2>>]
ghci> enumerate bracelets [0..3] :: [Bracelet Int]
[<<0,1,2,3>>,<<0,1,3,2>>,<<0,2,1,3>>]
And here are all of the size- bracelets, where I’ve used a different color to represent each label (see here for the code used to generate them):
As a final example, consider the species $B \times (E \cdot E)$, the Cartesian product of bracelets with ordered pairs of sets. That is, given a set of labels, we simultaneously give the labels a bracelet structure and also partition them into two (distinguishable) sets. Considering unlabelled structures of this species—that is, equivalence classes of labelled structures under relabelling—means that we can’t tell the labels apart, other than the fact that we can still tell which are in the first set and which are in the second. So, if we call the first set “purple” and the second “green”, we are counting the number of bracelets made from (otherwise indistinguishable) purple and green beads. Let’s call these binary bracelets. Here’s how many there are of sizes $0$ through $14$:
ghci> let biBracelets = bracelet >< (set * set)
ghci> take 15 $ unlabelled biBracelets
[0,2,3,4,6,8,13,18,30,46,78,126,224,380,687]
Let’s use the OEIS to check that we’re on the right track:
ghci> :m +Math.OEIS
ghci> let res = lookupSequence (drop 1 . take 10 $ unlabelled biBracelets)
ghci> fmap description res
Just "Number of necklaces with n beads of 2 colors, allowing turning over."
Unfortunately the species library can’t currently enumerate unlabelled structures of species involving Cartesian product, though I hope to fix that. But for now we can draw these purple-green bracelets with some custom enumeration code. You can see the numbers show up here, and it’s not too hard to convince yourself that each row contains all possible binary bracelets of a given size.
If you’re just interested in what you can do with bracelets, you can stop reading now. If you’re interested in the mathematical and algorithmic details of how they are implemented, read on!
The exponential generating function (egf) associated to a combinatorial species $F$ is defined by

$F(x) = \sum_{n \ge 0} |F[n]| \frac{x^n}{n!}.$

That is, the egf is an (infinite) formal power series where the coefficient of $x^n/n!$ is the number of distinct labelled $F$-structures on $n$ labels. We saw above that for $n \ge 3$ there are $n!/(2n)$ labelled bracelets of size $n$, and that there is one bracelet each of sizes $1$ and $2$. The egf for bracelets is thus:

$B(x) = x + \frac{x^2}{2} + \sum_{n \ge 3} \frac{x^n}{2n}.$
(Challenge: show this is also equivalent to $\frac{x}{2} + \frac{x^2}{4} - \frac{1}{2}\ln(1-x)$.) This egf is directly encoded in the species library, and this is what is being used to evaluate labelled bracelets in the example above.
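As a quick sanity check on these coefficients, here is a small standalone sketch (my own code, not part of the species library) computing the labelled counts directly from the formula:

```haskell
-- Standalone sketch: the number of labelled bracelets of size n,
-- computed directly from the formula (one bracelet each for n = 1, 2;
-- n!/(2n) for n >= 3).
labelledBracelets :: Integer -> Integer
labelledBracelets 0 = 0
labelledBracelets n
  | n <= 2    = 1
  | otherwise = product [1 .. n] `div` (2 * n)

main :: IO ()
main = print (map labelledBracelets [0 .. 9])
-- prints [0,1,1,1,3,12,60,360,2520,20160], matching the output of
-- 'take 10 $ labelled bracelets' above
```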
Incidentally, the reason $n!/(2n)$ only works for $n \ge 3$ is in some sense due to the fact that the dihedral groups $D_1$ and $D_2$ are a bit weird: every dihedral group $D_n$ is a subgroup of the symmetric group $S_n$—except for $D_1$ and $D_2$. The problem is that for $n \le 2$, “flips” actually have no effect, as you can see below:
So, for example, $D_2$ has four elements, corresponding to the identity, a 180 degree rotation, a flip, and a rotation + flip; but the symmetric group $S_2$ only has two elements, in this case corresponding to the identity and a 180 degree rotation. The reason $n!/(2n)$ doesn’t work, then, is that the division by two is superfluous: for $n \le 2$, counting cycles doesn’t actually overcount bracelets, because every cycle is already a flipped version of itself. So it would also be correct (if rather baroque) to say that for $n \le 2$ there are actually $2 \cdot \frac{n!}{2n} = (n-1)!$ bracelets.
I find this fascinating; it’s almost as if for bigger $n$ the dihedral symmetry has “enough room to breathe” whereas for small $n$ it doesn’t have enough space and gets crushed and folded in on itself, causing weird things to happen. It makes me wonder whether there are other sorts of symmetry with a transition from irregularity to regularity at even bigger $n$. Probably this is an easy question for a group theorist to answer but I’ve never thought about it before.
The ordinary generating function (ogf) associated to a species $F$ is defined by

$\tilde{F}(x) = \sum_{n \ge 0} \big|F[n]/\mathord{\sim}\big|\, x^n$

where $\sim$ is the equivalence relation on $F$-structures induced by permuting the labels. That is, the coefficient of $x^n$ is the number of equivalence classes of $F$-structures on $n$ labels up to relabelling. There is only one unlabelled bracelet of any size $n \ge 1$; that is, any bracelet of size $n$ can be transformed into any other just by switching labels around. The unique unlabelled bracelet of a given size can be visualized as a bracelet of uniform beads:
though it’s occasionally important to keep in mind the more formal definition as an equivalence class of labelled bracelets. Since there’s just one unlabelled bracelet of each size, the ogf for bracelets is rather boring:

$\tilde{B}(x) = x + x^2 + x^3 + \dots = \frac{x}{1-x}.$
This is encoded in the species library too, and was used to compute unlabelled bracelets above.
egfs are quite natural (in fact, species can be seen as a categorification of egfs), and the mapping from species to their associated egf is a homomorphism that preserves many operations such as sum, product, Cartesian product, composition, and derivative. ogfs, however, are not as nice. The mapping from species to ogfs preserves sum and product but does not, in general, preserve other operations like Cartesian product, composition or derivative. In some sense ogfs throw away too much information. Here’s a simple example to illustrate this: although the ogfs for bracelets and cycles are the same, namely, $x/(1-x)$ (there is only one unlabelled bracelet or cycle of each size), the ogfs for binary bracelets and binary cycles are different:
ghci> -- recall biBracelets = bracelet >< (set * set)
ghci> let biCycles = cycles >< (set * set)
ghci> take 15 $ unlabelled biBracelets
[0,2,3,4,6,8,13,18,30,46,78,126,224,380,687]
ghci> take 15 $ unlabelled biCycles
[0,2,3,4,6,8,14,20,36,60,108,188,352,632,1182]
(Puzzle: why are these the same up through $n = 5$? Find the unique pair of distinct binary $6$-cycles which are equivalent as bracelets.)
Clearly, there is no way to take equal ogfs, apply the same operation to both, and get different results out. So the species library cannot be working directly with ogfs in the example above—something else must be going on. That something else is cycle index series, which generalize both egfs and ogfs, and retain enough information that they once again preserve many of the operations we care about.
Let $S_n$ denote the symmetric group, that is, the group of permutations on $\{1, \dots, n\}$ under composition. It is well-known that every permutation can be uniquely decomposed as a product of disjoint cycles. The cycle type of $\sigma \in S_n$ is the sequence of natural numbers $\sigma_1, \sigma_2, \sigma_3, \dots$ where $\sigma_i$ is the number of $i$-cycles in the cycle decomposition of $\sigma$. For example, the permutation $(1\,3\,2)(4\,5)(6\,7)(8)$ has cycle type $1, 2, 1, 0, 0, \dots$ since it has one $1$-cycle, two $2$-cycles, and one $3$-cycle.
For a species $F$ and a permutation $\sigma \in S_n$, let $\operatorname{fix} F[\sigma]$ denote the number of $F$-structures that are fixed by the action of $\sigma$, that is,

$\operatorname{fix} F[\sigma] = \big|\{\, f \in F[n] \mid F[\sigma](f) = f \,\}\big|.$
The cycle index series of a combinatorial species $F$ is a formal power series in an infinite set of variables $x_1, x_2, x_3, \dots$ defined by

$Z_F(x_1, x_2, x_3, \dots) = \sum_{n \ge 0} \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{fix} F[\sigma]\; x_1^{\sigma_1} x_2^{\sigma_2} x_3^{\sigma_3} \cdots$
We also sometimes write $x^{\sigma}$ as an abbreviation for $x_1^{\sigma_1} x_2^{\sigma_2} x_3^{\sigma_3} \cdots$. As a simple example, consider the species $L$ of lists, i.e. linear orderings. For each $n$, the identity permutation (with cycle type $n, 0, 0, \dots$) fixes all $n!$ lists of length $n$, whereas all other permutations do not fix any lists. Therefore

$Z_L(x_1, x_2, x_3, \dots) = \sum_{n \ge 0} \frac{1}{n!} \cdot n!\, x_1^n = \frac{1}{1 - x_1}.$
(This is not really that great of an example, though—since lists are regular species, that is, they have no nontrivial symmetry, their cycle index series, egf, and ogf are all essentially the same.)
Cycle index series are linked to both egfs and ogfs by the identities

$F(x) = Z_F(x, 0, 0, \dots) \qquad \text{and} \qquad \tilde{F}(x) = Z_F(x, x^2, x^3, \dots).$
To show the first, note that setting all $x_i$ to $0$ other than $x_1$ means that the only terms that survive are terms with only $x_1$ raised to some power. These correspond to permutations with only $1$-cycles, that is, identity permutations. Identity permutations fix all $F$-structures of a given size, so we have

$Z_F(x, 0, 0, \dots) = \sum_{n \ge 0} \frac{1}{n!}\, \big|F[n]\big|\, x^n = F(x).$
To prove the link to ogfs, note first that for any permutation $\sigma \in S_n$ with cycle type $\sigma_1, \sigma_2, \sigma_3, \dots$ we have $x^{\sigma_1} (x^2)^{\sigma_2} (x^3)^{\sigma_3} \cdots = x^{\sigma_1 + 2\sigma_2 + 3\sigma_3 + \cdots} = x^n$. Thus:

$Z_F(x, x^2, x^3, \dots) = \sum_{n \ge 0} \left( \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{fix} F[\sigma] \right) x^n = \sum_{n \ge 0} \big|F[n]/\mathord{\sim}\big|\, x^n = \tilde{F}(x),$
where the final step is an application of Burnside’s Lemma.
The important point is that the mapping from species to cycle index series is again a homomorphism for many of the operations we care about, including Cartesian product and composition. So in order to compute an ogf for some species defined in terms of operations that are not compatible with ogfs, one can start out computing with cycle index series and then project down to an ogf at the end.
Let’s now see how to work out the cycle index series for the species $B$ of bracelets. For $n = 1$, the single bracelet is fixed by the only element of $S_1$, giving a term of $x_1$. For $n = 2$, the single bracelet is fixed by both elements of $S_2$, one of which has cycle type $2, 0, 0, \dots$ and the other $0, 1, 0, \dots$. Bracelets of size $n \ge 3$, as discussed previously, have the dihedral group $D_n$ as their symmetry group. That is, every one of the $n!/(2n)$ size-$n$ bracelets is fixed by the action of each element of $D_n$, and no bracelets are fixed by the action of any other permutation. Putting this all together, we obtain

$Z_B = x_1 + \frac{x_1^2 + x_2}{2} + \sum_{n \ge 3} \frac{n!}{2n} \cdot \frac{1}{n!} \sum_{\sigma \in D_n} x^{\sigma} = x_1 + \frac{x_1^2 + x_2}{2} + \sum_{n \ge 3} \frac{1}{2n} \sum_{\sigma \in D_n} x^{\sigma}.$
Our remaining task is thus to compute $\sum_{\sigma \in D_n} x^{\sigma}$, that is, to compute the cycle types of elements of $D_n$ for $n \ge 3$. I don’t know whether there’s a nice closed form for this sum, but for our purposes it doesn’t matter: it suffices to come up with a finite algorithm to generate all its terms with their coefficients. A closed form might be important if we wanted to compute with $Z_B$ symbolically, but if we just want to generate coefficients, an algorithm is good enough.
In general, $D_n$ has $2n$ elements: $n$ elements corresponding to rotations (including the identity element, which we think of as a rotation by $0$ degrees) and $n$ elements corresponding to reflections across some axis. Below I’ve drawn illustrations showing the symmetries of bracelets of an odd and an even size; each symmetry corresponds to an element of $D_n$.
The lines indicate reflections. You can see that in general there are $n$ lines of reflection. The curved arrows indicate clockwise rotations; taking any number of consecutive arrows from $0$ to $n-1$ gives a distinct rotational symmetry. Let’s label the rotations $R_i$ (for $0 \le i < n$), where $R_i$ indicates a rotation by $i/n$ of a complete turn (so $R_0$ is the identity element). We won’t bother labelling the reflections since it’s not clear how we would choose canonical names for them, and in any case (as we’ll see) we don’t have as much of a need to give them names as we do for the rotations. The only thing we will note is that for even $n$ there are two distinct types of reflections, as illustrated by the dark and light blue lines on the right: the dark blue lines pass through two vertices, and the light blue ones pass through two edges. In the odd case, on the other hand, every line of reflection passes through one vertex and one edge. If you haven’t studied dihedral groups before, you might want to take a minute to convince yourself that this covers all the possible symmetries. It’s clear that a rotation followed by a rotation is again a rotation; what may be less intuitively clear is that a reflection followed by a reflection is a rotation, and that a rotation followed by a reflection is a reflection.
So the name of the game is to consider each group element as a permutation of the labels, and compute the cycle type of the permutation. Let’s tackle the reflections first; we have to separately consider the cases when $n$ is odd and even. We saw above that when $n$ is odd, each line of reflection passes through exactly one vertex. As a permutation, that means the reflection will fix the label at the vertex it passes through, and swap the labels on the other vertices in pairs, as shown in the leftmost diagram below:

So the permutation has cycle type $1, (n-1)/2, 0, 0, \dots$: there is one $1$-cycle, and the remaining $n - 1$ elements are paired off in $2$-cycles. There are $n$ of these reflections in total, yielding a term of $n\, x_1 x_2^{k}$ (where $n = 2k + 1$).

When $n = 2k$ is even, half of the reflections (the light blue ones) have no fixed points, as in the middle diagram above; they put everything in $2$-cycles. The other half of the even reflections fix two vertices, with the rest in $2$-cycles, as in the rightmost diagram above. In all, this yields terms $k\, x_2^{k} + k\, x_1^2 x_2^{k-1}$.
Now let’s tackle the rotations. One could be forgiven for initially assuming that each rotation will just yield one big $n$-cycle… a rotation is just cycling the vertices, right? But it is a bit more subtle than that. Let’s look at some examples. In each example below, the green curved arrow indicates a rotation $R_i$ applied to the bracelet. As you can check, the other arrows show the resulting permutation on the labels, that is, each arrow points from one node to the node where it ends up under the action of the rotation.

Do you see the pattern? In the case when $i = 1$ (the first example above), or more generally when $i$ and $n$ are relatively prime (the second example above), $R_i$ indeed generates a single $n$-cycle. But when $i$ and $n$ are not relatively prime, it generates multiple cycles. By symmetry the cycles must all be the same size; in general, the rotation $R_i$ generates $\gcd(i, n)$ cycles of size $n/\gcd(i, n)$ (where $\gcd(i, n)$ denotes the greatest common divisor of $i$ and $n$). So, for example, multiple cycles are generated in the next two examples above, where $i$ and $n$ share a common factor; in the last example we can see that three cycles are generated. Note this even works when $i = 0$: we have $\gcd(0, n) = n$, so we get $n$ cycles of size $1$, i.e. the identity permutation.
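To convince ourselves of the $\gcd$ claim, here is a throwaway sketch (names are mine) that decomposes the permutation induced by a rotation into cycles, where $R_i$ sends position $p$ to $(p + i) \bmod n$:

```haskell
-- Throwaway sketch: the rotation R_i sends position p to (p + i) mod n.
-- Decomposing it into cycles should yield gcd(i,n) cycles, each of
-- length n / gcd(i,n).
cycleLengths :: Int -> Int -> [Int]
cycleLengths n i = go [0 .. n - 1]
  where
    step p      = (p + i) `mod` n
    -- follow p under repeated rotation until we return to p
    orbit p     = p : takeWhile (/= p) (tail (iterate step p))
    go []       = []
    go (p : ps) = let c = orbit p
                  in  length c : go (filter (`notElem` c) ps)

main :: IO ()
main = mapM_ print [ (i, cycleLengths 12 i) | i <- [0, 1, 2, 3, 4, 6] ]
-- e.g. R_4 on 12 labels gives gcd(4,12) = 4 cycles of length 3: (4,[3,3,3,3])
```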
So $R_i$ contributes a term $x_{n/d}^{d}$, where $d = \gcd(i, n)$. However, we can say something a bit more concise than this. Note, for example, when $n = 12$, as the contribution of all the $R_i$ we get

$x_1^{12} + x_{12} + x_6^2 + x_4^3 + x_3^4 + x_{12} + x_2^6 + x_{12} + x_3^4 + x_4^3 + x_6^2 + x_{12}$

but we can collect like terms to get

$x_1^{12} + x_2^6 + 2 x_3^4 + 2 x_4^3 + 2 x_6^2 + 4 x_{12}.$
For a given divisor $d$ of $n$, the coefficient of $x_{n/d}^{d}$ is the number of nonnegative integers less than $n$ whose $\gcd$ with $n$ is equal to $d$. For example, the coefficient of $x_6^2$ is $2$, since there are two values of $i$ for which $\gcd(i, 12) = 2$ and hence generate a six-cycle, namely, $i = 2$ and $i = 10$. So as the contribution of the $R_i$ we could write something like

$\sum_{d \mid n} \big|\{\, i \mid 0 \le i < n,\ \gcd(i, n) = d \,\}\big|\; x_{n/d}^{d}$
but there is a better way. Note that

$\big|\{\, i \mid 0 \le i < n,\ \gcd(i, n) = d \,\}\big| = \big|\{\, j \mid 0 \le j < n/d,\ \gcd(j, n/d) = 1 \,\}\big|$

since multiplying and dividing by $d$ establishes a bijection between the two sets. For example, we saw that $2$ and $10$ are the two numbers whose $\gcd$ with $12$ is $2$; this corresponds to the fact that $1$ and $5$ are relatively prime to $6$.
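Here’s a tiny check of this bijection (my own sketch, using $n = 12$): for each divisor, the direct count and the totient of the cofactor should agree.

```haskell
-- For each divisor d of n, the count of i in [0, n) with gcd(i, n) = d
-- should equal phi(n/d), via the divide-by-d bijection.
totient :: Int -> Int
totient m = length [ j | j <- [1 .. m], gcd j m == 1 ]

countByGcd :: Int -> Int -> Int
countByGcd n d = length [ i | i <- [0 .. n - 1], gcd i n == d ]

main :: IO ()
main = print [ (d, countByGcd 12 d, totient (12 `div` d))
             | d <- [1, 2, 3, 4, 6, 12] ]
-- the second and third components agree in every triple
```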
But counting relatively prime numbers is precisely what Euler’s totient function (usually written $\varphi$) does. So we can rewrite the coefficient of $x_{n/d}^{d}$ as

$\varphi(n/d).$
Finally, since we are adding up these terms for all divisors $d$ of $n$, we can swap $d$ and $n/d$ (divisors of $n$ always come in pairs whose product is $n$), and rewrite this as

$\sum_{d \mid n} \varphi(d)\, x_d^{n/d}.$
To sum up, then, we have for each $n \ge 3$:

1. if $n = 2k + 1$ is odd, the reflections contribute $n\, x_1 x_2^{k}$;
2. if $n = 2k$ is even, the reflections contribute $k\, x_2^{k} + k\, x_1^2 x_2^{k-1}$;
3. the rotations contribute $\sum_{d \mid n} \varphi(d)\, x_d^{n/d}$.

The only overlap is between (2) and (3): when $n$ is even, both generate $x_2^{n/2}$ terms. Using Iverson brackets (the notation $[P]$ is equal to $1$ if the predicate $P$ is true, and $0$ if it is false), we can thus write the sum of the above for a particular $n$ as

$[n \text{ odd}]\; n\, x_1 x_2^{(n-1)/2} \;+\; [n \text{ even}] \left( \frac{n}{2}\, x_2^{n/2} + \frac{n}{2}\, x_1^2 x_2^{(n-2)/2} \right) \;+\; \sum_{d \mid n} \varphi(d)\, x_d^{n/d}.$
Substituting this for $\sum_{\sigma \in D_n} x^{\sigma}$ yields a full definition of $Z_B$. You can see the result encoded in the species library here. Here’s the beginning of the full expanded series:
ghci> :m +Math.Combinatorics.Species.Types
ghci> take 107 $ show (bracelets :: CycleIndex)
"CI x1 + 1 % 2 x2 + 1 % 2 x1^2 + 1 % 3 x3 + 1 % 2 x1 x2 + 1 % 6 x1^3 + 1 % 4 x4 + 3 % 8 x2^2 + 1 % 4 x1^2 x2"
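As a cross-check on this cycle-type analysis, we can use it together with Burnside’s lemma to recount the binary bracelets from earlier: a group element whose cycle type has $c$ total cycles fixes exactly $2^c$ two-colourings. The following sketch (mine, independent of the species library) does this directly:

```haskell
-- Count binary bracelets of size n (n >= 3) by Burnside's lemma over
-- D_n, using the rotation and reflection cycle counts derived above.
totient :: Integer -> Integer
totient m = fromIntegral (length [ j | j <- [1 .. m], gcd j m == 1 ])

biBracelets :: Integer -> Integer
biBracelets n = (rotations + reflections) `div` (2 * n)
  where
    -- rotations grouped by cycle length d: phi(d) rotations, each with
    -- n/d cycles, each fixing 2^(n/d) colourings
    rotations = sum [ totient d * 2 ^ (n `div` d)
                    | d <- [1 .. n], n `mod` d == 0 ]
    reflections
      | odd n     = n * 2 ^ ((n + 1) `div` 2)       -- n reflections, (n+1)/2 cycles each
      | otherwise = k * 2 ^ k + k * 2 ^ (k + 1)     -- edge-type and vertex-type reflections
      where k = n `div` 2

main :: IO ()
main = print (map biBracelets [3 .. 14])
-- matches the tail of 'take 15 $ unlabelled biBracelets' above:
-- [4,6,8,13,18,30,46,78,126,224,380,687]
```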
This, then, is how unlabelled biBracelets (for example) is calculated, where biBracelets = bracelet >< (set * set). The cycle index series for bracelet and set are combined according to the operations on cycle index series corresponding to * and ><, and then the resulting cycle index series is mapped down to an ogf by substituting $x^i$ for each $x_i$.
The final thing to mention is how bracelet generation works. Of course we can’t really generate actual bracelets, but only lists. Since bracelets can be thought of as equivalence classes of lists (under rotation and reversal), the idea is to pick a canonical representative element of each equivalence class, and generate those. A natural candidate is the lexicographically smallest among all rotations and reversals (assuming the labels have an ordering; if they don’t, we can pick an ordering arbitrarily). One easy solution would be to generate all possible lists and throw out the redundant ones, but that would be rather inefficient; it is surprisingly tricky to do this efficiently. Fortunately, there is a series of papers by Joe Sawada (Generating bracelets with fixed content; A fast algorithm to generate necklaces with fixed content; Generating bracelets in constant amortized time) describing (and proving correct) some efficient algorithms for generating things like cycles and bracelets. In fact, they are as efficient as possible, theoretically speaking: they do only constant amortized work per cycle or bracelet generated. One problem is that the algorithms are very imperative, so they cannot be directly transcribed into Haskell. But I played around with benchmarking various formulations in Haskell and got it as fast as I could. (Interestingly, using STUArray was a lot slower in practice than a simple functional implementation, even though the imperative solution is asymptotically faster in theory—my functional implementation does more than constant work per bracelet, though since bracelet sizes are typically quite small it doesn’t really matter very much. Of course it’s also quite possible that there are tricks to make the array version go faster that I don’t know about.) The result is released in the multiset-comb package; you can see the bracelet generation code here.
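For comparison with Sawada’s algorithms, here is the obvious inefficient approach sketched in Haskell (my own code; it filters all permutations down to canonical representatives). It is hopeless for large sizes but handy for testing:

```haskell
import Data.List (nub, permutations, sort)

-- A bracelet's canonical form: the lexicographically smallest element
-- among all rotations of the list and of its reversal.
canonical :: Ord a => [a] -> [a]
canonical xs = minimum [ r | ys <- [xs, reverse xs], r <- rotations ys ]
  where rotations ys = [ drop i ys ++ take i ys | i <- [0 .. length ys - 1] ]

-- Naive generation: keep only the permutations that are already in
-- canonical form, one per equivalence class.
bracelets :: Int -> [[Int]]
bracelets n = nub [ p | p <- permutations [0 .. n - 1], canonical p == p ]

main :: IO ()
main = do
  print (sort (bracelets 4))                 -- [[0,1,2,3],[0,1,3,2],[0,2,1,3]]
  print (map (length . bracelets) [1 .. 6])  -- [1,1,1,3,12,60]
```

The first line of output matches the enumerate example earlier in the post.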
The Ally Skills Tutorial teaches men simple, everyday ways to support women in their workplaces and communities. Participants learn techniques that work at the office, in classrooms, at conferences, and online. The skills we teach are relevant everywhere, including skills particularly relevant to open technology and culture communities. At the end of the tutorial, participants will feel more confident in speaking up to support women, be more aware of the challenges facing women in their workplaces and communities, and have closer relationships with the other participants.
This sounds super helpful—I suspect there is often a large gap between the extent to which I want to support women and the extent to which I actually know, practically, how to do so. The workshop will be taught by Valerie Aurora, Linux filesystem developer and Ada Initiative co-founder; I expect it will be high quality!
The setup is that there are $n$ (distinct) friends who can talk to each other on the phone. Only two people can talk at a time (no conference calls). The question is to determine how many different “configurations” there are. Not everyone has to talk, so a configuration consists of some subset of the friends arranged in (unordered) conversational pairs.
Warning: spoilers ahead! If you’d like to play around with this yourself (and it is indeed a nice, accessible combinatorics problem to play with), stop reading now. My goal in this post is to have fun applying some advanced tools to this (relatively) simple problem.
Let’s start by visualizing some configurations. In her post, Denise illustrated the complete set of configurations for $n = 4$, which I will visualize like this:
Notice how I’ve arranged them: in the first row is the unique configuration where no one is talking (yes, that counts). In the second row are the six possible configurations with just a single conversation. The last row has the three possible configurations with two conversations.
One good approach at this point would be to derive some recurrences. This problem does indeed admit a nice recurrence, but I will let you ponder it. Instead, let’s see if we can just “brute-force” our way to a general formula, using our combinatorial wits. Later I will demonstrate a much more principled, mechanical way to derive a general formula.
Let’s start by coming up with a formula for $C_{n,k}$, the number of configurations with $n$ people and $k$ conversations. The number of ways of choosing $k$ pairs out of a total of $n$ people is the multinomial coefficient $\binom{n}{2, 2, \dots, 2, n-2k} = \frac{n!}{(2!)^k (n-2k)!}$. However, that overcounts things: it actually distinguishes the first pair, second pair, and so on, but we don’t want to have any ordering on the pairs. So we have to divide by $k!$, the number of distinct orderings of the pairs. Thus,

$C_{n,k} = \frac{n!}{2^k\, k!\, (n-2k)!}.$
Let’s do a few sanity checks. First, when $k = 0$, we have $C_{n,0} = 1$. We can also try some other small numbers we’ve already enumerated by hand: for example, $C_{4,1} = 6$, and $C_{4,2} = 3$. So this seems to work.
For $n$ people, there can be at most $\lfloor n/2 \rfloor$ conversations. So, the total number of configurations is going to be

$C_n = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^k\, k!\, (n-2k)!}.$
We can use this to compute $C_n$ for the first few values of $n$: $1, 1, 2, 4, 10, 26, 76, \dots$
At this point we could look up the sequence 1,1,2,4,10,26,76 on the OEIS and find out all sorts of fun things: e.g. that we are also counting self-inverse permutations, i.e. involutions, that these numbers are also called “restricted Stirling numbers of the second kind”, some recurrence relations, etc., as well as enough references to keep us busy reading for a whole year.
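The closed formula is easy to check mechanically; here is a small sketch (my own names, not from any library) implementing it:

```haskell
-- conversations n k: configurations of n people with exactly k
-- conversations, i.e. n! / (2^k * k! * (n - 2k)!).
factorial :: Integer -> Integer
factorial m = product [1 .. m]

conversations :: Integer -> Integer -> Integer
conversations n k =
  factorial n `div` (2 ^ k * factorial k * factorial (n - 2 * k))

-- Total number of configurations: sum over all possible k.
configs :: Integer -> Integer
configs n = sum [ conversations n k | k <- [0 .. n `div` 2] ]

main :: IO ()
main = print (map configs [0 .. 6])  -- [1,1,2,4,10,26,76]
```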
We can describe configurations as elements of the combinatorial species $E \circ (E_2 + X)$. That is, a configuration is an unordered set ($E$) of ($\circ$) things, where each thing can either be an unordered pair ($E_2$) of people talking on the phone, or ($+$) a single person ($X$) who is not talking.
We can now use the Haskell species
library to automatically generate some counts and see whether they agree with our manual enumerations. First, some boilerplate setup:
ghci> :set -XNoImplicitPrelude
ghci> :m +NumericPrelude
ghci> :m +Math.Combinatorics.Species
Now we define the species of configurations:
ghci> let configurations = set `o` (set `ofSizeExactly` 2 + singleton)
We can ask the library to count the number of configurations for different :
ghci> take 10 (labelled configurations)
[1,1,2,4,10,26,76,232,764,2620]
Oh good, those numbers look familiar! Now, I wonder how many configurations there are for $n = 100$?
ghci> labelled configurations !! 100
24053347438333478953622433243028232812964119825419485684849162710512551427284402176
Yikes!
We can also use the library to generate exhaustive lists of configurations, and draw them using diagrams. For example, here are all configurations for . (If you want to see the code used to generate this diagram, you can find it here.)
And just for fun, let’s draw all configurations for :
Whee!
Finally, I want to show how to use the species definition given above and the theory of generating functions to (somewhat) mechanically derive a general formula for the number of configurations. (Hopefully it will end up being equivalent to the formula we came up with near the beginning of the post!) Of course, this is also what the species library is doing, but only numerically—we will do things symbolically.
First, note that we are counting labelled configurations (the friends are all distinct), so we want to consider exponential generating functions (egfs). Recall that the egf for a species $F$ is given by

$$F(x) = \sum_{n \geq 0} |F[n]| \frac{x^n}{n!},$$

that is, a (possibly infinite) formal power series where the coefficient of $x^n/n!$ is the number of distinct labelled $F$-structures of size $n$. In our case, we need

$$\mathcal{E}(x) = \sum_{n \geq 0} \frac{x^n}{n!} = e^x,$$

since there is exactly one set structure of any size, and

$$\mathcal{E}_2(x) = \frac{x^2}{2},$$

which is just the restriction of $\mathcal{E}(x)$ to only the $x^2$ term. Of course, we also have $\mathcal{X}(x) = x$. Putting this together, we calculate

$$(\mathcal{E} \circ (\mathcal{E}_2 + \mathcal{X}))(x) = e^{x^2/2 + x} = \sum_{j \geq 0} \frac{\left(\frac{x^2}{2} + x\right)^j}{j!} = \sum_{j \geq 0} \sum_{k=0}^{j} \frac{x^{j+k}}{2^k \, k! \, (j-k)!}.$$
Ultimately, we want something of the form $\sum_{n \geq 0} f_n \frac{x^n}{n!}$, so we’ll need to collect up like powers of $x$. To do that, we can do a bit of reindexing. Right now, the double sum is adding up a bunch of terms that can be thought of as making a triangle:
Each ordered pair $(j,k)$ in the triangle corresponds to a single term being added. Each column corresponds to a particular value of $j$, with $j$ increasing to the right. Within each column, $k$ goes from $0$ up to $j$.
The powers of $x$ in our double sum are given by $j + k$. If we draw in lines showing terms that have the same power of $x$, it looks like this:
So let’s choose a new variable $n$, defined by $n = j + k$. We can see that we will have terms for every $n \geq 0$. We will also keep the variable $k$ for our other index, and substitute $j = n - k$ to get rid of $j$. In other words, instead of adding up the triangle by columns, we are going to add it up by diagonals.
Previously we had $0 \leq k \leq j$; substituting $n - k$ for $j$, that now turns into $0 \leq k \leq n - k$. Adding $k$ to both sides and dividing by two yields $k \leq \lfloor n/2 \rfloor$ (we can round down since $k$ is an integer). Looking at the diagram above, this makes sense: the height of each diagonal line is indeed half its index. Rewriting our indices of summation and substituting $n - k$ for $j$, we now have:

$$e^{x^2/2 + x} = \sum_{n \geq 0} \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{x^n}{2^k \, k! \, (n-2k)!} = \sum_{n \geq 0} \left( \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^k \, k! \, (n-2k)!} \right) \frac{x^n}{n!}.$$
And hey, look at that! The coefficient of $x^n/n!$ is exactly what we previously came up with for $T(n)$. Math works!
HTTP package out of the guts of haxr, replace it with http-streams, and carefully sew everything back together around the edges. The result is that haxr now finally supports making XML-RPC calls via HTTPS, which in turn means that BlogLiterately once again works with WordPress, which no longer supports XML-RPC over HTTP. Happy blogging!
Well… I’m not so sure. What I do know is that the typical conversation around grade inflation frustrates me. At best, it often leaves many important assumptions unstated and unquestioned. Is grade inflation really bad? If so, why? What are the underlying assumptions and values that drive us to think of it in one way or another? At worst, the conversation is completely at the wrong level. Grade inflation is actually a symptom pointing at a much deeper question, one that gets at the heart of education and pedagogy: what do grades mean? Or, put another way, what do grades measure?
This will be a two-part series. In this first post, I consider the first question: is grade inflation bad? In most conversations I have been a part of, this is taken as given, but I think it deserves more careful thought. I don’t know of any reasons to think that grade inflation is good, but I also don’t buy many of the common arguments (often implied rather than explicitly stated) as to why it is bad; in this post I consider three common ones.
Just to make sure everyone is on the same page: by grade inflation I mean the phenomenon where average student grades are increasing over time, that is, the average student now receives higher grades than the average student of n years ago. (You could also think of it as the average value of a given grade going down over time.) This phenomenon is widespread in the US. I am only really familiar with the educational system in the US, so this post will of necessity be rather US-centric; I would be interested to hear about similarities and differences with other countries.
Let’s now consider some common arguments as to why grade inflation is bad.
This is not so much an “argument” as an attitude, and it goes something like this: “Back in MY day, a C really meant a C! These ungrateful, entitled young whippersnappers don’t understand the true value of grades…”
This is a caricature, of course, but I have definitely encountered variants of this attitude. This makes about as much sense to me as complaining “back in MY day, a dollar was really worth a dollar! And now my daughter is asking me for twenty dollars to go to a movie with her friends. TWENTY DOLLARS! These ungrateful, entitled young whippersnappers don’t understand the true value of money…” Nonsense, of course they do. It just costs $20 to go to a movie these days. A dollar is worth what it is now worth; a C is worth what it is now worth. Get over it. It’s not that students don’t understand “the true value of grades”, it’s just that the value of grades is different than it used to be.
There are a couple important caveats here: first, one can, of course, argue about what the value of a C ought to be, based on some ideas or assumptions about what grades (should) mean. I will talk about this at length in my next post. But you cannot blame students for not understanding your idea of what grades ought to mean! Second, it is certainly possible (even likely) that student attitudes towards grades have changed, and one can (and I do!) complain about those attitudes as compared to student attitudes in the past. But that is different than claiming that students don’t understand the value of grades.
If I may hazard a guess, I think what this often boils down to is that people blame grade inflation on student attitudes of entitlement. As a potential contributing factor to grade inflation (and insofar as we would like to teach students different attitudes), that is certainly worth thinking about. But grade inflation potentially being caused by something one dislikes is not an argument that grade inflation itself is bad.
Of course, there’s one important difference between money and grades: amounts of money have no upper limit, whereas grades are capped at A+. This brings us to what I often hear put forth as the biggest argument against grade inflation, that it compresses grades into a narrower and narrower band, squeezed from above by that highest possible A+. The problem with this, some argue, is that grade compression causes information to be lost. The “signal” of grades becomes noisier, and it becomes harder for, say, employers and grad schools to be able to distinguish between different students.
My first, more cynical reaction is this: well, cry me a river for those poor, poor employers and grad schools, who will now have to assess students on real accomplishments, skills, and personal qualities, or (more likely) find some other arbitrary measurement to use. Do we really think grades are such a high-quality signal in the first place? Do they really measure something important and intrinsic about a student? (More on this in my next post.) If the signal is noisy or arbitrary in the first place then compressing it really doesn’t matter that much.
Less cynically, let’s suppose the grade-signal really is that high-quality and important, and we are actually worried about the possibility of losing information. Consider the extreme situation, where grade inflation has progressed to such a degree that professors only give one of two possible grades: A (“outstandingly excellent”) or A+ (“superlatively superb”). An A- is so insultingly low that professors never give it (for fear of lawsuits, perhaps); for simplicity’s sake let’s suppose that no one ever fails, either. In this hypothetical scenario, at an institution like Williams where students take 32 courses, there are only 33 possible GPAs: you could get 32 A+’s, or one A and 31 A+’s, or two A’s and 30 A+’s… all the way down to getting all A’s (“straight-A student” means something rather different in this imaginary universe!).
But here’s the thing: I think 33 different GPAs would still be enough! I honestly don’t think companies or grad schools can meaningfully care about distinctions finer than having 33 different buckets of students. (If you think differently, I’d love to hear your argument.) If student GPAs are normally distributed, this even means that the top few buckets have much less than 1/33 of all the students. So if the top grad schools and companies want to only consider the top 1% of all students (or whatever), they can just look at the top bucket or two. You might say this is unfair for the students, but really, I can’t see how this would be any more or less fair than the current system.
Of course, under this hypothetical two-grade system, GPAs might not be normally distributed. For one thing, if grade inflation kept going, the distribution might become more and more skewed to the right, until, for example, half of all students were getting straight A+’s, or, in the theoretical limit, all students get only A+’s. But I really don’t think this would actually happen; I think you would see some regulating effects kick in far before this theoretical limit was reached. Professors would not actually be willing to give all A+’s (or even, for that matter, all A’s and A+’s).
The GPAs could also be very bimodal, if, for example, students are extremely consistent: a student who consistently scores in the top 40% of every class would get the same grades (all A+’s) as a student who consistently scores in the top 10%. However, I doubt this is how it would work (as any professor knows, “consistent” and “student” are a rare pairing). It would be interesting to actually work out what GPA distributions would result from various assumptions about student behavior.
The final argument against grade inflation that I sometimes hear goes like this: the problem is not so much that the average GPA is going up but simply that it is moving at all, which makes it harder for grad schools and employers to know how to calibrate their interpretations. But I don’t really buy this one either. The value of money is moving too, and yes, in some grand sense I suppose that makes it slightly harder for people to figure out how much things are worth. But somehow, everyone seems to manage just fine. I think employers and grad schools will manage just fine too. I don’t think GPAs are changing anywhere near fast enough for it to make much difference. And in any case, most of the time, the only thing employers and grad schools really care about is comparing the GPAs of students who graduated around the same time, in which case the absolute average GPA doesn’t matter at all. (One can make an argument about the difficulties caused by different schools having different average GPAs, but that is always going to be an issue, grade inflation or no.)
In the end, then, I am not so sure that grade inflation per se is such a terrible thing. However, it is well worth pondering the causes of grade inflation, and the deeper questions it leads to: what are grades? Why do we give them? What purposes do they serve, and what do they measure? I’ll take up these questions in a subsequent post.
Suppose someone hands you the following:
A Haskell function f :: (A, Bool) -> (B, Bool), where A and B are abstract types (i.e. their constructors are not exported, and you have no other functions whose types mention A or B).
A promise that the function f is injective, that is, no two values of (A, Bool) map to the same (B, Bool) value. (Thus (B, Bool) must contain at least as many inhabitants as (A, Bool).)
A list as :: [A], with a promise that it contains every value of type A exactly once, at a finite position.
Can you explicitly produce an injective function f' :: A -> B? Moreover, your answer should not depend on the order of elements in as.
It really seems like this ought to be possible. After all, if (B, Bool) has at least as many inhabitants as (A, Bool), then surely B must have at least as many inhabitants as A. But it is not enough to reason merely that some injection must exist; we have to actually construct one. This, it turns out, is tricky. As a first attempt, we might try f' a = fst (f (a, True)). That is certainly a function of type A -> B, but there is no guarantee that it is injective. There could be a1, a2 :: A which both map to the same b, that is, one maps to (b, False) and the other to (b, True). The picture below illustrates such a situation: (a1, True) and (a2, True) both map to b2. So the function f may be injective overall, but we can’t say much about f restricted to a particular Bool value.
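To make this failure concrete, here is a small invented instance (using Int as a stand-in for the abstract types, with made-up values for f; none of these particular values come from the post):

```haskell
-- A hypothetical injective function on (Int, Bool), with a ranging over {1,2}:
-- both (1,True) and (2,True) map to pairs whose first component is 2,
-- which is exactly the problematic situation described above.
f :: (Int, Bool) -> (Int, Bool)
f (1, True)  = (2, False)
f (1, False) = (1, False)
f (2, True)  = (2, True)
f (2, False) = (1, True)
f _          = error "outside the intended domain"

-- The naive first attempt: project out the B part of f (a, True).
naive :: Int -> Int
naive a = fst (f (a, True))

main :: IO ()
main = print (naive 1, naive 2)  -- a collision: naive is not injective
```

Checking all four outputs of f shows they are distinct, so f really is injective, yet naive sends both 1 and 2 to the same value.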
The requirement that the answer not depend on the order of as also makes things difficult. (Over in math-land, depending on a particular ordering of the elements in as would amount to the well-ordering principle, which is equivalent to the axiom of choice, which in turn implies the law of excluded middle—and as we all know, every time someone uses the law of excluded middle, a puppy dies. …I feel like I’m in one of those DirecTV commercials. “Don’t let a puppy die. Ignore the order of elements in as.”) Anyway, making use of the order of values in as, we could do something like the following:
For each a :: A:

- Look at the B values generated by f (a,True) and f (a,False). (Note that there might only be one distinct such B value.)
- If neither B value has been used so far, pick the one that corresponds to (a,True), and add the other one to a queue of available B values.
- Otherwise, pick the next available B value from the queue.

It is not too hard to show that this will always successfully result in a total function A -> B
, which is injective by construction. (One has to show that there will always be an available B value in the queue when you need it.) The only problem is that the particular function we get depends on the order in which we iterate through the A values. The above example illustrates this as well: if the A values are listed in the order a1, a2, then we first choose b2 for a1, and then something else for a2; if they are listed in the other order, we end up with b2 for a2 instead. Whichever value comes first “steals” b2, and then the other one takes whatever is left. We’d like to avoid this sort of dependence on order. That is, we want a well-defined algorithm which will yield a total, injective function A -> B, which is canonical in the sense that the algorithm yields the same function given any permutation of as.
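For reference, here is one way the order-dependent procedure above might be sketched in Haskell. The name assign and the concrete queue handling are my own; this is an illustration of the idea, not the canonical algorithm the rest of the post builds:

```haskell
import qualified Data.Set as S

-- Walk through the A values in the given order, assigning each a B value
-- as described above: 'used' tracks B values already handed out, and
-- 'queue' holds B values freed up along the way.
assign :: Ord b => ((a, Bool) -> (b, Bool)) -> [a] -> [(a, b)]
assign f = go S.empty []
  where
    go _ _ [] = []
    go used queue (a : as') =
      let bT = fst (f (a, True))
          bF = fst (f (a, False))
          candidates = if bT == bF then [bT] else [bT, bF]
          fresh = [ b | b <- candidates, b `S.notMember` used ]
      in case fresh of
           -- pick the (a,True) value; enqueue the other, if any
           (b : rest) -> (a, b) : go (S.insert b used) (queue ++ rest) as'
           -- both taken: fall back to the queue
           []         -> case queue of
                           (b : qs) -> (a, b) : go (S.insert b used) qs as'
                           []       -> error "no available B value"
```

The output plainly depends on the order of the input list, which is exactly the defect we want to eliminate.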
It is possible—you might enjoy puzzling over this a bit before reading on!
The above example is a somewhat special case. More generally, let [n] denote a canonical finite set of size n, and let A and B be arbitrary sets. Then, given an injection A × [n] ↪ B × [n], is it possible to effectively (that is, without excluded middle or the axiom of choice) compute an injection A ↪ B?
Translating down to the world of numbers representing set cardinalities—natural numbers if A and B are finite, or cardinal numbers in general—this just says that if a · n ≤ b · n then a ≤ b. This statement about numbers is obviously true, so it would be nice if we could say something similar about sets, so that this fact about numbers and inequalities can be seen as just a “shadow” of a more general theorem about sets and injections.
As hinted in the introduction, the interesting part of this problem is really the word “effectively”. Using the Axiom of Choice/Law of Excluded Middle makes the problem a lot easier, but either fails to yield an actual function that we can compute with, instead merely guaranteeing the existence of such a function, or gives us a function that depends on a particular ordering of A.
Apparently this has been a longstanding open question, recently answered in the affirmative by Peter Doyle and Cecil Qiu in their paper Division By Four. It’s a really great paper: they give some fascinating historical context for the problem, and explain their algorithm (which is conceptually not all that difficult) using an intuitive analogy to a card game with certain rules. (It is not a “game” in the usual sense of having winners and losers, but really just an algorithm implemented with “players” and “cards”. In fact, you could get some friends together and actually perform this algorithm in parallel (if you have sufficiently nerdy friends).) Richard Schwartz’s companion article is also great fun and easy to follow (you should read it first).
Here’s a quick introduction to the way Doyle, Qiu, and Schwartz use a card game to formulate their algorithm. (Porting this framework to use “thrones” and “claimants” instead of “spots” and “cards” is left as an exercise to the reader.)
The finite set [n] is to be thought of as a set of n suits. The set A will correspond to a set of players, and B to a set of ranks or values (for example, Ace, 2, 3, …). In that case B × [n] corresponds to a deck of cards, each card having a rank and a suit; and we can think of A × [n] in terms of each player having in front of them a number of “spots” or “slots”, each labelled by a suit. An injection A × [n] ↪ B × [n] is then a particular “deal” where one card has been dealt into each of the spots in front of the players. (There may be some cards left over in the deck, but the fact that the function is total means every spot has a card, and the fact that it is injective is encoded in the common-sense idea that a given card cannot be in two spots at once.) For example, the example function from before:
corresponds to the following deal:
Here each column corresponds to one player’s hand, and the rows correspond to suit spots (with the spade spots on top and the heart spots beneath). We have mapped b1, b2, b3 to the ranks A, 2, 3, and mapped T and F to Spades and Hearts respectively. The spades are also highlighted in green, since later we will want to pay particular attention to what is happening with them. You might want to take a moment to convince yourself that the deal above really does correspond to the example function from before.
Of course, doing everything effectively means we are really talking about computation. Doyle and Qiu do talk a bit about computation, but it’s still pretty abstract, in the sort of way that mathematicians talk about computation, so I thought it would be interesting to actually implement the algorithm in Haskell.
The algorithm “works” for infinite sets, but only (as far as I understand) if you consider some notion of transfinite recursion. It still counts as “effective” in math-land, but over here in programming-land I’d like to stick to (finitely) terminating computations, so we will stick to finite sets A and B.
First, some extensions and imports. Nothing too controversial.
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE GADTs #-}
> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> {-# LANGUAGE KindSignatures #-}
> {-# LANGUAGE RankNTypes #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE StandaloneDeriving #-}
> {-# LANGUAGE TypeOperators #-}
>
> module PanGalacticDivision where
>
> import Control.Arrow (second, (&&&), (***))
> import Data.Char
> import Data.List (find, findIndex, transpose)
> import Data.Maybe
>
> import Diagrams.Prelude hiding (universe, value)
> import Diagrams.Backend.Rasterific.CmdLine
> import Graphics.SVGFonts
We’ll need some standard machinery for type-level natural numbers. Probably all this stuff is in a library somewhere but I couldn’t be bothered to find out. Pointers welcome.
> -- Standard unary natural number type
> data Nat :: * where
> Z :: Nat
> Suc :: Nat -> Nat
>
> type One = Suc Z
> type Two = Suc One
> type Three = Suc Two
> type Four = Suc Three
> type Six = Suc (Suc Four)
> type Eight = Suc (Suc Six)
> type Ten = Suc (Suc Eight)
> type Thirteen = Suc (Suc (Suc Ten))
>
> -- Singleton Nat-indexed natural numbers, to connect value-level and
> -- type-level Nats
> data SNat :: Nat -> * where
> SZ :: SNat Z
> SS :: Natural n => SNat n -> SNat (Suc n)
>
> -- A class for converting type-level nats to value-level ones
> class Natural n where
> toSNat :: SNat n
>
> instance Natural Z where
> toSNat = SZ
>
> instance Natural n => Natural (Suc n) where
> toSNat = SS toSNat
>
> -- A function for turning explicit nat evidence into implicit
> natty :: SNat n -> (Natural n => r) -> r
> natty SZ r = r
> natty (SS n) r = natty n r
>
> -- The usual canonical finite type. Fin n has exactly n
> -- (non-bottom) values.
> data Fin :: Nat -> * where
> FZ :: Fin (Suc n)
> FS :: Fin n -> Fin (Suc n)
>
> finToInt :: Fin n -> Int
> finToInt FZ = 0
> finToInt (FS n) = 1 + finToInt n
>
> deriving instance Eq (Fin n)
Next, a type class to represent finiteness. For our purposes, a type a is finite if we can explicitly list its elements. For convenience we throw in decidable equality as well, since we will usually need that in conjunction. Of course, we have to be careful: although we can get a list of elements for a finite type, we don’t want to depend on the ordering. We must ensure that the output of the algorithm is independent of the order of elements.^{1} This is in fact true, although somewhat nontrivial to prove formally; I mention some of the intuitive ideas behind the proof below.
While we are at it, we give Finite instances for Fin n and for products of finite types.
> class Eq a => Finite a where
> universe :: [a]
>
> instance Natural n => Finite (Fin n) where
> universe = fins toSNat
>
> fins :: SNat n -> [Fin n]
> fins SZ = []
> fins (SS n) = FZ : map FS (fins n)
>
> -- The product of two finite types is finite.
> instance (Finite a, Finite b) => Finite (a,b) where
> universe = [(a,b) | a <- universe, b <- universe]
Now we come to the division algorithm proper. The idea is that panGalacticPred turns an injection A × [n+1] ↪ B × [n+1] into an injection A × [n] ↪ B × [n], and then we use induction on n to repeatedly apply panGalacticPred until we get an injection A ↪ B.
> panGalacticDivision
> :: forall a b n. (Finite a, Eq b)
> => SNat n -> ((a, Fin (Suc n)) -> (b, Fin (Suc n))) -> (a -> b)
In the base case, we are given an injection A × [1] ↪ B × [1], so we just pass a unit value in along with the a and project out the b.
> panGalacticDivision SZ f = \a -> fst (f (a, FZ))
In the inductive case, we call panGalacticPred and recurse.
> panGalacticDivision (SS n') f = panGalacticDivision n' (panGalacticPred n' f)
And now for the real meat of the algorithm, the panGalacticPred function. The idea is that we swap outputs around until the function has the property that every output of the form (b, FZ) corresponds to an input also of the form (a, FZ). That is, using the card game analogy, every spade in play should be in the leftmost spot (the spades spot) of some player’s hand (some spades can also be in the deck). Then simply dropping the leftmost card in everyone’s hand (and all the spades in the deck) yields a game with no spades, that is, an injection from A × {1, …, n+1} to B × {1, …, n+1}. Taking predecessors everywhere (i.e. “hearts are the new spades”) yields the desired injection A × [n+1] ↪ B × [n+1].
We need a Finite constraint on a so that we can enumerate all possible inputs to the function, and an Eq constraint on b so that we can compare functions for extensional equality (we iterate until reaching a fixed point). Note that whether two functions are extensionally equal does not depend on the order in which we enumerate their inputs, so far validating my claim that nothing depends on the order of elements returned by universe.
> panGalacticPred
> :: (Finite a, Eq b, Natural n)
> => SNat n
> -> ((a, Fin (Suc (Suc n))) -> (b, Fin (Suc (Suc n))))
> -> ((a, Fin (Suc n)) -> (b, Fin (Suc n)))
We construct a function f' which is related to f by a series of swaps, and has the property that it only outputs FZ when given FZ as an input. So given (a,i) we can call f' on (a, FS i), which is guaranteed to give us something of the form (b, FS j). Thus it is safe to strip off the FS and return (b, j) (though the Haskell type checker most certainly does not know this, so we just have to tell it to trust us).
> panGalacticPred n f = \(a,i) -> second unFS (f' (a, FS i))
> where
> unFS :: Fin (Suc n) -> Fin n
> unFS FZ = error "impossible!"
> unFS (FS i) = i
To construct f' we iterate a certain transformation until reaching a fixed point. For finite sets A and B this is guaranteed to terminate, though it is certainly not obvious from the Haskell code. (Encoding this in Agda so that it is accepted by the termination checker would be a fun (?) exercise.)
One round of the algorithm consists of two phases called “shape up” and “ship out” (to be described shortly).
> oneRound = natty n $ shipOut . shapeUp
>
> -- iterate 'oneRound' beginning with the original function...
> fs = iterate oneRound f
> -- ... and stop when we reach a fixed point.
> f' = fst . head . dropWhile (uncurry (=/=)) $ zip fs (tail fs)
>     f1 =/= f2 = any (\x -> f1 x /= f2 x) universe
Recall that a “card” is a pair of a value and a suit; we think of B as the set of values and [n] as the set of suits.
> type Card v s = (v, s)
>
> value :: Card v s -> v
> value = fst
>
> suit :: Card v s -> s
> suit = snd
Again, there are a number of players (one for each element of A), each of which has a “hand” of cards. A hand has a number of “spots” for cards, each one labelled by a different suit (which may not have any relation to the actual suit of the card in that position).
> type PlayerSpot p s = (p, s)
> type Hand v s = s -> Card v s
A “game” is an injective function from player spots to cards. Of course, the type system is not enforcing injectivity here.
> type Game p v s = PlayerSpot p s -> Card v s
Some utility functions. First, a function to project out the hand of a given player.
> hand :: p -> Game p v s -> Hand v s
> hand p g = \s -> g (p, s)
A function to swap two cards, yielding a bijection on cards.
> swap :: (Eq s, Eq v) => Card v s -> Card v s -> (Card v s -> Card v s)
> swap c1 c2 = f
> where
> f c
> | c == c1 = c2
> | c == c2 = c1
> | otherwise = c
leftmost finds the leftmost card in a player’s hand which has a given suit.
> leftmost :: Finite s => s -> Hand v s -> Maybe s
> leftmost targetSuit h = find (\s -> suit (h s) == targetSuit) universe
playRound abstracts out a pattern that is used by both shapeUp and shipOut. The first argument is a function which, given a hand, produces a function on cards; that is, based on looking at a single hand, it decides how to swap some cards around.^{2} playRound then applies that function to every hand, and composes together all the resulting permutations.
Note that playRound has both Finite s and Finite p constraints, so we should think about whether the result depends on the order of elements returned by any call to universe—I claimed it does not. Finite s corresponds to suits/spots, which corresponds to [n] in the original problem formulation; [n] explicitly has a canonical ordering, so this is not a problem. The Finite p constraint, on the face of it, is more problematic. We will have to think carefully about each of the rounds implemented in terms of playRound and make sure they do not depend on the order of players. Put another way, it should be possible for all the players to take their turn simultaneously.
> playRound :: (Finite s, Finite p, Eq v) => (Hand v s -> Card v s -> Card v s) -> Game p v s -> Game p v s
> playRound withHand g = foldr (.) id swaps . g
> where
> swaps = map (withHand . flip hand g) players
> players = universe
Finally, we can describe the “shape up” and “ship out” phases, beginning with “shape up”. A “bad” card is defined as one having the lowest suit; make sure every hand with any bad cards has one in the leftmost spot (by swapping the leftmost bad card with the card in the leftmost spot, if necessary).
> shapeUp :: (Finite s, Finite p, Eq v) => Game p v s -> Game p v s
> shapeUp = playRound shapeUp1
> where
> badSuit = head universe
> shapeUp1 theHand =
> case leftmost badSuit theHand of
> Nothing -> id
> Just badSpot -> swap (theHand badSuit) (theHand badSpot)
And now for the “ship out” phase. Send any “bad” cards not in the leftmost spot somewhere else, by swapping with a replacement, namely, the card whose suit is the same as the suit of the spot, and whose value is the same as the value of the bad card in the leftmost spot. The point is that bad cards in the leftmost spot are OK, since we will eventually just ignore the leftmost spot. So we have to keep shipping out bad cards not in the leftmost spot until they all end up in the leftmost spot. For some intuition as to why this is guaranteed to terminate, consult Schwartz; note that columns tend to acquire more and more cards that have the same rank as a spade in the top spot (which never moves).
> shipOut :: (Finite s, Finite p, Eq v) => Game p v s -> Game p v s
> shipOut = playRound shipOutHand
> where
> badSuit = head universe
> spots = universe
> shipOutHand theHand = foldr (.) id swaps
> where
> swaps = map (shipOut1 . (theHand &&& id)) (drop 1 spots)
> shipOut1 ((_,s), spot)
> | s == badSuit = swap (theHand spot) (value (theHand badSuit), spot)
> | otherwise = id
And that’s it! Note that both shapeUp and shipOut are implemented by composing a bunch of swaps; in fact, in both cases, all the swaps commute, so the order in which they are composed does not matter. (For proof, see Schwartz.) Thus, the result is independent of the order of the players (i.e. the set A).
Enough code, let’s see an example! This example is taken directly from Doyle and Qiu’s paper, and the diagrams are being generated literally (literately?) by running the code in this blog post. Here’s the starting configuration:
Again, the spades are all highlighted in green. Recall that our goal is to get them all to be in the first row, but we have to do it in a completely deterministic, canonical way. After shaping up, we have:
Notice how the 6, K, 5, A, and 8 of spades have all been swapped to the top of their column. However, there are still spades which are not at the top of their column (in particular the 10, 9, and J) so we are not done yet.
Now, we ship out. For example, the 10 of spades is in the diamonds position in the column with the Ace of spades, so we swap it with the Ace of diamonds. Similarly, we swap the 9 of spades with the Queen of diamonds, and the Jack of spades with the 4 of hearts.
Shaping up does nothing at this point so we ship out again, and then continue to alternate rounds.
In the final deal above, all the spades are at the top of a column, so there is an injection from the set of all non-spade spots to the deck of cards with all spades removed. This example was, I suspect, carefully constructed so that none of the spades get swapped out into the undealt portion of the deck, and so that we end up with only spades in the top row. In general, we might end up with some non-spades also in the top row, but that’s not a problem. The point is that ignoring the top row gets rid of all the spades.
Anyway, I hope to write more about some “practical” examples and about what this has to do with combinatorial species, but this post is long enough already. Doyle and Qiu also describe a “short division” algorithm (the above is “long division”) that I hope to explore as well.
For completeness, here’s the code I used to represent the example game above, and to render all the card diagrams (using diagrams 1.3).
> type Suit = Fin
> type Rank = Fin
> type Player = Fin
>
> readRank :: SNat n -> Char -> Rank n
> readRank n c = fins n !! (fromJust $ findIndex (==c) "A23456789TJQK")
>
> readSuit :: SNat n -> Char -> Suit n
> readSuit (SS _) 'S' = FZ
> readSuit (SS (SS _)) 'H' = FS FZ
> readSuit (SS (SS (SS _))) 'D' = FS (FS FZ)
> readSuit (SS (SS (SS (SS _)))) 'C' = FS (FS (FS FZ))
>
> readGame :: SNat a -> SNat b -> SNat n -> String -> Game (Player a) (Rank b) (Suit n)
> readGame a b n str = \(p, s) -> table !! finToInt p !! finToInt s
>   where
>     table = transpose . map (map readCard . words) . lines $ str
>     readCard [r,s] = (readRank b r, readSuit n s)
>
> -- Example game from Doyle & Qiu
> exampleGameStr :: String
> exampleGameStr = unlines
>   [ "4D 6H QD 8D 9H QS 4C AD 6C 4S"
>   , "JH AH 9C 8H AS TC TD 5H QC JS"
>   , "KC 6S 4H 6D TS 9S JC KD 8S 8C"
>   , "5C 5D KS 5S TH JD AC QH 9D KH"
>   ]
>
> exampleGame :: Game (Player Ten) (Rank Thirteen) (Suit Four)
> exampleGame = readGame toSNat toSNat toSNat exampleGameStr
>
> suitSymbol :: Suit n -> String
> suitSymbol = (:[]) . ("♠♥♦♣"!!) . finToInt -- Huzzah for Unicode
>
> suitDia :: Suit n -> Diagram B
> suitDia = (suitDias!!) . finToInt
>
> suitDias = map mkSuitDia (fins (toSNat :: SNat Four))
> mkSuitDia s = text' (suitSymbol s) # fc (suitColor s) # lw none
>
> suitColor :: Suit n -> Colour Double
> suitColor n
>   | finToInt n `elem` [0,3] = black
>   | otherwise               = red
>
> rankStr :: Rank n -> String
> rankStr n = rankStr' (finToInt n + 1)
>   where
>     rankStr' 1 = "A"
>     rankStr' i | i <= 10   = show i
>                | otherwise = ["JQK" !! (i - 11)]
>
> text' t = stroke (textSVG' (TextOpts lin INSIDE_H KERN False 1 1) t)
>
> renderCard :: (Rank b, Suit n) -> Diagram B
> renderCard (r, s) = mconcat
>   [ mirror label
>   , cardContent (finToInt r + 1)
>   , back
>   ]
>   where
>     cardWidth   = 2.25
>     cardHeight  = 3.5
>     cardCorners = 0.1
>     mirror d = d <> d # rotateBy (1/2)
>     back  = roundedRect cardWidth cardHeight cardCorners # fc white
>           # lc (case s of { FZ -> green; _ -> black })
>     label = vsep 0.1 [text' (rankStr r), text' (suitSymbol s)]
>           # scale 0.6 # fc (suitColor s) # lw none
>           # translate ((-0.9) ^& 1.5)
>     cardContent n
>       | n <= 10   = pips n
>       | otherwise = face n # fc (suitColor s) # lw none
>                   # sized (mkWidth (cardWidth * 0.6))
>     pip = suitDia s # scale 1.1
>     pips 1  = pip # scale 2
>     pips 2  = mirror (pip # up 2)
>     pips 3  = pips 2 <> pip
>     pips 4  = mirror (pair pip # up 2)
>     pips 5  = pips 4 <> pip
>     pips 6  = mirror (pair pip # up 2) <> pair pip
>     pips 7  = pips 6 <> pip # up 1
>     pips 8  = pips 6 <> mirror (pip # up 1)
>     pips 9  = mirror (pair (pip # up (2/3) <> pip # up 2)) <> pip # up (case finToInt s of {1 -> -0.1; 3 -> 0; _ -> 0.1})
>     pips 10 = mirror (pair (pip # up (2/3) <> pip # up 2) <> pip # up (4/3))
>     pips _  = mempty
>     up n = translateY (0.5*n)
>     pair d = hsep 0.4 [d, d] # centerX
>     face 11 = squares # frame 0.1
>     face 12 = loopyStar
>     face 13 = burst # centerXY
>     squares
>       = strokeP (mirror (square 1 # translate (0.2 ^& 0.2)))
>       # fillRule EvenOdd
>     loopyStar
>       = regPoly 7 1
>       # star (StarSkip 3)
>       # pathVertices
>       # map (cubicSpline True)
>       # mconcat
>       # fillRule EvenOdd
>     burst
>       = [(1,5), (1,-5)] # map r2 # fromOffsets
>       # iterateN 13 (rotateBy (-1/13))
>       # mconcat # glueLine
>       # strokeLoop
> renderGame :: (Natural n, Natural a) => Game (Player a) (Rank b) (Suit n) -> Diagram B
> renderGame g = hsep 0.5 $ map (\p -> renderHand p $ hand p g) universe
>
> renderHand :: Natural n => Player a -> Hand (Rank b) (Suit n) -> Diagram B
> renderHand p h = vsep 0.2 $ map (renderCard . h) universe
If we could program in Homotopy Type Theory, we could make this very formal by using the notion of cardinal-finiteness developed in my dissertation (see section 2.4).↩
In practice this function on cards will always be a permutation, though the Haskell type system is not enforcing that at all. An early version of this code used the Iso type from lens, but it wasn't really paying its way.↩
Polynomial Functors Constrained by Regular Expressions
Here’s the 2-minute version: certain operations or restrictions on functors can be described by regular expressions, where the elements of the alphabet correspond to type arguments. The idea is to restrict to only those structures for which an inorder traversal yields a sequence of types matching the regular expression. For example, (aa)* gives you even-size things; a*ha* gives you the derivative (the structure has a bunch of values of type a, a single hole of type h, and then more values of type a); and b*ha* gives you the dissection.
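To give a small worked illustration of the derivative case (my own sketch, not an excerpt from the paper): take ordinary lists, with generating function $L(a) = 1/(1-a)$. A list structure matching $a^*\,h\,a^*$ is a prefix of $a$'s, one hole $h$, and a suffix of $a$'s, so

```latex
% Lists constrained by the regular expression a* h a*:
L_{a^* h a^*}(a, h)
  = \frac{1}{1-a} \cdot h \cdot \frac{1}{1-a}
  = \frac{h}{(1-a)^2}
  = h \cdot \frac{\partial}{\partial a}\left(\frac{1}{1-a}\right),
```

which is exactly $h$ times the derivative of the list functor, as promised.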
The punchline is that we show how to use the machinery of semirings, finite automata, and some basic matrix algebra to automatically derive an algebraic description of any functor constrained by any regular expression. This gives a nice unified way to view differentiation and dissection; we also draw some connections to the theory of divided differences.
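To give a flavor of the automaton-plus-matrix-algebra idea (a toy sketch of the flavor only, not the paper's actual semiring construction): the regular expression (aa)* is recognized by a two-state automaton that flips between its states on each input symbol, and powers of its transition matrix tell you which string lengths are accepted.

```haskell
-- 2x2 transition matrix of the two-state automaton for (aa)*:
-- reading an 'a' moves state 0 (accepting) <-> state 1.
type Mat = ((Integer, Integer), (Integer, Integer))

mmul :: Mat -> Mat -> Mat
mmul ((a,b),(c,d)) ((e,f),(g,h)) =
  ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

-- Number of length-n strings over the one-letter alphabet {a}
-- matching (aa)*: the (0,0) entry of the n-th matrix power,
-- i.e. 1 when n is even and 0 when n is odd.
accepted :: Int -> Integer
accepted n = fst (fst (foldr mmul identity (replicate n m)))
  where
    m        = ((0,1),(1,0))
    identity = ((1,0),(0,1))
```

Replacing these integer entries by arbitrary semiring elements (types, in the paper's setting) is what lets the same matrix machinery produce algebraic descriptions of constrained functors.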
I’m still open to discussion, suggestions, typo fixes, etc., though at this point they won’t make it into the proceedings. There’s certainly a lot more that could be said or ways this could be extended further.