Let $M$ be a monoid, and let $M^*$ denote the subset of elements of $M$ which actually have an inverse. Then it is not hard to show that $M^*$ is a group: the identity is its own inverse and hence is in $M^*$; it is closed under the monoid operation since if $a$ and $b$ have inverses then so does $ab$ (namely, $b^{-1}a^{-1}$); and clearly the inverse of every element in $M^*$ is also in $M^*$, because being an inverse also implies having one.
Now let $X = \{ a + b\sqrt{3} \mid a, b \in \mathbb{Z}_3 \}$, where the operation is multiplication, but the coefficients $a$ and $b$ are reduced modulo 3. For example, $(2 + \sqrt{3})(2 + \sqrt{3}) = 4 + 4\sqrt{3} + 3 = 1 + \sqrt{3}$. This does turn out to be associative, and is clearly commutative; and $1$ is the identity. I wrote a little program to see which elements have inverses, and it turns out that the three elements with $a = 0$ do not, but the other six do. So this is an Abelian group of order 6; but there’s only one such group, namely, the cyclic group $\mathbb{Z}_6$. And, sure enough, $X^*$ turns out to be generated by $2 + \sqrt{3}$ and $2 + 2\sqrt{3}$.
I had never been to Vancouver before; it seems like a beautiful and fun city. One afternoon I skipped all the talks and went for a long hike—I ended up walking around the entire perimeter of the Stanley Park seawall, which was gorgeous. The banquet was at the (really cool) aquarium—definitely the first time I have eaten dinner while being watched by an octopus.
Instead of staying in the conference hotel, four of us (me, Ryan Yates, Ryan Trinkle, and Michael Sloan) rented an apartment through AirBnB.^{1} The apartment was really great: it ended up being cheaper per person than sharing two hotel rooms, and it was a lot of fun to have a comfortable place to hang out in the evenings—where we could sit around in our pajamas, and talk, or write code, or whatever, without having to be “on”.
I met some new people, including Aahlad Gogineni from Tufts (along with another Tufts student whose name I unfortunately forget); Zac Slade and Boyd Smith from my new home state of Arkansas; and some folks from Vancouver whose names I am also blanking on at the moment. I also met in person, for the first time, a few people with whom I had previously only communicated electronically, like Rein Heinrich and Chris Smith.
I also saw lots of old friends—way too many to list. It once again reminded me how thankful I am to be part of such a great community. Of course, the community is also far from perfect; towards that end I really enjoyed and appreciated the ally skills tutorial taught by Valerie Aurora (which probably deserves its own post).
Here are just a few of my favorite talks:
I can’t really say the tribute to Paul Hudak was one of my “favorites”, since I would have much preferred to have Paul still with us instead! But I thought John Hughes and John Peterson did a great job. Paul will live on through the many, many people he has loved and inspired.
The FARM keynote by Fabienne Serriere was wonderful: funny, erudite, astounding, and inspiring.
I really enjoyed Mary Sheeran’s keynote, Hardware Design and Functional Programming: Still Interesting after All These Years. She did a great job of presenting some of the history and current and future challenges of the area in a way that was accessible and engaging.
Kenny Foner’s talk, Getting a Quick Fix on Comonads, was fantastic.^{2}
Dan Piponi’s presentation of his Moodler project was a lot of fun. I love his use of digital technology to enable, rather than move away from, an analog/physical interface.
I had a lot of great discussions relating to diagrams. For example:
I talked with Alan Zimmerman about using his and Matthew Pickering’s great work on ghc-exactprint with an eye towards shipping future diagrams releases along with an automated refactoring tool for updating users’ diagrams code.
After talking a bit with Michael Sloan I got a much better sense for the ways stack can support our development and CI testing process.
I had a lot of fun talking with Ryan Yates about various things, including projecting diagrams from 3D into 2D, and reworking the semantics and API for diagrams’ paths and trails to be more elegant and consistent. We gave a presentation at FARM which seemed to be well-received.
I got another peek at how well Idris is coming along, including a few personal demonstrations from David Christiansen (thanks David!). I am quite impressed, and plan to look into using it during the last few weeks of my functional programming course this spring (in the past I have used Agda).
If I had written this as soon as I got back, I probably could have remembered a lot more; oh well. All in all, a wonderful week, and I’m looking forward to Japan next year!
Yes, I know that hotel bookings help pay for the conference, and I admit to feeling somewhat conflicted about this.↩
I asked him afterwards how he made the great animations in his slides, and sadly it seems he tediously constructed them using PowerPoint. Someday, it will be possible to do this with diagrams!↩
I prepared three questions for the exam. The first was fairly simple (“explain algorithm X and analyze its time complexity”) and I actually told the students ahead of time what it would be—to help them feel more comfortable and prepared. The other questions were a bit more open-ended:
The second question was of the form “I want to store X information and do operations Y and Z on it. What sorts of data structure(s) might you use, and what would be the tradeoffs?” There were then a couple rounds of “OK, now I want to add another operation W. How does that change your analysis?” In answering this I expected them to deploy metrics like code complexity and time and memory usage to compare different data structures. I wanted to see them think about a lot of the different data structures we had discussed over the course of the semester and their advantages and disadvantages at a high level.
The final question was of the form “Here is some code. What does it do? What is its time complexity? Now please design a more efficient version that does the same thing.” With some students there was enough time to have them actually write code, with other students I just had them talk through the design of an algorithm. This question got more at their ability to design and analyze appropriate algorithms on data structures. The algorithm I asked them to develop was not something they had seen before, but it was similar to other things they had seen, put together in a new way.
Overall I was happy with the questions and the quality of the responses they elicited. If I do this again I would use similar sorts of questions.
You might well be wondering how long all of this took. I had about 30 students. I planned for the exam to take 30 minutes, and blocked out 45-minute chunks of time (to allow time for transitioning and for the exam to go a bit over 30 minutes if necessary; in practice the exams always went at least 40 minutes and I was scrambling at the end to jot down final thoughts before the next students showed up). I allowed them to choose whether to come in by themselves or with a partner (more on this later). As seems typical, about 1/3 of them chose to come by themselves, and the other 2/3 in pairs, for a total of about 20 exam slots. 20 slots at 45 minutes per slot comes out to 15 hours, or 3 hours per day for a week. This might sound like a lot, but if you compare it to the time required for a traditional written exam it compares quite favorably. First of all, I spent only two or three hours preparing the exam, whereas I estimate I would easily spend 5 or 10 hours preparing a written exam—a written exam has to be very precise in explaining what is wanted and in trying to anticipate potential questions and confusions. When you are asking the questions in person, it is easy to just clear up these confusions as they arise. Second, I was mostly grading students during their exam (more on this in the next section) so that by about five minutes after the end of their slot I had their exam completely graded. With a written exam, I could easily have spent at least 15 hours just grading all the exams.
So overall, the oral exam took up less of my time, and I can tell you, hands down, that my time was spent much more enjoyably than it would have been with a written exam. It was really fun to have each student come into my office, to get a chance to talk with them individually (or as a pair) and see what they had learned. It felt like a fitting end to the semester.
In order to assess the students, I prepared a detailed rubric beforehand, which was really critical. With a written exam you can just give the exam and then later come up with a rubric when you go to grade them (although I think even written exams are usually improved by coming up with a rubric beforehand, as part of the exam design process—it helps you to analyze whether your exam is really assessing the things you want it to). For an oral exam, this is impossible: there is no way to remember all of the responses that each student gives, and even if you write down a bunch of notes during or after each exam, you would probably find later that you didn’t write down everything that you should have.
In any case, it worked pretty well to have a rubric in front of me, where I could check things off or jot down quick notes in real time.
People are often surprised when I say that I allowed the students to come in pairs. My reasons were as follows:
Overall I was really happy with the result. Many of the students had been working with a particular partner on their labs for the whole semester and came to the exam with that same partner. For quite a few pairs this obviously worked well for them: it was really fun to watch the back-and-forth between them as they suggested different ideas, debated, corrected each other, and occasionally even seemed to forget that I was in the room.
One might worry about mismatched pairs, where one person does all of the talking and the other is just along for the ride. I only had this happen to some extent with one or two pairs. I told all the students up front that I would take points off in this sort of situation (I ended up taking off 10%). In the end this almost certainly meant that one member of the pair still ended up with a higher grade than they would have had they taken the exam individually. I decided I just didn’t care. I imagine I might rethink this for an individual class where there were many of these sorts of pairings going on during the semester—but in that case I would also try to do something about it before the final exam.
Another interesting social aspect of the process was figuring out what to do when students were floundering. One explicit thing one can do is to offer a hint in exchange for a certain number of points off, but I only ended up using this explicit option a few times. More often, after the right amount of time, I simply guided them on to the next part, either by suggesting that we move on in the interest of time, or by giving them whatever part of the answer they needed to move on to the next part of the question. I then took off points appropriately in my grading.
It was difficult figuring out how to verbally respond to students: on the one hand, stony-faced silence would be unnatural and unnerving; on the other hand, responding enthusiastically when they said something correct would give too much away (i.e. by the absence of such a response when they said something incorrect). As the exams went on I got better (I think) at giving interested-yet-non-committal sorts of responses that encouraged the students but didn’t give too much away. But I still found this to be one of the most perplexing challenges of the whole process.
One might wonder how much of the material from an entire semester can really be covered in a 30-minute conversation. Of course, you most certainly cannot cover every single detail. But you can actually cover quite a lot of the important ideas, along with enough details to get a sense for whether a student understands the details or not. In the end, after all, I don’t care whether a student remembers all the details from my course. Heck, I don’t even remember all the details from my course. But I care a great deal about whether they remember the big ideas, how the details fit, and how to re-derive or look up the details that they have forgotten. Overall, I am happy with the way the exam was able to cover the high points of the syllabus and to test students’ grasp of its breadth.
My one regret, content-wise, is that with only 30 minutes, it’s not really possible to put truly difficult questions on the exam—the sorts of questions that students might have to wrestle with for ten or twenty minutes before getting a handle on them.
Would I do this again? Absolutely, given the right circumstances. But there are probably a few things I would change or experiment with. Here are a few off the top of my head:
Again, I’m happy to answer questions in the comments or by email. If you are inspired to also try giving an oral exam, let me know how it goes!
As far as possible, I have tried to arrange the order so that each video only depends on concepts from earlier ones. (If you have any suggestions for improving the ordering, I would love to hear them!) Along with each video you can also find my cryptic notes; I make no guarantee that they will be useful to anyone (even me!), but hopefully they will at least give you an idea of what is in each video.
If and when they post any new videos (pretty please?) I will try to keep it updated.
The species package now has support for bracelets, i.e. equivalence classes of lists up to rotation and reversal. I show some examples of their use and then explain the (very interesting!) mathematics behind their implementation.
I recently released a new version of my species package which adds support for the species of bracelets. A bracelet is a (nonempty) sequence of items which is considered equivalent up to rotation and reversal. For example, the two structures illustrated below are considered equivalent as bracelets, since you can transform one into the other by a rotation and a flip:
In other words, a bracelet has the same symmetry as a regular polygon—that is, its symmetry group is the dihedral group $D_{2n}$. (Actually, this is only true for $n \geq 3$—I’ll say more about this later.)
Bracelets came up for me recently in relation to a fun side project (more on that soon), and I am told they also show up in applications in biology and chemistry (for example, bracelet symmetry shows up in molecules with cycles, which are common in organic chemistry). There was no way to derive the species of bracelets from what was already in the library, so I added them as a new primitive.
Let’s see some examples (later I discuss how they work). First, we set some options and imports.
ghci> :set -XNoImplicitPrelude
ghci> :m +NumericPrelude
ghci> :m +Math.Combinatorics.Species
Unlabelled bracelets, by themselves, are completely uninteresting: there is only a single unlabelled bracelet shape of any positive size. (Unlabelled species built using bracelets can be interesting, however; we’ll see an example in just a bit.) We can ask the library to tell us how many distinct size-$n$ unlabelled bracelets there are for $n = 0, \dots, 9$:
ghci> take 10 $ unlabelled bracelets
[0,1,1,1,1,1,1,1,1,1]
Labelled bracelets are a bit more interesting. For $n \geq 3$ there are $n!/(2n)$ labelled bracelets of size $n$: there are $(n-1)!$ cycles of size $n$ (there are $n!$ lists, which counts each cycle $n$ times, once for each rotation), and counting cycles exactly double counts bracelets, since each bracelet can be flipped in one of two ways. For example, there are $6!/(2 \cdot 6) = 60$ labelled bracelets of size $6$.
ghci> take 10 $ labelled bracelets
[0,1,1,1,3,12,60,360,2520,20160]
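Since the counting argument is so simple, it is easy to encode directly. This is just a standalone sketch to check the numbers above; it is not part of the species library API:

```haskell
-- Number of labelled bracelets of size n: one each for n = 1 and n = 2,
-- and n!/(2n) for n >= 3 (each bracelet is counted once per rotation
-- and once per flip among the n! lists).
labelledBraceletCount :: Integer -> Integer
labelledBraceletCount 0 = 0
labelledBraceletCount n
  | n <= 2    = 1
  | otherwise = product [1 .. n] `div` (2 * n)
```

Evaluating `map labelledBraceletCount [0..9]` reproduces the sequence shown above.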
In addition to counting these, we can exhaustively generate them (this is a bit annoying with the current API; I hope to improve it):
ghci> enumerate bracelets [0,1] :: [Bracelet Int]
[<<0,1>>]
ghci> enumerate bracelets [0..2] :: [Bracelet Int]
[<<0,1,2>>]
ghci> enumerate bracelets [0..3] :: [Bracelet Int]
[<<0,1,2,3>>,<<0,1,3,2>>,<<0,2,1,3>>]
And here are all of the size- bracelets, where I’ve used a different color to represent each label (see here for the code used to generate them):
As a final example, consider the species $B \times (E \cdot E)$, the Cartesian product of bracelets with ordered pairs of sets. That is, given a set of labels, we simultaneously give the labels a bracelet structure and also partition them into two (distinguishable) sets. Considering unlabelled structures of this species—that is, equivalence classes of labelled structures under relabelling—means that we can’t tell the labels apart, other than the fact that we can still tell which are in the first set and which are in the second. So, if we call the first set “purple” and the second “green”, we are counting the number of bracelets made from (otherwise indistinguishable) purple and green beads. Let’s call these binary bracelets. Here’s how many there are of sizes $0$ through $14$:
ghci> let biBracelets = bracelet >< (set * set)
ghci> take 15 $ unlabelled biBracelets
[0,2,3,4,6,8,13,18,30,46,78,126,224,380,687]
Let’s use the OEIS to check that we’re on the right track:
ghci> :m +Math.OEIS
ghci> let res = lookupSequence (drop 1 . take 10 $ unlabelled biBracelets)
ghci> fmap description res
Just "Number of necklaces with n beads of 2 colors, allowing turning over."
Unfortunately the species library can’t currently enumerate unlabelled structures of species involving Cartesian product, though I hope to fix that. But for now we can draw these purple-green bracelets with some custom enumeration code. You can see the numbers show up here, and it’s not too hard to convince yourself that each row contains all possible binary bracelets of a given size.
If you’re just interested in what you can do with bracelets, you can stop reading now. If you’re interested in the mathematical and algorithmic details of how they are implemented, read on!
The exponential generating function (egf) associated to a combinatorial species $F$ is defined by

$$F(x) = \sum_{n \geq 0} |F[n]| \frac{x^n}{n!}.$$

That is, the egf is an (infinite) formal power series where the coefficient of $x^n/n!$ is the number of distinct labelled $F$-structures on $n$ labels. We saw above that for $n \geq 3$ there are $n!/(2n)$ labelled bracelets of size $n$, and there is one bracelet each of sizes $1$ and $2$. The egf for bracelets is thus:

$$B(x) = x + \frac{x^2}{2} + \sum_{n \geq 3} \frac{n!}{2n} \cdot \frac{x^n}{n!} = x + \frac{x^2}{2} + \sum_{n \geq 3} \frac{x^n}{2n}.$$
(Challenge: show this is also equivalent to $\frac{1}{2}\left(x + \frac{x^2}{2} + \ln \frac{1}{1-x}\right)$.) This egf is directly encoded in the species library, and this is what is being used to evaluate labelled bracelets in the example above.
Incidentally, the reason $n!/(2n)$ only works for $n \geq 3$ is in some sense due to the fact that the dihedral groups $D_2$ and $D_4$ are a bit weird: every dihedral group $D_{2n}$ is a subgroup of the symmetric group $S_n$ except for $D_2$ and $D_4$. The problem is that for $n \leq 2$, “flips” actually have no effect, as you can see below:
So, for example, $D_4$ has $4$ elements, corresponding to the identity, a 180 degree rotation, a flip, and a rotation + flip; but the symmetric group $S_2$ only has two elements, in this case corresponding to the identity and a 180 degree rotation. The reason $n!/(2n)$ doesn’t work, then, is that the division by two is superfluous: for $n \leq 2$, counting cycles doesn’t actually overcount bracelets, because every cycle is already a flipped version of itself. So it would also be correct (if rather baroque) to say that for $n \leq 2$ there are actually $2 \cdot \frac{n!}{2n} = (n-1)!$ bracelets.
I find this fascinating; it’s almost as if for bigger $n$ the dihedral symmetry has “enough room to breathe” whereas for small $n$ it doesn’t have enough space and gets crushed and folded in on itself, causing weird things to happen. It makes me wonder whether there are other sorts of symmetry with a transition from irregularity to regularity at even bigger $n$. Probably this is an easy question for a group theorist to answer but I’ve never thought about it before.
The ordinary generating function (ogf) associated to a species $F$ is defined by

$$\tilde{F}(x) = \sum_{n \geq 0} |F[n]/\mathord{\sim}|\, x^n,$$

where $\sim$ is the equivalence relation on $F$-structures induced by permuting the labels. That is, the coefficient of $x^n$ is the number of equivalence classes of $F$-structures on $n$ labels up to relabelling. There is only one unlabelled bracelet of any size $n \geq 1$; that is, any bracelet of size $n$ can be transformed into any other just by switching labels around. The unique unlabelled bracelet of a given size can be visualized as a bracelet of uniform beads:
though it’s occasionally important to keep in mind the more formal definition as an equivalence class of labelled bracelets. Since there’s just one unlabelled bracelet of each size, the ogf for bracelets is rather boring:

$$\tilde{B}(x) = \sum_{n \geq 1} x^n = \frac{x}{1-x}.$$
This is encoded in the species library too, and was used to compute unlabelled bracelets above.
egfs are quite natural (in fact, species can be seen as a categorification of egfs), and the mapping from species to their associated egf is a homomorphism that preserves many operations such as sum, product, Cartesian product, composition, and derivative. ogfs, however, are not as nice. The mapping from species to ogfs preserves sum and product but does not, in general, preserve other operations like Cartesian product, composition or derivative. In some sense ogfs throw away too much information. Here’s a simple example to illustrate this: although the ogfs for bracelets and cycles are the same, namely, $\frac{x}{1-x}$ (there is only one unlabelled bracelet or cycle of each size), the ogfs for binary bracelets and binary cycles are different:
ghci> -- recall biBracelets = bracelet >< (set * set)
ghci> let biCycles = cycles >< (set * set)
ghci> take 15 $ unlabelled biBracelets
[0,2,3,4,6,8,13,18,30,46,78,126,224,380,687]
ghci> take 15 $ unlabelled biCycles
[0,2,3,4,6,8,14,20,36,60,108,188,352,632,1182]
(Puzzle: why are these the same up through $n = 5$? Find the unique pair of distinct binary $6$-cycles which are equivalent as bracelets.)
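Incidentally, both sequences can be double-checked against classical Burnside-style counting: a rotation by $i$ fixes $2^{\gcd(i,n)}$ two-colorings, and the reflections contribute fixed colorings according to their cycle structure. Here is a standalone sketch (the function names are my own, not from the species library):

```haskell
-- Binary cycles (necklaces): orbits of 2-colorings of n beads under
-- rotation. By Burnside, average the number of colorings fixed by
-- each of the n rotations; rotation by i fixes 2^gcd(i,n) colorings.
binaryCycles :: Int -> Int
binaryCycles n = sum [ 2 ^ gcd i n | i <- [0 .. n - 1] ] `div` n

-- Binary bracelets: also average in the n reflections. For odd n each
-- reflection fixes 2^((n+1)/2) colorings; for even n, half of the
-- reflections fix 2^(n/2) colorings and the other half 2^(n/2 + 1).
binaryBracelets :: Int -> Int
binaryBracelets n = (rotations + reflections) `div` (2 * n)
  where
    rotations = sum [ 2 ^ gcd i n | i <- [0 .. n - 1] ]
    reflections
      | odd n     = n * 2 ^ ((n + 1) `div` 2)
      | otherwise = (n `div` 2) * (2 ^ (n `div` 2) + 2 ^ (n `div` 2 + 1))
```

Evaluating `map binaryBracelets [1..14]` and `map binaryCycles [1..14]` reproduces the two sequences above (minus their leading zeros).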
Clearly, there is no way to take equal ogfs, apply the same operation to both, and get different results out. So the species library cannot be working directly with ogfs in the example above—something else must be going on. That something else is cycle index series, which generalize both egfs and ogfs, and retain enough information that they once again preserve many of the operations we care about.
Let $S_n$ denote the symmetric group of order $n!$, that is, the group of permutations on $\{1, \dots, n\}$ under composition. It is well-known that every permutation can be uniquely decomposed as a product of disjoint cycles. The cycle type of $\sigma \in S_n$ is the sequence of natural numbers $\sigma_1, \sigma_2, \sigma_3, \dots$ where $\sigma_i$ is the number of $i$-cycles in the cycle decomposition of $\sigma$. For example, the permutation $(123)(45)(67)(8)$ has cycle type $1, 2, 1, 0, 0, \dots$ since it has one $1$-cycle, two $2$-cycles, and one $3$-cycle.
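To make the definition concrete, here is one way to compute cycle types. This is a standalone sketch (not from the species library); a permutation on $\{0, \dots, n-1\}$ is represented, hypothetically, by its list of images:

```haskell
-- A permutation on [0 .. n-1], as a list of images: it sends i to p !! i.
type Perm = [Int]

-- Lengths of the cycles in the cycle decomposition.
cycleLengths :: Perm -> [Int]
cycleLengths p = go [0 .. length p - 1]
  where
    go []       = []
    go (i:rest) = let c = cycleFrom i
                  in length c : go (filter (`notElem` c) rest)
    cycleFrom i = i : takeWhile (/= i) (tail (iterate (p !!) i))

-- The cycle type: the number of cycles of each length 1, 2, ..., n.
cycleType :: Perm -> [Int]
cycleType p = [ length (filter (== k) ls) | k <- [1 .. length p] ]
  where ls = cycleLengths p
```

For instance, `cycleType [1,2,0,4,3,6,5,7]` (a 3-cycle, two 2-cycles, and a fixed point) yields `[1,2,1,0,0,0,0,0]`.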
For a species $F$ and a permutation $\sigma \in S_n$, let $\operatorname{fix} F[\sigma]$ denote the number of $F$-structures that are fixed by the action of $\sigma$, that is,

$$\operatorname{fix} F[\sigma] = \#\{ f \in F[n] \mid F[\sigma]\, f = f \}.$$
The cycle index series of a combinatorial species $F$ is a formal power series in an infinite set of variables $x_1, x_2, x_3, \dots$ defined by

$$Z_F(x_1, x_2, x_3, \dots) = \sum_{n \geq 0} \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{fix} F[\sigma]\, x_1^{\sigma_1} x_2^{\sigma_2} x_3^{\sigma_3} \cdots.$$
We also sometimes write $x^\sigma$ as an abbreviation for $x_1^{\sigma_1} x_2^{\sigma_2} x_3^{\sigma_3} \cdots$. As a simple example, consider the species $L$ of lists, i.e. linear orderings. For each $n$, the identity permutation (with cycle type $n, 0, 0, \dots$) fixes all $n!$ lists of length $n$, whereas all other permutations do not fix any lists. Therefore

$$Z_L = \sum_{n \geq 0} \frac{1}{n!} \cdot n!\, x_1^n = \sum_{n \geq 0} x_1^n = \frac{1}{1 - x_1}.$$
(This is not really that great of an example, though—since lists are regular species, that is, they have no nontrivial symmetry, their cycle index series, egf, and ogf are all essentially the same.)
Cycle index series are linked to both egfs and ogfs by the identities

$$F(x) = Z_F(x, 0, 0, \dots)$$
$$\tilde{F}(x) = Z_F(x, x^2, x^3, \dots).$$
To show the first, note that setting all $x_i$ to $0$ other than $x_1$ means that the only terms that survive are terms with only $x_1$ raised to some power. These correspond to permutations with only $1$-cycles, that is, identity permutations. Identity permutations fix all $F$-structures of a given size, so we have

$$Z_F(x, 0, 0, \dots) = \sum_{n \geq 0} \frac{1}{n!} |F[n]|\, x^n = F(x).$$
To prove the link to ogfs, note first that for any permutation $\sigma \in S_n$ with cycle type $\sigma_1, \sigma_2, \sigma_3, \dots$ we have $\sigma_1 + 2\sigma_2 + 3\sigma_3 + \cdots = n$, and hence $(x, x^2, x^3, \dots)^\sigma = x^{\sigma_1} (x^2)^{\sigma_2} (x^3)^{\sigma_3} \cdots = x^n$. Thus:

$$Z_F(x, x^2, x^3, \dots) = \sum_{n \geq 0} \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{fix} F[\sigma]\, x^n = \sum_{n \geq 0} \left( \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{fix} F[\sigma] \right) x^n = \sum_{n \geq 0} |F[n]/\mathord{\sim}|\, x^n = \tilde{F}(x),$$

where the final step is an application of Burnside’s Lemma.
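Burnside’s Lemma itself is easy to check by brute force on a small example. The standalone sketch below (names are my own) counts orbits of length-$n$ binary strings under rotation, both directly and by averaging fixed points over the group:

```haskell
import Data.List (nub)

-- All binary strings of length n.
strings :: Int -> [[Int]]
strings n = sequence (replicate n [0, 1])

rotate :: Int -> [a] -> [a]
rotate i xs = drop i xs ++ take i xs

-- Direct orbit count: one canonical representative per rotation class.
orbitCount :: Int -> Int
orbitCount n = length (nub [ minimum [ rotate i s | i <- [0 .. n - 1] ]
                           | s <- strings n ])

-- Burnside: the number of orbits equals the average number of strings
-- fixed by each group element (here, each of the n rotations).
burnsideCount :: Int -> Int
burnsideCount n = sum [ length [ s | s <- strings n, rotate i s == s ]
                      | i <- [0 .. n - 1] ] `div` n
```

The two counts agree for every $n$, as Burnside’s Lemma promises.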
The important point is that the mapping from species to cycle index series is again a homomorphism for many of the operations we care about, including Cartesian product and composition. So in order to compute an ogf for some species defined in terms of operations that are not compatible with ogfs, one can start out computing with cycle index series and then project down to an ogf at the end.
Let’s now see how to work out the cycle index series for the species of bracelets. For $n = 1$, the single bracelet is fixed by the only element of $S_1$, giving a term of $x_1$. For $n = 2$, the single bracelet is fixed by both elements of $S_2$, one of which has cycle type $2, 0, 0, \dots$ and the other $0, 1, 0, \dots$. Bracelets of size $n \geq 3$, as discussed previously, have the dihedral group $D_{2n}$ as their symmetry group. That is, every one of the $n!/(2n)$ size-$n$ bracelets is fixed by the action of each element of $D_{2n}$, and no bracelets are fixed by the action of any other permutation. Putting this all together, we obtain

$$Z_B = x_1 + \frac{x_1^2 + x_2}{2} + \sum_{n \geq 3} \frac{1}{n!} \cdot \frac{n!}{2n} \sum_{\sigma \in D_{2n}} x^\sigma = x_1 + \frac{x_1^2 + x_2}{2} + \sum_{n \geq 3} \frac{1}{2n} \sum_{\sigma \in D_{2n}} x^\sigma.$$
Our remaining task is thus to compute $\sum_{\sigma \in D_{2n}} x^\sigma$, that is, to compute the cycle types of elements of $D_{2n}$ for $n \geq 3$. I don’t know whether there’s a nice closed form for this sum, but for our purposes it doesn’t matter: it suffices to come up with a finite algorithm to generate all its terms with their coefficients. A closed form might be important if we wanted to compute with $Z_B$ symbolically, but if we just want to generate coefficients, an algorithm is good enough.
In general, $D_{2n}$ has $n$ elements corresponding to rotations (including the identity element, which we think of as a rotation by $0$ degrees) and $n$ elements corresponding to reflections across some axis. Below I’ve drawn illustrations showing the symmetries of bracelets of an odd size and an even size; each symmetry corresponds to an element of $D_{2n}$.
The lines indicate reflections. You can see that in general there are $n$ lines of reflection. The curved arrows indicate clockwise rotations; taking any number of consecutive arrows from $1$ to $n$ gives a distinct rotational symmetry. Let’s label the rotations $r_i$ (for $0 \leq i < n$), where $r_i$ indicates a rotation by $i/n$ of a turn (so $r_0$ is the identity element). We won’t bother labelling the reflections since it’s not clear how we would choose canonical names for them, and in any case (as we’ll see) we don’t have as much of a need to give them names as we do for the rotations. The only thing we will note is that for even $n$ there are two distinct types of reflections, as illustrated by the dark and light blue lines on the right: the dark blue lines pass through two vertices, and the light blue ones pass through two edges. In the odd case, on the other hand, every line of reflection passes through one vertex and one edge. If you haven’t studied dihedral groups before, you might want to take a minute to convince yourself that this covers all the possible symmetries. It’s clear that a rotation followed by a rotation is again a rotation; what may be less intuitively clear is that a reflection followed by a reflection is a rotation, and that a rotation followed by a reflection is a reflection.
So the name of the game is to consider each group element as a permutation of the labels, and compute the cycle type of the permutation. Let’s tackle the reflections first; we have to separately consider the cases when $n$ is odd and even. We saw above that when $n$ is odd, each line of reflection passes through exactly one vertex. As a permutation, that means the reflection will fix the label at the vertex it passes through, and swap the labels on other vertices in pairs, as shown in the leftmost diagram below:
So the permutation has cycle type $1, (n-1)/2, 0, 0, \dots$. There is one $1$-cycle, and the remaining $n - 1$ elements are paired off in $2$-cycles. There are $n$ of these reflections in total, yielding a term of $n\, x_1 x_2^{(n-1)/2}$ (where $n$ is odd).
When $n$ is even, half of the reflections (the light blue ones) have no fixed points, as in the middle diagram above; they put everything in $2$-cycles. The other half of the even reflections fix two vertices, with the rest in $2$-cycles, as in the rightmost diagram above. In all, this yields terms $\frac{n}{2} x_2^{n/2} + \frac{n}{2} x_1^2 x_2^{(n-2)/2}$.
Now let’s tackle the rotations. One could be forgiven for initially assuming that each rotation will just yield one big -cycle… a rotation is just cycling the vertices, right? But it is a bit more subtle than that. Let’s look at some examples. In each example below, the green curved arrow indicates a rotation applied to the bracelet. As you can check, the other arrows show the resulting permutation on the labels, that is, each arrow points from one node to the node where it ends up under the action of the rotation.
Do you see the pattern? In the case when $i = 1$ (the first example above), or more generally when $i$ and $n$ are relatively prime (the second example above), $r_i$ indeed generates a single $n$-cycle. But when $i$ and $n$ are not relatively prime, it generates multiple cycles. By symmetry the cycles must all be the same size; in general, the rotation $r_i$ generates $\gcd(i, n)$ cycles of size $n/\gcd(i, n)$ (where $\gcd(i, n)$ denotes the greatest common divisor of $i$ and $n$), as you can see in the remaining examples above. Note this even works when $i = 0$: we have $\gcd(0, n) = n$, so we get $n$ cycles of size $1$, i.e. the identity permutation.
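The $\gcd$ claim is easy to verify computationally. Here is a small standalone sketch (not from the species library) computing the cycle lengths of the rotation $r_i$ acting on positions $0, \dots, n-1$:

```haskell
-- The rotation r_i on [0 .. n-1] sends position j to (j + i) `mod` n.
-- Return the lengths of the cycles it generates.
rotationCycles :: Int -> Int -> [Int]
rotationCycles n i = go [0 .. n - 1]
  where
    go []       = []
    go (j:rest) = let c = cycleFrom j
                  in length c : go (filter (`notElem` c) rest)
    cycleFrom j = j : takeWhile (/= j) (tail (iterate step j))
    step j      = (j + i) `mod` n
```

For example, `rotationCycles 6 2` yields `[3,3]`: $\gcd(2,6) = 2$ cycles of size $6/2 = 3$.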
So $r_i$ contributes a term $x_{n/\gcd(i,n)}^{\gcd(i,n)}$. However, we can say something a bit more concise than this. Note, for example, that when $n = 6$, as the contribution of all the $r_i$ we get

$$x_1^6 + x_6 + x_3^2 + x_2^3 + x_3^2 + x_6,$$

but we can collect like terms to get

$$x_1^6 + x_2^3 + 2x_3^2 + 2x_6.$$
For a given divisor $d$ of $n$, the coefficient of $x_{n/d}^{d}$ is the number of nonnegative integers less than $n$ whose $\gcd$ with $n$ is equal to $d$. For example, the coefficient of $x_6$ is $2$, since there are two values of $i$ for which $\gcd(i, 6) = 1$ and hence generate a six-cycle, namely, $1$ and $5$. So as the contribution of the $r_i$ we could write something like

$$\sum_{d \mid n} \#\{ 0 \leq i < n \mid \gcd(i, n) = d \}\, x_{n/d}^{d},$$
but there is a better way. Note that

$$\#\{ 0 \leq i < n \mid \gcd(i, n) = d \} = \#\{ 0 \leq j < n/d \mid \gcd(j, n/d) = 1 \},$$

since multiplying and dividing by $d$ establishes a bijection between the two sets. For example, we saw that $2$ and $4$ are the two numbers whose $\gcd$ with $6$ is $2$; this corresponds to the fact that $1$ and $2$ are relatively prime to $3$.
But counting relatively prime numbers is precisely what Euler’s totient function (usually written $\varphi$) does. So we can rewrite the coefficient of $x_{n/d}^{d}$ as $\varphi(n/d)$.
Finally, since we are adding up these terms for all divisors $d \mid n$, we can swap $d$ and $n/d$ (divisors of $n$ always come in pairs whose product is $n$), and rewrite this as

$$\sum_{d \mid n} \varphi(d)\, x_d^{n/d}.$$
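As a quick check of the bookkeeping: for each divisor $d$ of $n$, the number of rotations $r_i$ generating cycles of size $d$ (that is, with $n/\gcd(i, n) = d$) should be exactly $\varphi(d)$. A standalone sketch, with a naive totient:

```haskell
-- Naive Euler totient function.
phi :: Int -> Int
phi d = length [ j | j <- [1 .. d], gcd j d == 1 ]

-- How many rotations r_i (0 <= i < n) generate cycles of size d?
rotationsWithCycleSize :: Int -> Int -> Int
rotationsWithCycleSize n d =
  length [ i | i <- [0 .. n - 1], n `div` gcd i n == d ]
```

For instance, `rotationsWithCycleSize 6 6` is `2`, matching $\varphi(6) = 2$ (the rotations $r_1$ and $r_5$).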
To sum up, then, we have the following terms for each $n \geq 3$:

1. $n\, x_1 x_2^{(n-1)/2}$ from the reflections, when $n$ is odd;
2. $\frac{n}{2} x_2^{n/2} + \frac{n}{2} x_1^2 x_2^{(n-2)/2}$ from the reflections, when $n$ is even;
3. $\sum_{d \mid n} \varphi(d)\, x_d^{n/d}$ from the rotations.
The only overlap is between (2) and (3): when $n$ is even, both generate $x_2^{n/2}$ terms. Using Iverson brackets (the notation $[P]$ is equal to $1$ if the predicate $P$ is true, and $0$ if it is false), we can thus write the sum of the above for a particular $n$ as

$$[n \text{ is odd}]\, n\, x_1 x_2^{(n-1)/2} + [n \text{ is even}]\, \frac{n}{2} \left( x_2^{n/2} + x_1^2 x_2^{(n-2)/2} \right) + \sum_{d \mid n} \varphi(d)\, x_d^{n/d}.$$
Substituting this for $\sum_{\sigma \in D_{2n}} x^\sigma$ yields a full definition of $Z_B$. You can see the result encoded in the species library here. Here’s the beginning of the full expanded series:
ghci> :m +Math.Combinatorics.Species.Types
ghci> take 107 $ show (bracelets :: CycleIndex)
"CI x1 + 1 % 2 x2 + 1 % 2 x1^2 + 1 % 3 x3 + 1 % 2 x1 x2 + 1 % 6 x1^3 + 1 % 4 x4 + 3 % 8 x2^2 + 1 % 4 x1^2 x2"
This, then, is how unlabelled biBracelets (for example) is calculated, where biBracelets = bracelet >< (set * set). The cycle index series for bracelet and set are combined according to the operations on cycle index series corresponding to * and ><, and then the resulting cycle index series is mapped down to an ogf by substituting $x^k$ for each variable $x_k$.
The final thing to mention is how bracelet generation works. Of course we can’t really generate actual bracelets, but only lists. Since bracelets can be thought of as equivalence classes of lists (under rotation and reversal), the idea is to pick a canonical representative element of each equivalence class, and generate those. A natural candidate is the lexicographically smallest among all rotations and reversals (assuming the labels have an ordering; if they don’t we can pick an ordering arbitrarily). One easy solution would be to generate all possible lists and throw out the redundant ones, but that would be rather inefficient; it is surprisingly tricky to do this efficiently. Fortunately, there is a series of papers by Joe Sawada (Generating bracelets with fixed content; A fast algorithm to generate necklaces with fixed content; Generating bracelets in constant amortized time) describing (and proving correct) some efficient algorithms for generating things like cycles and bracelets. In fact, they are as efficient as possible, theoretically speaking: they do only $O(1)$ work per cycle or bracelet generated. One problem is that the algorithms are very imperative, so they cannot be directly transcribed into Haskell. But I played around with benchmarking various formulations in Haskell and got it as fast as I could. (Interestingly, using STUArray was a lot slower in practice than a simple functional implementation, even though the imperative solution is asymptotically faster in theory—my functional implementation does at least $O(n)$ work per bracelet, and quite possibly more, though since $n$ is typically quite small it doesn’t really matter very much. Of course it’s also quite possible that there are tricks to make the array version go faster that I don’t know about.) The result is released in the multiset-comb package; you can see the bracelet generation code here.
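For illustration, here is the naive generate-and-filter approach in Haskell (this is emphatically not Sawada’s algorithm, just the inefficient easy solution mentioned above; the function names are my own):

```haskell
import Data.List (nub, permutations)

-- All rotations of a list.
rotations :: [a] -> [[a]]
rotations xs = take (length xs) (iterate rotate xs)
  where
    rotate []       = []
    rotate (y : ys) = ys ++ [y]

-- The canonical representative of a bracelet: the lexicographically
-- smallest list among all rotations and reversals.
canonical :: Ord a => [a] -> [a]
canonical [] = []
canonical xs = minimum (rotations xs ++ rotations (reverse xs))

-- Naive bracelet generation: keep only those arrangements which are
-- already the canonical representative of their equivalence class.
allBracelets :: Ord a => [a] -> [[a]]
allBracelets = filter (\p -> p == canonical p) . nub . permutations
```

For four distinct labels this produces $(4-1)!/2 = 3$ bracelets, the number of distinct cyclic arrangements up to rotation and reflection.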
The Ally Skills Tutorial teaches men simple, everyday ways to support women in their workplaces and communities. Participants learn techniques that work at the office, in classrooms, at conferences, and online. The skills we teach are relevant everywhere, including skills particularly relevant to open technology and culture communities. At the end of the tutorial, participants will feel more confident in speaking up to support women, be more aware of the challenges facing women in their workplaces and communities, and have closer relationships with the other participants.
This sounds super helpful—I suspect there is often a large gap between the extent to which I want to support women and the extent to which I actually know, practically, how to do so. The workshop will be taught by Valerie Aurora, Linux filesystem developer and Ada Initiative co-founder; I expect it will be high quality!
The setup is that there are (distinct) friends who can talk to each other on the phone. Only two people can talk at a time (no conference calls). The question is to determine how many different “configurations” there are. Not everyone has to talk, so a configuration consists of some subset of the friends arranged in (unordered) conversational pairs.
Warning: spoilers ahead! If you’d like to play around with this yourself (and it is indeed a nice, accessible combinatorics problem to play with), stop reading now. My goal in this post is to have fun applying some advanced tools to this (relatively) simple problem.
Let’s start by visualizing some configurations. In her post, Denise illustrated the complete set of configurations for $n = 4$, which I will visualize like this:
Notice how I’ve arranged them: in the first row is the unique configuration where no one is talking (yes, that counts). In the second row are the six possible configurations with just a single conversation. The last row has the three possible configurations with two conversations.
One good approach at this point would be to derive some recurrences. This problem does indeed admit a nice recurrence, but I will let you ponder it. Instead, let’s see if we can just “brute-force” our way to a general formula, using our combinatorial wits. Later I will demonstrate a much more principled, mechanical way to derive a general formula.
Let’s start by coming up with a formula for $T_{n,k}$, the number of configurations with $n$ people and $k$ conversations. The number of ways of choosing $k$ pairs out of a total of $n$ people is the multinomial coefficient $\binom{n}{2,2,\dots,2,n-2k} = \frac{n!}{2^k (n-2k)!}$. However, that overcounts things: it actually distinguishes the first pair, second pair, and so on, but we don’t want to have any ordering on the pairs. So we have to divide by $k!$, the number of distinct orderings of the pairs. Thus,

$$T_{n,k} = \frac{n!}{2^k \, k! \, (n-2k)!}.$$
Let’s do a few sanity checks. First, when $k = 0$, we have $T_{n,0} = \frac{n!}{2^0 \, 0! \, n!} = 1$. We can also try some other small numbers we’ve already enumerated by hand: for example, $T_{4,1} = \frac{4!}{2 \cdot 1 \cdot 2!} = 6$, and $T_{4,2} = \frac{4!}{4 \cdot 2 \cdot 1} = 3$. So this seems to work.
For $n$ people, there can be at most $\lfloor n/2 \rfloor$ conversations. So, the total number of configurations is going to be

$$T_n = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^k \, k! \, (n-2k)!}.$$
We can use this to compute $T_n$ for the first few values of $n$:
At this point we could look up the sequence 1,1,2,4,10,26,76 on the OEIS and find out all sorts of fun things: e.g. that we are also counting self-inverse permutations, i.e. involutions, that these numbers are also called “restricted Stirling numbers of the second kind”, some recurrence relations, etc., as well as enough references to keep us busy reading for a whole year.
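The formula for the number of configurations with $n$ people and $k$ conversations, and the resulting totals, can be checked with a few lines of Haskell (a quick sketch; the function names are my own, separate from the species library):

```haskell
factorial :: Integer -> Integer
factorial m = product [1 .. m]

-- Number of configurations of n people with exactly k conversations:
--   n! / (2^k * k! * (n - 2k)!)
configsWith :: Integer -> Integer -> Integer
configsWith n k =
  factorial n `div` (2 ^ k * factorial k * factorial (n - 2 * k))

-- Total number of configurations for n people: k ranges from 0
-- (no one talking) up to floor (n/2) (as many pairs as possible).
countConfigs :: Integer -> Integer
countConfigs n = sum [ configsWith n k | k <- [0 .. n `div` 2] ]
```

Evaluating `map countConfigs [0 .. 6]` reproduces the sequence $1, 1, 2, 4, 10, 26, 76$ found above.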
We can describe configurations as elements of the combinatorial species $E \circ (E_2 + X)$. That is, a configuration is an unordered set ($E$) of ($\circ$) things, where each thing can either be an unordered pair ($E_2$) of people talking on the phone, or ($+$) a single person ($X$) who is not talking.
We can now use the Haskell species library to automatically generate some counts and see whether they agree with our manual enumerations. First, some boilerplate setup:
ghci> :set -XNoImplicitPrelude
ghci> :m +NumericPrelude
ghci> :m +Math.Combinatorics.Species
Now we define the species of configurations:
ghci> let configurations = set `o` (set `ofSizeExactly` 2 + singleton)
We can ask the library to count the number of configurations for different $n$:
ghci> take 10 (labelled configurations)
[1,1,2,4,10,26,76,232,764,2620]
Oh good, those numbers look familiar! Now, I wonder how many configurations there are for $n = 100$?
ghci> labelled configurations !! 100
24053347438333478953622433243028232812964119825419485684849162710512551427284402176
Yikes!
We can also use the library to generate exhaustive lists of configurations, and draw them using diagrams. For example, here are all configurations for . (If you want to see the code used to generate this diagram, you can find it here.)
And just for fun, let’s draw all configurations for :
Whee!
Finally, I want to show how to use the species definition given above and the theory of generating functions to (somewhat) mechanically derive a general formula for the number of configurations. (Hopefully it will end up being equivalent to the formula we came up with near the beginning of the post!) Of course, this is also what the species library is doing, but only numerically—we will do things symbolically.
First, note that we are counting labelled configurations (the friends are all distinct), so we want to consider exponential generating functions (egfs). Recall that the egf for a species $F$ is given by

$$F(x) = \sum_{n \ge 0} |F[n]| \frac{x^n}{n!},$$
that is, a (possibly infinite) formal power series where the coefficient of $x^n/n!$ is the number of distinct labelled $F$-structures of size $n$. In our case, we need

$$E(x) = \sum_{n \ge 0} \frac{x^n}{n!} = e^x,$$

since there is exactly one set structure of any size, and

$$E_2(x) = \frac{x^2}{2},$$

which is just the restriction of $E(x)$ to only the $n = 2$ term. Of course, we also have $X(x) = x$. Putting this together, we calculate

$$(E \circ (E_2 + X))(x) = e^{x^2/2 + x} = e^{x^2/2} \cdot e^x = \left( \sum_{k \ge 0} \frac{x^{2k}}{2^k \, k!} \right) \left( \sum_{j \ge 0} \frac{x^j}{j!} \right) = \sum_{k \ge 0} \sum_{j \ge 0} \frac{x^{2k+j}}{2^k \, k! \, j!}.$$
Ultimately, we want something of the form $\sum_{n \ge 0} f_n \frac{x^n}{n!}$, so we’ll need to collect up like powers of $x$. To do that, we can do a bit of reindexing. Right now, the double sum is adding up a bunch of terms that can be thought of as making a triangle:
Each ordered pair $(k, j)$ in the triangle corresponds to a single term being added. Each column corresponds to a particular value of $k$, with $k$ increasing to the right. Within each column, $j$ goes from $0$ upwards.
The powers of $x$ in our double sum are given by $2k + j$. If we draw in lines showing terms that have the same power of $x$, it looks like this:
So let’s choose a new variable $n$, defined by $n = 2k + j$. We can see that we will have $x^n$ terms for every $n \ge 0$. We will also keep the variable $k$ for our other index, and substitute $j = n - 2k$ to get rid of $j$. In other words, instead of adding up the triangle by columns, we are going to add it up by diagonals.
Previously we had $j \ge 0$; substituting for $j$, that now turns into $n - 2k \ge 0$. Adding $2k$ to both sides and dividing by $2$ yields $k \le \lfloor n/2 \rfloor$ (we can round down since $k$ is an integer). Looking at the diagram above, this makes sense: the height of each diagonal line is indeed half its index. Rewriting our indices of summation and substituting $n - 2k$ for $j$, we now have:

$$e^{x^2/2 + x} = \sum_{n \ge 0} \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{x^n}{2^k \, k! \, (n-2k)!} = \sum_{n \ge 0} \left( \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^k \, k! \, (n-2k)!} \right) \frac{x^n}{n!}$$
And hey, look at that! The coefficient of $x^n/n!$ is exactly what we previously came up with for the total number of configurations of $n$ people. Math works!
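As a final sanity check, we can compute $n!$ times the coefficient of $x^n$ in $e^{x^2/2 + x}$ directly from the double sum, using the substitution $j = n - 2k$ (a small sketch; the function names are mine):

```haskell
import Data.Ratio ((%))

factorial :: Integer -> Integer
factorial m = product [1 .. m]

-- n! times the coefficient of x^n in e^(x^2/2) * e^x, summing the
-- terms x^(2k+j) / (2^k * k! * j!) with j = n - 2k, for
-- k = 0 .. floor (n/2).
egfCoeff :: Integer -> Rational
egfCoeff n =
  fromInteger (factorial n)
    * sum [ 1 % (2 ^ k * factorial k * factorial (n - 2 * k))
          | k <- [0 .. n `div` 2] ]
```

Evaluating `map egfCoeff [0 .. 6]` again gives $1, 1, 2, 4, 10, 26, 76$, matching the counts found earlier.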
HTTP package out of the guts of haxr, replace it with http-streams, and carefully sew everything back together around the edges. The result is that haxr now finally supports making XML-RPC calls via HTTPS, which in turn means that BlogLiterately once again works with WordPress, which no longer supports XML-RPC over HTTP. Happy blogging!
Well… I’m not so sure. What I do know is that the typical conversation around grade inflation frustrates me. At best, it often leaves many important assumptions unstated and unquestioned. Is grade inflation really bad? If so, why? What are the underlying assumptions and values that drive us to think of it in one way or another? At worst, the conversation is completely at the wrong level. Grade inflation is actually a symptom pointing at a much deeper question, one that gets at the heart of education and pedagogy: what do grades mean? Or, put another way, what do grades measure?
This will be a two-part series. In this first post, I consider the first question: is grade inflation bad? In most conversations I have been a part of, this is taken as given, but I think it deserves more careful thought. I don’t know of any reasons to think that grade inflation is good, but I also don’t buy many of the common arguments (often implied rather than explicitly stated) as to why it is bad; in this post I consider three common ones.
Just to make sure everyone is on the same page: by grade inflation I mean the phenomenon where average student grades are increasing over time, that is, the average student now receives higher grades than the average student of n years ago. (You could also think of it as the average value of a given grade going down over time.) This phenomenon is widespread in the US. I am only really familiar with the educational system in the US, so this post will of necessity be rather US-centric; I would be interested to hear about similarities and differences with other countries.
Let’s now consider some common arguments as to why grade inflation is bad.
This is not so much an “argument” as an attitude, and it goes something like this: “Back in MY day, a C really meant a C! These ungrateful, entitled young whippersnappers don’t understand the true value of grades…”
This is a caricature, of course, but I have definitely encountered variants of this attitude. This makes about as much sense to me as complaining “back in MY day, a dollar was really worth a dollar! And now my daughter is asking me for twenty dollars to go to a movie with her friends. TWENTY DOLLARS! These ungrateful, entitled young whippersnappers don’t understand the true value of money…” Nonsense, of course they do. It just costs $20 to go to a movie these days. A dollar is worth what it is now worth; a C is worth what it is now worth. Get over it. It’s not that students don’t understand “the true value of grades”, it’s just that the value of grades is different than it used to be.
There are a couple important caveats here: first, one can, of course, argue about what the value of a C ought to be, based on some ideas or assumptions about what grades (should) mean. I will talk about this at length in my next post. But you cannot blame students for not understanding your idea of what grades ought to mean! Second, it is certainly possible (even likely) that student attitudes towards grades have changed, and one can (and I do!) complain about those attitudes as compared to student attitudes in the past. But that is different than claiming that students don’t understand the value of grades.
If I may hazard a guess, I think what this often boils down to is that people blame grade inflation on student attitudes of entitlement. As a potential contributing factor to grade inflation (and insofar as we would like to teach students different attitudes), that is certainly worth thinking about. But grade inflation potentially being caused by something one dislikes is not an argument that grade inflation itself is bad.
Of course, there’s one important difference between money and grades: amounts of money have no upper limit, whereas grades are capped at A+. This brings us to what I often hear put forth as the biggest argument against grade inflation, that it compresses grades into a narrower and narrower band, squeezed from above by that highest possible A+. The problem with this, some argue, is that grade compression causes information to be lost. The “signal” of grades becomes noisier, and it becomes harder for, say, employers and grad schools to be able to distinguish between different students.
My first, more cynical reaction is this: well, cry me a river for those poor, poor employers and grad schools, who will now have to assess students on real accomplishments, skills, and personal qualities, or (more likely) find some other arbitrary measurement to use. Do we really think grades are such a high-quality signal in the first place? Do they really measure something important and intrinsic about a student? (More on this in my next post.) If the signal is noisy or arbitrary in the first place then compressing it really doesn’t matter that much.
Less cynically, let’s suppose the grade-signal really is that high-quality and important, and we are actually worried about the possibility of losing information. Consider the extreme situation, where grade inflation has progressed to such a degree that professors only give one of two possible grades: A (“outstandingly excellent”) or A+ (“superlatively superb”). An A- is so insultingly low that professors never give it (for fear of lawsuits, perhaps); for simplicity’s sake let’s suppose that no one ever fails, either. In this hypothetical scenario, at an institution like Williams where students take 32 courses, there are only 33 possible GPAs: you could get 32 A+’s, or one A and 31 A+’s, or two A’s and 30 A+’s… all the way down to getting all A’s (“straight-A student” means something rather different in this imaginary universe!).
But here’s the thing: I think 33 different GPAs would still be enough! I honestly don’t think companies or grad schools can meaningfully care about distinctions finer than having 33 different buckets of students. (If you think differently, I’d love to hear your argument.) If student GPAs are normally distributed, this even means that the top few buckets have much less than 1/33 of all the students. So if the top grad schools and companies want to only consider the top 1% of all students (or whatever), they can just look at the top bucket or two. You might say this is unfair for the students, but really, I can’t see how this would be any more or less fair than the current system.
Of course, under this hypothetical two-grade system, GPAs might not be normally distributed. For one thing, if grade inflation kept going, the distribution might become more and more skewed to the right, until, for example, half of all students were getting straight A+’s, or, in the theoretical limit, all students get only A+’s. But I really don’t think this would actually happen; I think you would see some regulating effects kick in far before this theoretical limit was reached. Professors would not actually be willing to give all A+’s (or even, for that matter, all A’s and A+’s).
The GPAs could also be very bimodal, if, for example, students are extremely consistent: a student who consistently scores in the top 40% of every class would get the same grades (all A+’s) as a student who consistently scores in the top 10%. However, I doubt this is how it would work (as any professor knows, “consistent” and “student” are a rare pairing). It would be interesting to actually work out what GPA distributions would result from various assumptions about student behavior.
The final argument against grade inflation that I sometimes hear goes like this: the problem is not so much that the average GPA is going up but simply that it is moving at all, which makes it harder for grad schools and employers to know how to calibrate their interpretations. But I don’t really buy this one either. The value of money is moving too, and yes, in some grand sense I suppose that makes it slightly harder for people to figure out how much things are worth. But somehow, everyone seems to manage just fine. I think employers and grad schools will manage just fine too. I don’t think GPAs are changing anywhere near fast enough for it to make much difference. And in any case, most of the time, the only thing employers and grad schools really care about is comparing the GPAs of students who graduated around the same time, in which case the absolute average GPA doesn’t matter at all. (One can make an argument about the difficulties caused by different schools having different average GPAs, but that is always going to be an issue, grade inflation or no.)
In the end, then, I am not so sure that grade inflation per se is such a terrible thing. However, it is well worth pondering the causes of grade inflation, and the deeper questions it leads to: what are grades? Why do we give them? What purposes do they serve, and what do they measure? I’ll take up these questions in a subsequent post.