Species definition clarification and exercises

[This is the fourth in a series of posts about combinatorial species. Previous posts: And now, back to your regularly scheduled combinatorial species; Decomposing data structures; Combinatorial species definition.]

In my previous post I neglected to mention something quite crucial, namely, that (at least for now) we are only talking about finite sets of labels and finite sets of structures. That is, we do not consider structures with infinitely many labels, and given a particular finite set of labels, there must be only a finite number of structures with those labels. The category \mathbb{B} is the category of finite sets and bijections, not of all sets as I said previously.

Of course, in practice we do want to think about infinite sets of structures and labels, especially in relation to a non-strict language like Haskell! But the theory was invented by mathematicians interested in counting things. I do intend to explore ways to extend the theory to encompass infinite structures, but for now we’ll stick to the finite.

Before moving on to talk about the algebraic approach to species, I also want to point out a few simple implications of the formal definition. Instead of spelling them out in detail, I will pose them as exercises: you can either take them on faith, or try working through the exercises to deepen your understanding.

  1. Let [n] denote the set \{0, \dots, n-1\}¹, and write F[n] as an abbreviation for F[[n]], i.e. the application of the species F to the label set [n]. Show that given F[n] we can determine F[U] for any U with |U| = n.

(This shows that “size is all that matters”: in some sense species are really indexed not by sets of labels but by size.)

  2. [BLL² Exercise 1.1.2] Show that we get an equivalent definition if we take species to be functors from \mathbb{B} to \mathbf{FinSet} (the category of finite sets and (total) functions) instead of endofunctors on \mathbb{B}.

(Apparently one can generalize the notion of species by replacing \mathbf{FinSet} with other categories, but at present I am not sure of the implications.)


  1. The species literature uses \{1, \dots, n\}, but (as every good computer scientist knows) counting ought to begin at zero.

  2. I will use “BLL” to refer to Bergeron, Labelle, and Leroux, Combinatorial Species and Tree-Like Structures (see the references at the end of the previous post for the full citation).


Teaching abstraction

I’m just beginning to prepare for the third incarnation of CIS 194, Introduction to Haskell, in the spring. It’s occasioned some general thoughts on teaching abstraction which seemed worth writing down.

Abstractions, of course, are everywhere in CS. By abstraction I mean anything with multiple “levels”, where upper levels hide details of lower levels. An operating system abstracts over the details of hardware; objects abstract over their internal state; functions abstract over their implementation; and so on. Let me just state up front my main thesis:

Students should be explicitly taught to think on multiple levels of abstraction.

Stated like that, perhaps it seems obvious; allow me to elaborate.

On the face of it, when learning about some abstraction, there are two things one can learn. One can learn how the abstraction is implemented (that is, the details of the lower level); and one can also learn how to use the abstraction (that is, how to work with the upper level while ignoring the details of the lower level).

In many cases, students already know one of the levels and learn about the other. For example, once they get around to learning about how compilers work, students already know how to write programs. Or perhaps they learn both levels as part of the same course, but at different times: for example, they might first learn about logic gates, and then later move on to talking about latches and adders.

It’s when teaching both levels at once that one must be extremely careful. Such a situation might arise, for example, when teaching a concept that is completely new or difficult to motivate—since it’s best to start with the concrete and move to the abstract, one might wish to begin with the lower level, but referencing the upper level is necessary to motivate the lower level in the first place. The problem with such a situation is that it is really easy for students to get the levels confused.

In particular, I have in mind the Applicative functor abstraction (and Monad as well). The last time I taught CIS 194, I didn’t do a very good job with this. I gave my students a homework assignment which had them first implement an Applicative interface to parser combinators, and then use it to write some parsers. Most students missed the point, however, and kept on using low-level implementation details when writing their parsers, instead of working in terms of the Applicative interface.
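To make the two levels concrete, here is a minimal sketch (mine, not the actual assignment) of the kind of thing I have in mind. The Parser representation and the satisfy primitive constitute the lower level; digit and twoDigits are written purely at the upper, Applicative level:

import Data.Char (isDigit)

-- The lower level: a concrete parser representation.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser px = Parser $ \s -> do
    (f, s')  <- pf s
    (x, s'') <- px s'
    return (f x, s'')

-- A primitive that necessarily pokes at the representation.
satisfy :: (Char -> Bool) -> Parser Char
satisfy good = Parser $ \s -> case s of
  (c:cs) | good c -> Just (c, cs)
  _               -> Nothing

-- The upper level: parsers built purely from the Applicative interface.
digit :: Parser Char
digit = satisfy isDigit

twoDigits :: Parser (Char, Char)
twoDigits = (,) <$> digit <*> digit

The point of the exercise was for students to write things like twoDigits without ever unwrapping Parser; keeping that separation straight is exactly where the level confusion crept in.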

It’s tempting to just say “don’t do that”, i.e. don’t confuse students by teaching multiple levels at once! But I think the situation actually presents a great opportunity. In the “real world”, one seldom has the luxury of thinking on only one level at a time. Being able to explicitly switch between levels, and keep multiple levels straight, is an important real-world skill. I think the real solution is to explicitly teach students to think on multiple levels of abstraction: that is, be explicit about the fact that there are multiple levels, and teach them to consider carefully at each point which level they are thinking on, and why. I plan to do this in the spring and will report back on how it goes!


Combinatorial species definition

Continuing from my previous post, recall that the goal of species is to have a unified theory of containers with labeled¹ locations. So, how do we actually specify such things (leaving aside for the moment the question of how we compute with them)?

We might imagine specifying them by:

  • using any arbitrary set to represent some family of labeled structures (e.g. the set of labeled binary tree structures, the set of labeled list structures, …), together with
  • a function that takes a structure and computes its set of labels.

On the face of it this seems quite natural (at least, it does to me). However, it works better to instead use a function from sets of labels to the subset of all structures containing precisely those labels.

In my experience teaching people about species, this often seems to be a source of confusion—it seems “backwards”. More generally, when thinking about a set B indexed by some other set A (in this case, structures indexed by their sets of labels), one might think to model this by a function B \to A (which tells us the index), but it actually works better to model it by a function A \to B, which takes each “index” to the set of all things indexed by it.² Hopefully as we work through the rest of the definition you’ll get a sense for why it works better this way. For now, I think the best advice is don’t assign computational significance to these functions from labels to structures. Just think of them as a convenient technical device to keep track of shapes indexed by labels.

In any case, the first half of the definition is:

  • A species F is a mapping from sets of labels to sets of structures.

(I deliberately chose the word mapping instead of function to emphasize, again, that we don’t particularly want to assign it computational significance.) Of course, the fact that a species takes sets “of labels” as input and outputs sets “of structures” doesn’t matter; any sets will do, so we might as well just say that a species maps sets to sets. We write F[U] for the species F applied to a set of labels U, and call F[U] the set of “F-structures with labels drawn from U”, or simply “F-structures on U”, or even (when U is clear from context) just “F-structures”.
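For example (a standard example from the species literature, though not part of the definition itself): if \mathcal{L} is the species of linear orderings, then

\mathcal{L}[\{a,b,c\}] = \{abc,\ acb,\ bac,\ bca,\ cab,\ cba\}

is the set of six \mathcal{L}-structures on the label set \{a,b,c\}.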

So far, however, this is rather uninteresting, and moreover it fails to adequately capture our intuition for what “structures” are. Intuitively, the labels are incidental, just like the variable names used in lambda terms are incidental: we must use them to be able to distinguish locations, but the precise objects we use as labels really “shouldn’t matter”. That is, given two sets of labels of the same size, we ought to have “the same” family of structures indexed by each. Of course they can’t be literally the same, because they have different labels! But they should be the same “up to relabeling”. We want to rule out the ability to have two same-size sets of labels indexing wildly different sets of structures: a species shouldn’t be able to “look at” the individual labels in order to “decide” what sort of structures to produce, just like a polymorphic type in Haskell can’t “look at” its type argument. The major difference is that species are allowed to “look at” the size of the label set.

Making this intuition precise is the clever part, and is really the pivotal point around which the whole theory revolves. Here’s how we do it. We don’t work with sizes of label sets directly; instead we work with bijections between label sets. (Of course, if there is a bijection between two finite sets then they are necessarily the same size.)

Given two label sets U and V which are related by the bijection \sigma (sometimes referred to as a relabeling), there must be a relationship between F[U] and F[V]—in particular they must also be in bijection. Here, then, is the second part of the definition:

  • Given a bijection \sigma : U \leftrightarrow V, a species F must also “lift” \sigma to a bijection F[\sigma] : F[U] \leftrightarrow F[V].

(Note that we’re recycling notation here, using F[-] for the action of species on both label sets and bijections.) However, this still isn’t quite enough: we don’t want F[\sigma] to be just any bijection between F[U] and F[V]. It really should be the specific bijection that “applies” \sigma to the labels contained within the structures in F[U]. For example, it would be weird if the identity relabeling, when lifted through F, resulted in some nontrivial reshuffling of the structures in F[U]. It would also be strange if F didn’t respect composition, that is, if there were some \sigma, \tau such that F[\sigma] \circ F[\tau] \neq F[\sigma \circ \tau], since intuitively “applying \tau then applying \sigma” ought to be the same as “applying (\sigma \circ \tau)”. So we add these as conditions:

  • F must map every identity bijection id : U \leftrightarrow U to the identity id : F[U] \leftrightarrow F[U], and
  • F must preserve composition of bijections, that is, \forall \sigma, \tau. F[\sigma \circ \tau] = F[\sigma] \circ F[\tau].
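For Haskell programmers, these two conditions should look familiar: they are exactly the functor laws. Here is a minimal sketch (my own, not from the species literature) of what the lifting looks like for one particular choice of structures, binary trees with labels at the leaves:

-- A bijection packaged together with its inverse.
data Bij u v = Bij { fwd :: u -> v, bwd :: v -> u }

-- An example structure type: binary trees with labels at the leaves.
data Tree u = Leaf u | Branch (Tree u) (Tree u)

-- Lifting a relabeling sigma : U <-> V to F[sigma] : F[U] <-> F[V].
relabel :: Bij u v -> Tree u -> Tree v
relabel s (Leaf u)     = Leaf (fwd s u)
relabel s (Branch l r) = Branch (relabel s l) (relabel s r)

idBij :: Bij u u
idBij = Bij id id

compBij :: Bij v w -> Bij u v -> Bij u w
compBij s t = Bij (fwd s . fwd t) (bwd t . bwd s)

-- The two conditions say precisely that
--   relabel idBij         == id
--   relabel (compBij s t) == relabel s . relabel t

(Of course, relabel is just fmap in disguise; any Haskell Functor built from sums and products lifts relabelings this way, although a species is only required to lift bijections.)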

Of course, all of this may look rather familiar if you know some basic category theory. Consider the category \mathbb{B} whose objects are sets and whose morphisms are bijections. Then all of the above can be summed up by the pithy

A species is an endofunctor on \mathbb{B}.

Whew! We finally made it to the definition. However, working directly with the definition is not very convenient. In my next post I’ll begin explaining the more usual algebraic approach.

At this point I should mention that species were first introduced by André Joyal in his thesis (1981). Unfortunately it is in French, which I cannot read. Fortunately Bergeron, Labelle, and Leroux (1998) wrote an excellent reference text on the subject of species. Unfortunately it is in French too. Fortunately, Margaret Readdy translated it into English!

References

Bergeron, F., G. Labelle, and P. Leroux. 1998. Combinatorial species and tree-like structures. Trans. Margaret Readdy. Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press.

Joyal, André. 1981. “Une Théorie Combinatoire des Séries Formelles.” Advances in Mathematics 42: 1–82.


  1. A note on spelling: generally, “labeled” is the American spelling and “labelled” British (though “labelled” is also in common American usage, according to Merriam-Webster). I try to consistently use the American spelling, but will probably slip up occasionally, and you should use whichever spelling makes you happiest.

  2. I’ve seen this pattern show up multiple times in different category-theoretic contexts, but I don’t feel qualified to comment on it more generally. If you have any pointers to more general discussion of this idea/phenomenon I’d appreciate it.


foldr is made of monoids

> import Data.Monoid

In his recent blog post What is foldr made of?, Tom Ellis made the clever observation that foldr is equivalent in power to the combination of map and compose, where

> compose :: [a -> a] -> a -> a
> compose []     = id
> compose (f:fs) = f . compose fs

We can then write

> foldr' :: (a -> b -> b) -> b -> [a] -> b
> foldr' g z xs = compose (map g xs) z

which (as Tom proves) is exactly equivalent to the usual foldr.
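As a quick sanity check (my own, not from Tom’s post):

compose [(*2), (+1)] 3    --  (*2) ((+1) 3)          == 8
foldr' (+) 0 [1,2,3]      --  ((1+) . (2+) . (3+)) 0  == 6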

This is a really neat idea which I don’t remember ever seeing before. But now that I have, I realized that I could expand on it a bit—compose doesn’t come out of thin air quite as much as it might first appear.

Consider the standard function mconcat (from Data.Monoid), which combines a list of values from some Monoid:

mconcat :: Monoid m => [m] -> m
mconcat []     = mempty
mconcat (m:ms) = m `mappend` mconcat ms

I claim that in the presence of map, compose and mconcat are equivalent. Why is that? First, it’s easy to see that compose can be implemented in terms of mconcat—we just instantiate it to the monoid of endofunctions (where mempty = id and mappend = (.)), with a little bit of syntactic noise due to the need to convert in and out of the Endo newtype:

> compose' :: [a -> a] -> a -> a
> compose' = appEndo . mconcat . map Endo

Proving that compose' is the same as compose is not hard; I leave it to you as an exercise.

Implementing mconcat in terms of compose is a bit more interesting:

> mconcat' :: Monoid m => [m] -> m
> mconcat' = ($mempty) . compose . map mappend

The key idea is that we can turn any value from some monoidal type m into a function m -> m by partially applying mappend; composing these functions then corresponds to combining the original values, and the final value can be recovered by applying the resulting function to mempty.¹ That is,

m_1 \diamond m_2 \diamond \dots \diamond m_n = ((m_1 \diamond) \circ (m_2 \diamond) \circ \dots \circ (m_n \diamond))\ \varepsilon

(where I have used \diamond and \varepsilon to represent mappend and mempty, respectively). Written out this way, I hope you can see why the equality holds by thinking about what the composition on the right evaluates to (and remembering the right identity law, x \diamond \varepsilon = x).
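Concretely, working out the three-element case step by step:

((m_1 \diamond) \circ (m_2 \diamond) \circ (m_3 \diamond))\ \varepsilon = m_1 \diamond (m_2 \diamond (m_3 \diamond \varepsilon)) = m_1 \diamond (m_2 \diamond m_3) = m_1 \diamond m_2 \diamond m_3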

So we can also say that foldr = map + mconcat! This gets at the idea that lists are free (or initial) monoids, which intuitively means that of all monoids, they “preserve the most information”—up to the monoid laws, combining lists preserves all the information about the original lists and how you have combined them. This also means that there is a canonical way to “convert” a list into any other Monoid: that is, given a mapping f :: a -> m, there is a canonical way to take a list [a] and turn it into an m, namely, mconcat . map f.

Let’s make the connection to foldr even more explicit. First, let’s swap around the order of arguments and add some parentheses which aren’t strictly necessary:

foldr'' :: (a -> (b -> b)) -> [a] -> (b -> b)

Hmm, b -> b… why does this look so familiar? I claim we can actually pull a similar trick² as with compose/mconcat, and replace b -> b with Monoid m => m to come up with a function equivalent to foldr'' (and hence foldr as well):

fold :: Monoid m => (a -> m) -> [a] -> m

Hmm, so how would this be implemented? Let’s see…

> fold :: Monoid m => (a -> m) -> [a] -> m
> fold f = mconcat . map f
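For example (a quick illustration of my own, using the Sum and Product wrappers from Data.Monoid):

ghci> getSum (fold Sum [1,2,3,4])
10
ghci> getProduct (fold Product [1,2,3,4])
24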

So actually, fold itself (and hence, equivalently, foldr) is the canonical mapping from a list down to any monoid which I mentioned earlier! And here we can see quite directly that fold is indeed equivalent to mconcat + map.

In summary: foldr is equivalent to map plus compose/mconcat, because lists are free monoids.


  1. Incidentally, this amounts to saying that mappend itself is a monoid homomorphism (a structure-preserving map between monoids), and this is where difference lists come from. For more, see my recent paper, Monoids: Theme and Variations.

  2. I leave this as an exercise too.


Using multiple versions of GHC in parallel with GNU stow

Do any of the following apply to you?

  • You sometimes hack on GHC.
  • You sometimes want to try out an unreleased GHC version.
  • You maintain a library and want to make sure it builds with several different versions of GHC.
  • You want to try out a newly released version of GHC but without blowing away your existing (working!) GHC install.

As you can see, these are roughly in order from most to least “hard core”, and I expect that the last two are fairly common, even if the first two are not. The common thread, of course, is having multiple versions of GHC installed simultaneously.

The solution

A solution that I’ve found which works quite well (on Linux, and I expect it would work on OSX as well) and deserves to be better known is GNU stow. Here’s how it works: instead of installing things directly in /usr/local (or $HOME/local, or whatever), you instead install in sandboxed locations like /usr/local/stow/ghc-7.6.1. You then tell stow that you want to “install” ghc-7.6.1, and it creates all the necessary symlinks to mirror the directory structure under the ghc-7.6.1 directory into /usr/local. But of course it tracks everything so that you can easily uninstall all these symlinks as well. (And yes, it’s smart about symlinking an entire directory when possible, instead of all the individual files, and later “splitting up” such big-scale links when necessary.)

The pros of this sort of scheme should be clear:

  1. Once you’ve “installed” a particular version you just use it as normal—no need to pass the right directories as flags to tools like cabal or ghc-pkg.
  2. Switching to a different version of GHC is relatively quick (see below).
  3. Installing new versions is pain-free; it’s just as easy to have twenty different versions as it is to have two.

Of course, there are a few downsides as well:

  1. Trusting a Perl script to munge your /usr/local. (Though for what it’s worth, I have been using it heavily for about a year now—I currently have six different versions of GHC installed—and have never once encountered a single problem.)
  2. You can’t use several versions of GHC at once—there is always exactly one “current” version. So if you want to, e.g., do test builds of a library under several versions of GHC in parallel, you need a different solution.

A step-by-step guide

So here’s how this works in more detail. First, of course, you have to install stow itself; I’ll assume you can get it from your OS package manager or know how to build it from source. What you get is simply an executable called stow.

Now, when installing GHC (whether a binary distribution or from source), give the configure step an extra prefix argument, for example:

./configure --prefix=/usr/local/stow/ghc-7.6.1

Just take the directory where you would usually install GHC, and add stow and then an additional subdirectory for the particular version you are installing. This directory doesn’t have to already exist; the installer will create it if it doesn’t.

(Note that if you already have GHC installed, you’ll have to uninstall it first and then reinstall it with stow: stow won’t work unless it can manage all your GHC installs.)

After the installation process completes, you have an unusable ghc because, of course, .../stow/ghc-7.6.1 isn’t on any relevant paths. All you have to do to finish installing it is

cd /usr/local/stow && stow ghc-7.6.1

This instructs stow to create symlinks in /usr/local pointing into ghc-7.6.1. After this completes, that’s it! You can use your new GHC as usual.

To uninstall, follow the same procedure but give stow the -D option:

cd /usr/local/stow && stow -D ghc-7.6.1

To switch from one version of GHC to another just uninstall the current version and then install the new one. I actually have a script which automates this process: it looks in the stow directory to see which versions are available, prompts me to choose one from a list, then uninstalls the current version (obtained with ghc --numeric-version) and installs the newly selected version. On my laptop this process takes just a couple of seconds.
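In case it’s useful, here is a rough sketch along the same lines (not my actual script; it assumes all your GHC versions live under /usr/local/stow and that you can write there):

#!/bin/bash
set -e
cd /usr/local/stow

echo "Available GHC versions:"
ls -d ghc-*

read -p "Version to switch to (e.g. 7.6.1): " new

# Unstow the currently active GHC, if there is one.
if command -v ghc >/dev/null 2>&1; then
  stow -D "ghc-$(ghc --numeric-version)"
fi

stow "ghc-$new"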

That’s it! Happy stowing!


Decomposing data structures

So, what are combinatorial species? As a very weak first approximation, you can think of them as a generalization of algebraic data types.¹ That doesn’t really say much about what they are, but at least it does explain why programmers might be interested in them.

The goal of species is to have a unified theory of structures, or containers. By a structure we mean some sort of “shape” containing locations (or positions). Here are two different structures, each with eight locations:

One thing that’s important to get straight from the beginning is that we are talking about structures with labeled locations. The numbers in the picture above are not data being stored in the structures, but names or labels for the locations. To talk about a data structure (i.e. a structure filled with data), we would have to also specify a mapping from locations to data, like \{ 0 \mapsto \texttt{'s'}, 1 \mapsto \texttt{'p'}, 2 \mapsto \texttt{'e'}, \dots \}.

Now go reread the above paragraph! For programmers I find that this is one of the most difficult things to grasp at first—or at least one of the things that is easiest to forget. The fact that the labels are often natural numbers (which are often also used as sample data) does not help.

One useful intuition is to think of the labels as memory addresses, which point off to some location where a data value is stored. This intuition has some particularly interesting consequences when we get to talking about operations like Cartesian product and functor composition, since it gives us a way to model sharing (albeit only in limited ways).
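In Haskell terms, one might model this separation along the following lines (a sketch of my own, not from the species literature), with labels acting as keys into a separate store:

import qualified Data.Map as M

-- A shape whose locations hold labels, not data.
data Tree l = Leaf l | Branch (Tree l) (Tree l)

-- A data structure = a labeled shape plus a mapping from labels to data.
type Labeled f l d = (f l, M.Map l d)

example :: Labeled Tree Int Char
example =
  ( Branch (Leaf 0) (Branch (Leaf 1) (Leaf 2))
  , M.fromList [(0, 's'), (1, 'p'), (2, 'e')] )

Two shapes paired with the same store would then automatically share data through their common labels, which is the limited kind of sharing alluded to above.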

Why have labels at all? In the tree shown above, we can uniquely identify each location by a path from the root of the tree, without referencing their labels at all. However, the other structure illustrates one reason why labels are needed. The circle is supposed to indicate that the structure has rotational symmetry, so there would be no way to uniquely refer to any location other than by giving them labels.

The idea of decomposing data structures as shapes with locations combined with data is not unique to species. In the computer science community, the idea goes back, I think, to Jay and Cockett (1994) in their work on “shapely types” (their “locations” are always essentially natural numbers, since they work in terms of shapes and lists of data) and more recently Abbott, Altenkirch, and Ghani (2003) with their definition of “containers” (which, like the theory of species, has a much more general notion of locations). However, it should be noted that the literature on species never actually talks about mappings from labels to data: combinatorialists don’t care about data structures, they only care about structures!

Now that we have some motivation, and with the requisite disclaimers about labels out of the way, in my next post I’ll motivate and explain the formal definition of species.

References

Abbott, Michael, Thorsten Altenkirch, and Neil Ghani. 2003. “Categories of Containers.” In Foundations of Software Science and Computation Structures, 23–38. http://dx.doi.org/10.1007/3-540-36576-1_2.

Jay, C. Barry, and J. Robin B. Cockett. 1994. “Shapely Types and Shape Polymorphism.” In ESOP ’94: Proceedings of the 5th European Symposium on Programming, 302–316. London, UK: Springer-Verlag.


And now, back to your regularly scheduled combinatorial species

I’ve already mentioned this to people here and there, but haven’t yet announced it publicly, so here it is: Stephanie Weirich and I have been awarded a grant from the NSF to study the intersection of combinatorial species and (functional) programming, and so I’ll be returning to the topic for my dissertation.

I’ve always found blogging to be an excellent way to organize my thoughts, and it often prompts great feedback and insights from readers which fuel further exploration. So as one aspect of my research, I plan to write a series of blog posts explaining the theory of combinatorial species and its relationship to algebraic data types. I’ll start with the very basics and (hopefully) progress to some deeper results, pulling together references to related things along the way.

I’ve written about species on this blog before (here, here, here, here, and here), and I published a paper in the 2010 Haskell Symposium on the topic, so I’ll certainly end up duplicating some of that content. But it’s worth starting over from the beginning, for several reasons:

  • I want as many people as possible to be able to follow along, without having to tell them “first go back and read these blog posts from 2009”.
  • I’m not completely happy with the way I presented some of that material in the past; in the intervening years I feel I’ve had some better insights into how everything fits together.
  • Those previous posts—and my Haskell Symposium paper—conflated explaining species with explaining my Haskell library for computing with species,¹ which I now think is not all that helpful, because it glosses over too many subtle issues with the relationship of species to algebraic data types.

So, in my next post, I’ll begin by defining species—but with some extra context and insight that I hope you’ll find enlightening, even if you already know the definition.


  1. It’s on Hackage here, but I haven’t touched it in a long time and it doesn’t build with recent versions of GHC. I plan to fix that soon.
