Random binary trees with a size-limited critical Boltzmann sampler

Today I’d like to talk about generating random trees. First, some imports and such (this post is literate Haskell).

> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> 
> module BoltzmannTrees where
> 
> import           Control.Applicative
> import           Control.Arrow                  ((&&&))
> import           Control.Lens                   ((??))
> import           Control.Monad.Random
> import           Control.Monad.Reader
> import           Control.Monad.State
> import           Control.Monad.Trans.Maybe
> import           Data.List                      (sort)
> import           Data.Maybe                     (fromJust)
> import           System.Environment             (getArgs)

So here’s a simple type of binary tree shapes, containing no data:

> data Tree = Leaf | Branch Tree Tree
>   deriving Show

We’ll count each constructor (Leaf or Branch) as having a size of 1:

> size :: Tree -> Int
> size Leaf = 1
> size (Branch l r) = 1 + size l + size r

Now, suppose we want to randomly generate these trees. This is an entirely reasonable and useful thing to do: perhaps we want to, say, randomly test properties of functions over Tree using QuickCheck. Here’s the simplest, most naïve way to do it:

> randomTree :: (Applicative m, MonadRandom m) => m Tree
> randomTree = do
>   r <- getRandom
>   if r < (1/2 :: Double)
>     then return Leaf
>     else Branch <$> randomTree <*> randomTree

We choose each of the constructors with probability 1/2, and recurse in the Branch case.

Now, as is well-known, this works rather poorly. Why is that? Let’s generate 100 random trees and print out their sizes in descending order:

ghci> reverse . sort . map size <$> replicateM 100 randomTree
  [118331,7753,2783,763,237,203,195,163,159,73,65,63,49,41,39,29,29,23,23,21,19,19,15,11,9,9,9,9,7,7,7,5,5,5,5,5,5,5,5,5,3,3,3,3,3,3,3,3,3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]

As you can see, this is a really weird distribution of sizes. For one thing, we get lots of trees that are very small—in fact, it’s easy to see that we expect about 50 of them to be single leaf nodes. The other weird thing, however, is that we also get some really humongous trees. The above output gets randomly regenerated every time I process this post—so I don’t know exactly what sizes you’ll end up seeing—but it’s a good bet that there is at least one tree with a size greater than 10^4. To get an intuitive idea of why this happens, imagine generating the tree in a breadth-first manner. At each new level we have a collection of “active” nodes corresponding to pending recursive calls to randomTree. Each active node generates zero or two new active nodes on the next level with equal probability, so on average the number of active nodes remains the same from level to level. So if we happen to make a lot of Branch choices right off the bat, it may take a long time before the tree “thins out” again. And if this distribution didn’t seem weird enough already, it turns out (though it is far from obvious how to prove this) that the expected size of the generated trees is infinite!

The usual solution with QuickCheck is to use the sized combinator to limit the size of generated structures, but this does not help with the problem of having too many very small trees.
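
For concreteness, here is roughly what the sized approach looks like (a sketch, not part of this post’s module; it assumes Test.QuickCheck, and sizedTree is a name I made up):

import Test.QuickCheck

sizedTree :: Gen Tree
sizedTree = sized go
  where
    go 0 = return Leaf
    go n = oneof [ return Leaf
                 , Branch <$> go (n `div` 2) <*> go (n `div` 2) ]

Halving the budget at each Branch keeps trees from blowing up past the size parameter, but Leaf is still chosen half the time at every node, so tiny trees remain just as common.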

Here’s a (seemingly!) stupid idea. Suppose we want to generate trees of size approximately 100 (say, within 10%). Let’s simply use the above algorithm, but with the following modifications:

  1. If we generate a tree of size < 90, throw it away and start over.
  2. If we generate a tree of size > 110, throw it away and start over. As an optimization, however, we will stop as soon as the size goes over 110; that is, we will keep track of the current size while generating and stop early if the size gets too big.

Here’s some code. First, a monad onion:

> newtype GenM a = GenM 
>     { unGenM :: ReaderT (Int,Int) (StateT Int (MaybeT (Rand StdGen))) a }
>   deriving (Functor, Applicative, Monad, MonadPlus, MonadRandom,
>             MonadState Int, MonadReader (Int,Int))

The ReaderT holds the min and max allowed sizes; the StateT holds the current size; the MaybeT allows for possible failure (if the tree gets too big or ends up too small), and the Rand StdGen is, of course, for generating random numbers. To run a computation in this monad we take a target size and a tolerance and use them to compute minimum and maximum sizes. (The (??) in the code below is an infix version of flip, defined in the lens package.)
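For instance, (evalRand ?? g) m is the same as evalRand m g.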

> runGenM :: Int -> Double -> GenM a -> IO (Maybe a)
> runGenM targetSize eps m = do
>   let wiggle  = floor $ fromIntegral targetSize * eps
>       minSize = targetSize - wiggle
>       maxSize = targetSize + wiggle
>   g <- newStdGen
>   return . (evalRand ?? g) . runMaybeT . (evalStateT ?? 0)
>          . (runReaderT ?? (minSize, maxSize)) . unGenM
>          $ m

Here’s the code to try generating a tree: we call the atom function to record the increase in size, and choose between the two constructors with equal probability. atom, in turn, handles failing early if the size gets too big.

> genTreeUB :: GenM Tree
> genTreeUB = do
>   r <- getRandom
>   atom
>   if r <= (1/2 :: Double)
>     then return Leaf
>     else Branch <$> genTreeUB <*> genTreeUB
> 
> atom :: GenM ()
> atom = do
>   (_, maxSize) <- ask
>   curSize <- get
>   when (curSize >= maxSize) mzero
>   put (curSize + 1)

genTreeLB resets the size counter to zero, calls genTreeUB, and then performs the lower-bound check on the resulting size.

> genTreeLB :: GenM Tree
> genTreeLB = do
>   put 0
>   t <- genTreeUB
>   tSize <- get
>   (minSize, _) <- ask
>   guard $ tSize >= minSize
>   return t

Finally, genTree just calls genTreeLB repeatedly until it succeeds.

> genTree :: GenM Tree
> genTree = genTreeLB `mplus` genTree

Let’s make sure it works:

ghci> map size . fromJust <$> runGenM 100 0.1 (replicateM 30 genTree)
  [105,91,105,103,107,101,105,93,93,93,95,91,103,91,91,107,105,103,97,95,105,107,93,97,93,103,91,103,101,95]

Neat! Okay, but surely this is really, really slow, right? We spend a bunch of time just throwing away trees of the wrong size. Before reading on, would you care to guess the asymptotic time complexity to generate a tree of size n using this algorithm?

And while you think about that, here is a random binary tree of size approximately 1000.

And the answer is… it is linear! That is, it takes O(n) time to generate a tree of size n. This is astounding—it’s the best we could possibly hope for, since of course it takes at least O(n) time just to write down an object of size n. If you don’t believe me, I invite you to run some experiments with this code yourself. I did, and it sure looks linear:

main = do
  [sz] <- getArgs
  Just ts <- runGenM (read sz) 0.1 $ replicateM 1000 genTree
  print . (/ fromIntegral (length ts)) . fromIntegral . sum . map size $ ts

archimedes :: research/species/boltzmann » time ./GenTree 50
49.682
./GenTree 50  1.37s user 0.01s system 99% cpu 1.387 total
archimedes :: research/species/boltzmann » time ./GenTree 100
99.474
./GenTree 100  3.11s user 0.02s system 99% cpu 3.152 total
archimedes :: research/species/boltzmann » time ./GenTree 200
198.494
./GenTree 200  6.82s user 0.04s system 99% cpu 6.876 total
archimedes :: research/species/boltzmann » time ./GenTree 400
398.798
./GenTree 400  13.08s user 0.08s system 99% cpu 13.208 total
archimedes :: research/species/boltzmann » time ./GenTree 800
795.798
./GenTree 800  25.99s user 0.16s system 99% cpu 26.228 total

The proof of this astounding fact uses some complex analysis which I do not understand; I wish I were joking. Of course, the constant factor can be big, depending on how small you set the “epsilon” allowing for wiggle room around the target size.1 But it is still quite feasible to generate rather large trees (with, say, 10^5 nodes).

There is much, much more to say on this topic. I just wanted to start out with a simple example before jumping into more of the technical details and generalizations, which I plan to write about in future posts. I also hope to package this and a bunch of other stuff into a library. In the meantime, you can read Duchon et al.2 if you want the details.


  1. Actually, if you set epsilon to zero, the asymptotic complexity jumps to O(n^2).

  2. Duchon, Philippe, et al. “Boltzmann samplers for the random generation of combinatorial structures.” Combinatorics, Probability and Computing 13.4-5 (2004): 577-625.


Beeminding for fun and profit

I’ve been using Beeminder (which I’ve mentioned once before) for a little over six months now. The verdict?

Beeminder has changed my life.

That sounds dramatic, but I’m not kidding. I am far more productive than I’ve ever been. I’m taking better care of myself. I’m finally taking the initiative to act on various long-held intentions (e.g. learning Hebrew). And I no longer have a constant nagging sense of guilt over all the big goals and projects that I ought to be working on more. It’s not for everyone, but I’m sure there are many others for whom it could be similarly transformative.

So, what is Beeminder? The basic idea is that it helps you keep track of progress on any quantifiable goals, and gives you short-term incentive to stay on track: if you don’t, Beeminder takes your money. But it’s not just about the fear of losing money. Shiny graphs tracking your progress coupled with helpfully concrete short-term goals (“today you need to write 1.3 pages of that paper”) make for excellent positive motivation, too. Another somewhat intangible but important reason it works is that the Beeminder developers are really awesome and responsive, and are sincerely dedicated to helping their users meet goals, not just to making money. (They recently introduced some paid premium plans which I happily signed up for, not because I need the premium features, but because I want to support continued development—in fact, I’ve otherwise paid Beeminder only $5 over the past six months!) If you want to know more, I encourage you to read Beeminder’s own overview, which does a much better job of explaining how and why it works.

Six months: quite long enough for the initial “shiny new toy” enthusiasm to wear off, and long enough, I think, to get a good sense of what works for me and what doesn’t. So I’m writing this post in the hope that my experience will be useful or inspiring to others.

So here are some of the ways I’m using it, which I have found to work well. (You can see all my Beeminder goals here.1) I hope some of these may inspire you with ways to make yourself more productive, whether you use Beeminder or not.

  • Big projects

    Consistently spending time on big, long-term projects is really hard—at least, it was hard before I started using Beeminder! Now I just make a goal for each project requiring me to spend a certain amount of time on it each week. This helps me stay on track and also gets rid of that nagging guilt—once I’ve done enough to stay on track, I can stop and do other things and not feel guilty about it! I’ve used this to spend a certain amount of time preparing for courses I’m going to teach; I use it for getting research done, and for working on diagrams. A year or so ago I posted on Google+ complaining that I needed a scheduling algorithm for my life, and in many ways Beeminder has filled that role. It’s also a great way to get some cold, hard data on how much time I actually spend on various projects (you can start with a “flat” goal and just record data for a while if you don’t know what a reasonable rate for the goal is).

  • Reading and writing projects

    There’s no way I ever would have gotten my thesis proposal written without Beeminder. The important thing to note is that the goal was based on page count rather than time spent. (The “Odometer” goal type is useful for this sort of thing.) This forced me to actually get real writing done, rather than frittering time away adjusting the kerning or whatever. Interestingly, it also forced me to get creative about padding the page count, by pasting in text I’d already written before (from blog posts, grant proposals, etc.). In the end, reusing text I’d written before and then editing it was a much better use of my time than writing everything from scratch, but for whatever reason I’m not sure my perfectionist self would have done it without the pressure of “you have to write two pages in the next three hours OR ELSE”.

    I’ve also made goals for reviews I’ve been asked to do, again using an “Odometer” goal to track page numbers.

    I also have a goal to write blog posts with a certain frequency on either of my two blogs. (In fact, I’m finally finishing this blog post because otherwise in about an hour I’m going to owe Beeminder $5!)

  • Learning

    I have long intended to learn to read Hebrew, but it never seemed like the “right time”. I finally admitted that there will never be a “right time”, and just started.2 Starting is one thing; continuing to regularly study after the initial excitement has worn off is only possible because of my Beeminder goal, which also serves as a check on discouragement. It will be a long time before I am any good at reading Hebrew; but in the meantime I am motivated by logging time on my goal.

    I use Anki for memorizing all sorts of things—ancient Greek and Hebrew vocabulary, recipes, emacs commands, and names and faces of students. To help me stay on track reviewing flash cards, I have a Beeminder goal to review 100 Anki cards a day. Recently, the number of cards coming due each day started dropping significantly below 100, so instead of lessening the Beeminder goal I decided to start learning some geography (flags, countries, capitals, etc.), which has been a lot of fun.

  • Productivity

    I have a number of goals directly intended to increase my productivity.

    • I use FogBugz to keep track of all my tasks and todos. I use three different Beeminder goals in relation to FogBugz:

      • As described in a previous blog post, I have one goal to close a certain number of cases per day (currently 4 per day, which is historically about average for me). This goal is automatically updated every time I close a case in FogBugz.
      • When I get an email requiring me to act or respond in some way, I very often just forward it to FogBugz to deal with later. So I have a goal to spend a certain amount of time dealing with cases in my FogBugz inbox; otherwise it’s too easy to just let these rot.
      • It’s way too easy to ignore todo items which have no real deadline and are somehow distasteful, intimidating, or both. To help overcome this inertia, I’ve come up with something that works fairly well. I have a Beeminder goal to spend a certain amount of time doing “FogBugz review”. It works like this: I have a certain query defined in FogBugz which shows me the five least recently edited open tickets. When working on review, I must pick one of these five cases and make some sort of progress on it (it’s perfectly fine if I don’t complete it). After making some progress I add a note to the ticket explaining what I did. This both helps me pick up where I left off next time I come to work on the ticket, and makes the ticket automatically drop out of the review query, since it has now been edited. I then look at the top five tickets again (including some new ticket that has now moved into the top five), choose one, and repeat.
    • I have found that I am much more productive if at the start of each day I intentionally plan out the rest of the day, recalling the things I have scheduled and deciding how to spend the remaining unscheduled time—consulting FogBugz and Beeminder to decide what my priorities should be for the day and how much time to spend on each. To force myself to do this consistently, I of course made a Beeminder goal to do this planning a certain number of days each week. The catch is that I have to do the planning before checking email, Facebook, or IRC, or else the planning only counts for half a day.3

    • Another thing which I’ve found helps my productivity is to turn off my computer before going to bed. The choices I make when I first get up tend to have a ripple effect on the rest of the day. If my computer is on when I first get up, it’s very tempting to immediately start aimlessly checking email; if it’s off, it’s that much easier to make deliberate choices about how to begin my day. The important point here is that I’ve made a positive goal (to turn off my computer) instead of a negative goal (to spend less than X amount of time checking email, etc., in the morning). I’ve found that negative goals don’t work nearly as well: they are far less motivating, and psychologically speaking it’s too easy to lie to Beeminder by neglecting to report data—by contrast, actively lying by submitting false data is much more difficult.

  • Personal goals

    Last but not least, I now take better care of myself and my stuff in some simple but important ways. I have flossed more in the last six months than the rest of my life put together. I trim my beard and my toenails more regularly, take allergy medication almost every day, take care of my bike by inflating the tires and greasing the chain, and clean around the house (which my wife loves).

So there you have it. If you end up trying Beeminder, or come up with some cool goal-based life hacks, or just have questions, I’d love to hear from you!


  1. You’ll notice that some of my goals are private/hidden. Mostly these are personal or relate to religious commitments, and for various reasons I’d rather not broadcast them to the whole Internet—but at the same time, I have no secrets and would be glad to discuss them with anyone who’s interested.

  2. Well, at the time it was a way of procrastinating from working on my thesis proposal.

  3. This is completely self-enforced, of course, but it’s ten times harder to actively choose to lie to Beeminder (which I have never done) than it was to “just check a few emails first” before I had any sort of external accountability.


Introducing diagrams-haddock

I am quite pleased to announce the release of diagrams-haddock, a tool enabling you to easily include programmatically generated diagrams in your Haddock documentation. Why might you want to do this? “A picture is worth a thousand words”—in many instances a diagram or illustration can dramatically increase the comprehension of users reading your library’s documentation. The diagrams project itself will be using this for documentation, beginning with the diagrams-contrib package (for example, check out the documentation for Diagrams.TwoD.Path.IteratedSubset). But inline images can benefit the documentation of just about any library.

Before jumping into a more detailed example, here are the main selling points of diagrams-haddock:

  1. You get to create arbitrary images to enhance your documentation, using the powerful diagrams framework.

  2. The code for your images goes right into your source files themselves, alongside the documentation—there is no need to maintain a bunch of auxiliary files, or (heaven forbid) multiple versions of your source files.

  3. Images are regenerated when, and only when, their definition changes—so you can include many diagrams in your documentation without having to recompile all of them every time you make a change to just one.

  4. You have to do a little bit of work to integrate the generated images into your Cabal package, but it’s relatively simple and you only have to do it once per package. No one else needs to have diagrams-haddock installed in order to build your documentation with the images (this includes Hackage).

So, how does it work? (For full details, consult the diagrams-haddock documentation.) Suppose we have some Haddock documentation that looks like this:

-- | The foozle function takes a widget and turns it into an
--   infinite list of widgets which alternate between red and
--   yellow.
--
foozle :: Widget -> [Widget]
foozle = ...

It would be really nice to illustrate this with a picture, don’t you think? First, we insert an image placeholder like so:

-- | The foozle function takes a widget and turns it into an
--   infinite list of widgets which alternate between red and
--   yellow.
--
--   <<dummy#diagram=foozleDia&width=300>>
--
foozle :: Widget -> [Widget]
foozle = ...

It doesn’t matter what we put in place of dummy; diagrams-haddock is going to shortly replace it anyway. The stuff following the # is a list of parameters to diagrams-haddock: we tell it to insert here an image built from the diagram called foozleDia, and that it should have a width of 300 pixels.

Now we just have to give a definition for foozleDia, which we do simply by creating a code block (set off with bird tracks) in a comment:

-- | The foozle function takes a widget and turns it into an
--   infinite list of widgets which alternate between red and
--   yellow.
--
--   <<dummy#diagram=foozleDia&width=300>>
--
foozle :: Widget -> [Widget]
foozle = ...

-- > widget =
-- >   (  stroke (circle 1.25 <> circle 0.75 # reversePath)
-- >   <> mconcat (iterateN 10 (rotateBy (1/10)) (square 0.5 # translateX 1.3))
-- >   )
-- >   # lw 0
-- >
-- > foozleDia =
-- >   hcat' with {sep = 2}
-- >   [ widget # fc black
-- >   , hrule 4 # alignR <> triangle 1 # rotateBy (-1/4) # fc black
-- >   , hcat' with {sep = 0.5} (zipWith fc (cycle [red, yellow]) (replicate 6 widget))
-- >   ]

Note that this definition for foozleDia isn’t in a Haddock comment, so it won’t be typeset in the Haddock output. (However, if you want users reading your documentation to see the code used to generate the pictures—as, e.g., we often do in the documentation for diagrams itself—it’s as simple as sticking the definitions in a Haddock comment.) It also doesn’t have to go right after the definition of foozle—for example, we could stick it all the way at the end of the source file if we didn’t want it cluttering up the code.

Now we simply run diagrams-haddock on our file (or on the whole Cabal project), and it will generate an appropriate SVG image and replace <<dummy#...>> with something like <<diagrams/foozleDia.svg#...>>. The Haddock documentation then displays the generated foozleDia image after the documentation for foozle. Hooray! Note that diagrams-haddock only replaces the stuff before the # (the clever bit is that browsers will ignore everything after the #). Running diagrams-haddock again at this point will do nothing. If we change the definition of foozleDia and then rerun diagrams-haddock, it will regenerate the image.

Okay, but how will others (or, for that matter, Hackage) be able to see the diagram for foozle when they build the documentation, without needing diagrams-haddock themselves? It’s actually fairly straightforward—we simply include the generated images in the source tarball, and tell cabal to copy the images in alongside the documentation when it is built, using either a custom Setup.hs, or (once it is released and sufficiently ubiquitous) the new extra-html-files: field in the .cabal file. The diagrams-haddock documentation has full details with step-by-step instructions.
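
If you go the custom Setup.hs route, a minimal sketch might look something like the following (this is my own illustration, not the recipe from the diagrams-haddock documentation: the destination path assumes cabal’s default dist layout, and mypackage is a placeholder for your package’s name):

import Control.Monad (forM_)
import Distribution.Simple
import System.Directory (copyFile, createDirectoryIfMissing,
                         getDirectoryContents)
import System.FilePath ((</>), takeExtension)

main :: IO ()
main = defaultMainWithHooks simpleUserHooks
  { postHaddock = \_ _ _ _ -> do
      -- copy the checked-in SVGs in alongside the freshly built HTML docs
      let src  = "diagrams"
          dest = "dist" </> "doc" </> "html" </> "mypackage" </> "diagrams"
      createDirectoryIfMissing True dest
      files <- getDirectoryContents src
      forM_ (filter ((== ".svg") . takeExtension) files) $ \f ->
        copyFile (src </> f) (dest </> f)
  }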

I hope this silly example has piqued your interest; again, for full details please consult the diagrams-haddock documentation. Now go forth and illustrate your documentation!


BlogLiterately 0.6

I’m very proud to announce the release of BlogLiterately version 0.6, a tool for formatting and uploading blog posts, including syntax highlighting, generation of ghci sessions, LaTeX support, automatic image uploading, and more.

tl;dr: Instead of cumbersomely specifying all options on the command-line, you can now specify options using a combination of “profiles” (e.g. for common sets of options such as blog URL and password) and options embedded within the .markdown or .lhs documents themselves (e.g. for post-specific options like title, tags, and categories).

There are a few other changes and improvements as well. For more information, see the documentation or keep reading below!

Specifying options

With previous releases, uploading a post usually went something like this:

BlogLiterately MyPost.md --blog "http://my.blog.url/xmlrpc.php" \
  --user me --password 1234567 --postid 9999 --title "My awesome post" \
  --tag tag1 --tag tag2 --tag tag3 --category Stuff \
  --category OtherStuff --ghci --wplatex

which is incredibly tedious and error-prone. Now we do things the Right Way ™. First, you can create one or more profiles, specifying a common set of options that can be referred to by name. For example, you might have a profile for a particular blog, or a profile for a particular type of post which always needs the same options. Suppose we put this in $HOME/.BlogLiterately/foo.cfg (or in something like C:/Documents And Settings/user/Application Data/BlogLiterately/foo.cfg on Windows):

blog        = http://my.blog.url/xmlrpc.php
user        = me
password    = 1234567
wplatex     = true

Now the previous command line is reduced to

BlogLiterately MyPost.md -P foo --postid 9999 --title "My awesome post" \
  --tag tag1 --tag tag2 --tag tag3 --category Stuff \
  --category OtherStuff --ghci

which is already a big improvement! But it doesn’t stop there. The title, tags, categories, and other such things are really inherent to the post itself; there’s no reason they should go on the command line. So, we add this indented block somewhere in MyPost.md (probably near the top, though it doesn’t matter):

    [BLOpts]
    profile    = foo
    postid     = 9999
    title      = "My awesome post"
    tags       = tag1, tag2, tag3
    categories = Stuff, OtherStuff
    ghci       = true

And now we only have to write

BlogLiterately MyPost.md

with no options on the command line at all! Notice how we can even specify which profile to use in the [BLOpts] block. When we’re satisfied with the post we can publish it with

BlogLiterately MyPost.md --publish

Generating HTML only

In the past, to get a “preview” version of the HTML output written to stdout, all you had to do was omit a --blog option. However, if you specify a profile with a blog field as in the above example, this is more problematic. For this reason, a new option --html-only has been added. When this option is specified, nothing is uploaded, and the HTML output is written to stdout.
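
For example, to render a post to HTML locally (a hypothetical invocation, reusing the foo profile from above):

BlogLiterately MyPost.md -P foo --html-only > MyPost.html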

Changes to Transforms

In order to make the above features possible, the definition of Transform has changed. This only affects those users who have created their own custom transformations. The definition used to be

data Transform
  = Transform
    { getTransform :: BlogLiterately -> Kleisli IO Pandoc Pandoc
    , xfCond       :: BlogLiterately -> Bool
    }

that is, a Transform was a transformation on Pandoc documents, parameterized by an options record and able to have effects in the IO monad. The definition is now

data Transform
  = Transform
    { getTransform :: StateT (BlogLiterately, Pandoc) IO ()
    , xfCond       :: BlogLiterately -> Bool
    }

meaning that a Transform is able to transform both a Pandoc document and the options record. This is crucial for being able to do things like embedding options within the document itself, because we don’t know all the options until we start processing the document! Also, I switched from using Kleisli arrows to using StateT, since I find it simpler to work with, especially now that multiple pieces of state are involved. For more information and help upgrading, see the documentation for Text.BlogLiterately.Transform.
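
As a concrete example of the new style, here is a (purely hypothetical) Transform which appends a signature paragraph to every post and leaves the options record untouched. The import of Transform from Text.BlogLiterately.Transform and the use of the pandoc builder DSL are my assumptions, not code from the release:

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.State (modify)
import Data.Monoid ((<>))
import Text.BlogLiterately.Transform (Transform (..))
import Text.Pandoc.Builder (doc, para)

-- Tack a fixed paragraph onto the end of the Pandoc document; the
-- BlogLiterately options record passes through unchanged.
sigTransform :: Transform
sigTransform = Transform
  { getTransform = modify (\(bl, d) ->
      (bl, d <> doc (para "Posted with BlogLiterately.")))
  , xfCond       = const True
  }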

Move to github

The other change is that I have moved the BlogLiterately repository from darcshub to github. In general, for small personal projects and miscellaneous sorts of things I use darcs and hub.darcs.net; for larger projects where I want to raise the visibility and encourage contributions from other users, I use github. At some point BlogLiterately crossed the line.

Learning more, and contacting me

For more information, see the full documentation. I’m always happy to receive comments, questions, feature requests, bug reports, and so on, via the bug tracker on github, IRC (byorgey on freenode), or email (the same as my IRC nick, at gmail).


The Dawn of Software Engineering

The Dawn of Software Engineering: From Turing to Dijkstra
Edgar G. Daylight

Edgar sent me a review copy of his book a while back—it made for quite interesting reading and gave me new perspective on the historical origins of my field. I daresay many readers of this blog might be interested in giving it a read.

Alan Turing is widely regarded today as the father of digital computers. But as Daylight argues in this fascinating historical account of the development of computer programming as a discipline in the 1950s and 60s, the real story is much more complicated. Turing’s ideas didn’t actually have much influence on the building of the first computers themselves—but did gradually come to influence the practice of writing computer programs.

It will be interesting to compare and contrast this book with George Dyson’s book Turing’s Cathedral, which I have just begun reading—though I’m not far enough along to make any comparisons yet.

As an aside, I find it interesting that the subfield Dijkstra called “software engineering”—the subfield that was influenced by Turing’s ideas—really seems to comprise what is now known as “programming languages”. “Software engineering” now means something completely different, focusing on the business-oriented, large-scale aspects of building software systems. It’s difficult to imagine “software engineering” these days being influenced by Turing’s ideas (or abstract mathematical ideas, period—though perhaps I am being uncharitable).


The algebra of species: primitives

[This is the fifth in a series of posts about combinatorial species. Previous posts: And now, back to your regularly scheduled combinatorial species; Decomposing data structures; Combinatorial species definition, Species definition clarification and exercises.]

Recall that a species is a functor from \mathbb{B}, the category of finite sets and bijections, to \mathbb{E},1 the category of finite sets and total functions. (Equivalently, species are endofunctors on \mathbb{B}, but in this post I’m going to want to think about them as the former.) That is, a species F is a mapping sending every set of labels U to a set of structures F[U], which also lifts relabelings \sigma : U \leftrightarrow V to functions F[\sigma] : F[U] \to F[V] in a way that respects the compositional structure of bijections.

However, as I hinted in a previous post, it’s inconvenient to work directly with this definition in practice. Instead, we use an algebraic theory that lets us compositionally build up certain species from a collection of primitive species and species operations. (It’s important to note that it does not allow us to build all species, but it does allow us to build many of the ones we care about.)

In this post we’ll begin by examining a few natural species to take as primitive.

  • The zero or empty species, denoted \mathbf{0}, is the unique species with no structures whatsoever; that is,

    \mathbf{0}[U] = \emptyset

    and

    \mathbf{0}[\sigma : U \leftrightarrow V] = id_{\emptyset} : \mathbf{0}[U] \to \mathbf{0}[V].

    Of course, \mathbf{0} will turn out to be the identity element for species sum (which I’ll define in my next post, though it’s not hard to figure out what it should mean).

  • The unit species, denoted \mathbf{1}, is defined by

    \begin{array}{lcl}\mathbf{1}[\emptyset] &=& \{\star\} \\ \mathbf{1}[U] &=& \emptyset \qquad (U \neq \emptyset)\end{array}

    That is, there is a unique \mathbf{1}-structure indexed by the empty set of labels, and no \mathbf{1}-structures with any positive number of labels. \mathbf{1} lifts bijections in the obvious way, sending every bijection to the appropriate identity function.

    Some people initially find this definition surprising, expecting something like \mathbf{1}[U] = \{ \star \} for all U instead. That is indeed a valid species, and we will meet it below; but as I hope you will come to see, it doesn’t deserve the name \mathbf{1}.

    Of course we should also verify that this definition satisfies the requisite functoriality properties, which is not difficult.

    More abstractly, for those who know some category theory, it’s worth mentioning that \mathbf{1} can be defined as \mathbb{B}(\emptyset, -) : \mathbb{B} \to \mathbb{E}, that is, the covariant hom-functor sending each finite set U \in \mathbb{B} to the (finite) set of bijections \emptyset \leftrightarrow U. (This is why I wanted to think of species as functors \mathbb{B} \to \mathbb{E}. I learned this fact from Yeh (1986).) There is, of course, a unique bijection \emptyset \leftrightarrow \emptyset and no bijections \emptyset \leftrightarrow U for nonempty U, thus giving rise to the definition above.

    As you might expect, \mathbf{1} will be the identity element for species product. Like \mathbf{1} itself, species product isn’t defined quite as most people would initially guess. If you haven’t seen it before, you might like to try working out how product can be defined in order to make \mathbf{1} an identity element.

  • The singleton species, denoted \mathbf{X}, is defined by

    \mathbf{X}[U] = \begin{cases} U & |U| = 1 \\ \emptyset & \text{otherwise} \end{cases}

    with lifting of bijections defined in the evident manner. That is, there is a single \mathbf{X}-structure on a label set of size 1 (which we identify with the label itself, though we could have also defined \mathbf{X}[U] = \{\star\} when |U| = 1), and no \mathbf{X}-structures indexed by any other number of labels.

    As with \mathbf{1}, we may equivalently define \mathbf{X} as a hom-functor, namely \mathbf{X} = \mathbb{B}(\{\star\}, -).

    It’s worth noting again that although \mathbf{1} and \mathbf{X} do “case analysis” on the label set U, they actually only depend on the size of U; indeed, as we noted previously, by functoriality this is all they can do.

  • The species of bags2, denoted \mathbf{E}, is defined by

    \mathbf{E}[U] = \{U\},

    that is, there is a single \mathbf{E}-structure on any set of labels U, which we usually identify with the set of labels itself (although we could equivalently define \mathbf{E}[U] = \{\star\}). The idea is that an \mathbf{E}-structure consists solely of a collection of labels, with no imposed ordering whatsoever.

    If you want to abuse types slightly, one can define \mathbf{E} as a hom-functor too, namely \mathbb{E}(-,\{\star\}). (Yeh (1986) actually has \mathbb{B}(-, \{\star\}), but that’s wrong.)

As a summary, here’s a graphic showing \mathbf{0}-, \mathbf{1}-, \mathbf{X}-, and \mathbf{E}-structures arranged by size (i.e., the size of the underlying set of labels U): a dot indicates a single structure, and the size of the label set increases as you move to the right.
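
For Haskell-minded readers, here is a loose transcription of the four primitives as container types. This is only an analogy (and my own, not from the species literature): Haskell types enforce neither the functoriality over bijections nor the size restrictions.

data Zero a                 -- no structures whatsoever
data One  a = One           -- a single structure containing no labels
data X    a = X a           -- a single structure on exactly one label
data E    a = E [a]         -- a structure is just its labels (the list order is spurious)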

Just as a teaser, it turns out that \mathbf{X} and \mathbf{E} are identity elements for certain binary operations on species as well, though you’ll have to wait to find out which ones!

Next up, addition!

References

Yeh, Yeong-Nan. 1986. “The calculus of virtual species and K-species.” In Combinatoire énumérative, ed. Gilbert Labelle and Pierre Leroux, 1234:351–369. Springer Berlin Heidelberg. http://dx.doi.org/10.1007/BFb0072525.


  1. Last time I called this category \mathbf{FinSet}, but \mathbb{E} is more concise and matches the species literature.

  2. The species literature calls this the species of sets, but that’s misleading to computer scientists, who expect the word “set” to imply that elements cannot be repeated.


Diagrams 0.6

I am pleased to announce the release of version 0.6 of diagrams, a full-featured framework and embedded domain-specific language for declarative drawing. Check out the gallery for examples of what it can do!

Highlights of this release include:

  • Diagrams now comes with a native-Haskell SVG backend by default. If you were holding off on trying diagrams because you couldn’t install cairo, you no longer have an excuse!

  • Proper support for subdiagrams: previous versions of diagrams-core had a mechanism for associating names with a pair of a location and an envelope. Now, names are associated with actual subdiagrams (including their location and envelope, along with all the other information stored by a diagram). This enables cool techniques like constructing a diagram in order to position its subelements and then taking it apart again, or constructing animations via keyframing.

  • Traces: in addition to an envelope, each diagram now stores a “trace”, which is like an embedded raytracer: given any ray (represented by a base point and a vector), the trace computes the closest point of intersection with the diagram along the ray. This is useful for determining points on the boundary of a diagram, e.g. when drawing arrows between diagrams.

  • The core data structure underlying diagrams has been completely refactored and split out into its own separate package, dual-tree.

  • Support for GHC 7.6.

  • Many more new features, bug fixes, and improvements! See the release notes for complete details, and the diagrams wiki for help migrating from 0.5 to 0.6.

Try it out

For the truly impatient:

cabal install diagrams

Diagrams is supported under GHC 7.0 through 7.6, with the exception that the cairo and gtk backends do not build under GHC 7.0 (but the SVG backend does), and the gtk backend does not build under GHC 7.6.

To get started with diagrams, read the quick tutorial, which will introduce you to the fundamentals of the framework.

For those who are less impatient and want to really dig in and use the power features, read the user manual.

Get involved

Subscribe to the project mailing list, and/or come hang out in the #diagrams IRC channel on freenode.org for help and discussion. Make some diagrams. Fix some bugs. Submit your cool examples for inclusion in the gallery or your cool code for inclusion in the diagrams-contrib package!

Happy diagramming!

Brought to you by the diagrams team:

  • Michael Sloan
  • Ryan Yates
  • Brent Yorgey

with contributions from:

  • Sam Griffin
  • Niklas Haas
  • Peter Hall
  • Claude Heiland-Allen
  • Deepak Jois
  • John Lato
  • Felipe Lessa
  • Chris Mears
  • Ian Ross
  • Vilhelm Sjöberg
  • Jim Snavely
  • Luite Stegeman
  • Kanchalai Suveepattananont
  • Michael Thompson
  • Scott Walck