Come visit the FARM!

Registration is now open for the first (!) ACM SIGPLAN Workshop on Functional Art, Music, Modeling and Design (FARM), to be held in Boston on September 28 (the day after ICFP). I’m really excited—it’s shaping up to be an awesome event. Check out the list of accepted papers and demos:

  • Samuel Aaron and Alan F. Blackwell. From Sonic Pi to Overtone: Creative Musical Experiences with Domain-Specific and Functional Languages
  • Henrik Bäärnhielm, Mikael Vejdemo-Johansson and Daniel Sundström. Using Haskell as DSL for controlling immersive media experiences (Demo)
  • Guillaume Baudart, Louis Mandel and Marc Pouzet. Programming Mixed Music in ReactiveML
  • Jean Bresson, Raphael Foulon and Marco Stroppa. Reduction as a Transition Controller for Sound Synthesis Events
  • Kelsey D’Souza. PySTEMM – A STEM Learning Tool for Exploring and Building Executable Concept Models (Demo)
  • Andy Gill and Brent A. Yorgey. Functional active animation (Demo)
  • Jason Hemann and Eric Holk. Visualizing the Turing Tarpit
  • Paul Hudak. Euterpea: From Signals to Symphonies (Demo)
  • David Janin, Florent Berthaut, Myriam Desainte-Catherine, Yann Orlarey and Sylvain Salvati. The T-Calculus : towards a structured programing of (musical) time and space
  • David Janin and Florent Berthaut. LiveTuiles for tiled composition of audio patterns (Demo)
  • Thomas Jordan. Spontaneous Musical Explorations of Visible Symmetric Structures (Demo)
  • Hendrik Vincent Koops, José Pedro Magalhães and W. Bas de Haas. A Functional Approach To Automatic Melody Harmonisation
  • José Pedro Magalhães, Bas De Haas, Gijs Bekenkamp, Dion ten Heggeler and Tijmen Ruizendaal. Chordify: Chord Transcription for the Masses (Demo)
  • Donya Quick and Paul Hudak. Grammar-Based Automated Music Composition in Haskell
  • Chung‐chieh Shan and Dylan Thurston. Braiding in circles (Demo)

Note the early registration deadline is August 22, so don’t delay! In case you’re not already convinced, here’s a short description of the workshop and what it’s trying to accomplish:

The functional programming community is largely interested in writing beautiful programs. This workshop is intended to gather researchers and practitioners interested in writing beautiful programs that generate beautiful artifacts. Such artifacts may include visual art, music, 3D sculptures, animations, GUIs, video games, physical models, architectural models, choreographies for dance, poetry, and even physical objects such as VLSI layouts, GPU configurations, or mechanical engineering designs.

The goal of FARM is to gather together researchers, practitioners, and educators in this interdisciplinary field, as well as anyone else with even a casual interest in the area. We wish to share ideas, look for common ground, and encourage more activity. We also hope to legitimize work in the field and facilitate potential collaboration among the participants.

FARM 2013: call for demonstration proposals

Do you enjoy writing beautiful code to produce beautiful artifacts? Have something cool to show off at the intersection of functional programming and visual art, music, sound, modeling, visualization, or design?

The deadline for submitting a paper has passed, but the Workshop on Functional Art, Music, Modeling and Design (FARM 2013) is currently seeking proposals for 10-20 minute demonstrations to be given during the workshop. For example, a demonstration could consist of a short tutorial, an exhibition of some work, or even a livecoding performance. Demonstrations will receive shorter slots than accepted papers and will not be published as part of the formal proceedings, but they can be a great way to show off interesting work and get feedback from other workshop participants. A demonstration slot could be a particularly good way to get feedback on work-in-progress.

A demo proposal should consist of a 1 page abstract, in PDF format, explaining the proposed content of the demonstration and why it would be of interest to the attendees of FARM. Proposals will be judged on interest and relevance to the stated goals and themes of the workshop.

Submissions can be made via EasyChair.

Workshop on Functional Art, Music, Modeling and Design

I’m helping organize a new workshop, FARM, to be held in Boston this September (right after ICFP). Many readers of this blog may have already seen the announcement, but I thought it worth saying a bit more about it here, in the spirit of trying to spread the word as widely as possible.

The short of it is—it should be super interesting and a lot of fun. If you are at all interested in the intersection of functional programming and design, art, music—anything that has to do with using beautiful code to produce beautiful artifacts—you should consider submitting a paper, or planning to attend! Papers can be submitted in two categories: full papers (describing a novel research contribution) and “aesthetic applications” (describing some sort of beautiful way to produce something beautiful). The deadline for submissions is June 14. See the website for more details.

Monad transformers: a cautionary tale

When writing the code in my previous post, I wanted to have a monad which combined the ability to generate random numbers with the ability to fail. Naturally, I decided to use RandT StdGen Maybe. But when I tried to write a failing computation of this type, I got a type error:

    No instance for (MonadPlus (RandT StdGen Maybe))
      arising from a use of `mzero'

It seems that no one ever bothered to add a MonadPlus instance for RandT. Well, that’s easy to fix. Since RandT is just a newtype wrapper around StateT we can even derive a MonadPlus instance automatically using -XGeneralizedNewtypeDeriving. So I modified the MonadRandom package, and everything worked great.
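
Concretely, the change amounts to something like this (a sketch of the relevant MonadRandom definition; the exact deriving list in the real source may differ):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- RandT is a newtype around StateT, so GHC can lift StateT's
-- MonadPlus instance through the wrapper automatically.
newtype RandT g m a = RandT (StateT g m a)
  deriving (Functor, Monad, MonadTrans, MonadIO, MonadPlus)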

…That is, everything worked great until I started to get some strange behavior—sometimes computations would hang when I expected them to complete quickly. I finally was able to boil it down to the following minimal example. foo succeeds or fails with equal probability; bar reruns foo until it succeeds.

foo :: RandT StdGen Maybe ()
foo = do
  r <- getRandomR (0, 1 :: Double)
  if r < 1/2 then return () else mzero

bar :: RandT StdGen Maybe ()
bar = foo `mplus` bar

Seems straightforward, right? bar should always succeed pretty much instantly, since there’s only a 1/2^n chance that it will have to call foo more than n times.

However, this is not what happens: some of the time bar returns instantly as expected, and some of the time it hangs in what seems like an infinite loop! What gives?

Have you figured it out yet? (If you like these sorts of puzzles, you might want to stop and see if you can figure out what was going on before reading further.) The problem is that the mplus operation for RandT StdGen Maybe runs both of its arguments with the same random seed! In other words, when a computation fails, the generator state gets thrown away. And if we think about how monad transformers work, this is actually not surprising. We have the following isomorphisms:

   RandT StdGen Maybe ()
== StateT StdGen Maybe ()
== StdGen -> Maybe ((), StdGen)

So when a computation fails, you just get Nothing—in particular, you don’t get to see what the new StdGen value would have been, so you can’t (say) pass it along to the second argument of mplus. The upshot is that bar succeeds if the first call to foo happens to succeed; otherwise it simply keeps calling foo with the exact same seed, and foo keeps failing every time.

The general principle here is that “the effects of inner monad transformers take precedence over the effects of outer transformers”—in this case the failure effect of the inner Maybe takes precedence and causes the random generator state to be lost.

So what I really wanted was MaybeT (Rand StdGen), which—after adding a MonadRandom instance for MaybeT, now released as MonadRandom-0.1.9—works perfectly.
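
Unrolling the reordered stack the same way makes the difference plain:

   MaybeT (Rand StdGen) ()
== Rand StdGen (Maybe ())
== StdGen -> (Maybe (), StdGen)

Now a failing computation still produces an updated StdGen, which mplus can pass along to its second argument. For completeness, here is what foo and bar look like in the reordered stack (a sketch, relying on the new MonadRandom instance for MaybeT):

foo :: MaybeT (Rand StdGen) ()
foo = do
  r <- getRandomR (0, 1 :: Double)
  if r < 1/2 then return () else mzero

bar :: MaybeT (Rand StdGen) ()
bar = foo `mplus` bar

With this ordering, bar returns almost instantly every time.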

The moral of the story: monad transformers aren’t (in general) commutative! Think carefully about what order you want. (I actually wrote about this once before; you’d think I would have learned my lesson.)

Random binary trees with a size-limited critical Boltzmann sampler

Today I’d like to talk about generating random trees. First, some imports and such (this post is literate Haskell).

> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> 
> module BoltzmannTrees where
> 
> import           Control.Applicative
> import           Control.Arrow                  ((&&&))
> import           Control.Lens                   ((??))
> import           Control.Monad.Random
> import           Control.Monad.Reader
> import           Control.Monad.State
> import           Control.Monad.Trans.Maybe
> import           Data.List                      (sort)
> import           Data.Maybe                     (fromJust)
> import           System.Environment             (getArgs)

So here’s a simple type of binary tree shapes, containing no data:

> data Tree = Leaf | Branch Tree Tree
>   deriving Show

We’ll count each constructor (Leaf or Branch) as having a size of 1:

> size :: Tree -> Int
> size Leaf = 1
> size (Branch l r) = 1 + size l + size r

Now, suppose we want to randomly generate these trees. This is an entirely reasonable and useful thing to do: perhaps we want to, say, randomly test properties of functions over Tree using QuickCheck. Here’s the simplest, most naïve way to do it:

> randomTree :: (Applicative m, MonadRandom m) => m Tree
> randomTree = do
>   r <- getRandom
>   if r < (1/2 :: Double)
>     then return Leaf
>     else Branch <$> randomTree <*> randomTree

We choose each of the constructors with probability 1/2, and recurse in the Branch case.

Now, as is well-known, this works rather poorly. Why is that? Let’s generate 100 random trees and print out their sizes in descending order:

ghci> reverse . sort . map size <$> replicateM 100 randomTree
  [118331,7753,2783,763,237,203,195,163,159,73,65,63,49,41,39,29,29,23,23,21,19,19,15,11,9,9,9,9,7,7,7,5,5,5,5,5,5,5,5,5,3,3,3,3,3,3,3,3,3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]

As you can see, this is a really weird distribution of sizes. For one thing, we get lots of trees that are very small—in fact, it’s easy to see that we expect about 50 of them to be single leaf nodes. The other weird thing, however, is that we also get some really humongous trees. The above output gets randomly regenerated every time I process this post—so I don’t know exactly what sizes you’ll end up seeing—but it’s a good bet that there is at least one tree with a size greater than 10^4.

To get an intuitive idea of why this happens, imagine generating the tree in a breadth-first manner. At each new level we have a collection of “active” nodes corresponding to pending recursive calls to randomTree. Each active node generates zero or two new active nodes on the next level with equal probability, so on average the number of active nodes remains the same from level to level. So if we happen to make a lot of Branch choices right off the bat, it may take a long time before the tree “thins out” again. And if this distribution didn’t seem weird enough already, it turns out (though it is far from obvious how to prove this) that the expected size of the generated trees is infinite!
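
In fact, a quick heuristic (not a proof, but the standard calculation for a critical branching process) suggests why the expectation diverges: if E denotes the expected size of a generated tree, then since Leaf and Branch are each chosen with probability 1/2,

E = 1 + (1/2)(0) + (1/2)(2E) = 1 + E,

an equation with no finite solution.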

The usual solution with QuickCheck is to use the sized combinator to limit the size of generated structures, but this does not help with the problem of having too many very small trees.
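
For concreteness, such a size-limited generator might look something like the following sketch, which uses QuickCheck's sized and oneof. (It is illustrative only, and not part of this post's literate code.)

import Control.Applicative ((<$>), (<*>))
import Test.QuickCheck

instance Arbitrary Tree where
  arbitrary = sized genSized
    where
      -- Halve the size bound at each Branch so the recursion
      -- terminates, but still allow Leaf at every level, so
      -- small trees remain common.
      genSized n
        | n <= 1    = return Leaf
        | otherwise = oneof
            [ return Leaf
            , Branch <$> genSized (n `div` 2) <*> genSized (n `div` 2)
            ]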

Here’s a (seemingly!) stupid idea. Suppose we want to generate trees of size approximately 100 (say, within 10%). Let’s simply use the above algorithm, but with the following modifications:

  1. If we generate a tree of size < 90, throw it away and start over.
  2. If we generate a tree of size > 110, throw it away and start over. As an optimization, however, we will stop as soon as the size goes over 110; that is, we will keep track of the current size while generating and stop early if the size gets too big.

Here’s some code. First, a monad onion:

> newtype GenM a = GenM 
>     { unGenM :: ReaderT (Int,Int) (StateT Int (MaybeT (Rand StdGen))) a }
>   deriving (Functor, Applicative, Monad, MonadPlus, MonadRandom,
>             MonadState Int, MonadReader (Int,Int))

The ReaderT holds the min and max allowed sizes; the StateT holds the current size; the MaybeT allows for possible failure (if the tree gets too big or ends up too small), and the Rand StdGen is, of course, for generating random numbers. To run a computation in this monad we take a target size and a tolerance and use them to compute minimum and maximum sizes. (The (??) in the code below is an infix version of flip, defined in the lens package.)

> runGenM :: Int -> Double -> GenM a -> IO (Maybe a)
> runGenM targetSize eps m = do
>   let wiggle  = floor $ fromIntegral targetSize * eps
>       minSize = targetSize - wiggle
>       maxSize = targetSize + wiggle
>   g <- newStdGen
>   return . (evalRand ?? g) . runMaybeT . (evalStateT ?? 0)
>          . (runReaderT ?? (minSize, maxSize)) . unGenM
>          $ m

Here’s the code to try generating a tree: we call the atom function to record the increase in size, and choose between the two constructors with equal probability. atom, in turn, handles failing early if the size gets too big.

> genTreeUB :: GenM Tree
> genTreeUB = do
>   r <- getRandom
>   atom
>   if r <= (1/2 :: Double)
>     then return Leaf
>     else Branch <$> genTreeUB <*> genTreeUB
> 
> atom :: GenM ()
> atom = do
>   (_, maxSize) <- ask
>   curSize <- get
>   when (curSize >= maxSize) mzero
>   put (curSize + 1)

genTreeLB calls genTreeUB and then performs the lower bound check on the size.

> genTreeLB :: GenM Tree
> genTreeLB = do
>   put 0
>   t <- genTreeUB
>   tSize <- get
>   (minSize, _) <- ask
>   guard $ tSize >= minSize
>   return t

Finally, genTree just calls genTreeLB repeatedly until it succeeds. (This retry loop works because MaybeT sits outside Rand StdGen in the stack, so mplus threads the updated generator state through each failure and every attempt sees fresh random numbers.)

> genTree :: GenM Tree
> genTree = genTreeLB `mplus` genTree

Let’s make sure it works:

ghci> map size . fromJust <$> runGenM 100 0.1 (replicateM 30 genTree)
  [105,91,105,103,107,101,105,93,93,93,95,91,103,91,91,107,105,103,97,95,105,107,93,97,93,103,91,103,101,95]

Neat! Okay, but surely this is really, really slow, right? We spend a bunch of time just throwing away trees of the wrong size. Before reading on, would you care to guess the asymptotic time complexity to generate a tree of size n using this algorithm?

And while you think about that, here is a random binary tree of size approximately 1000.

And the answer is… it is linear! That is, it takes O(n) time to generate a tree of size n. This is astounding—it’s the best we could possibly hope for, because of course it takes at least O(n) time to generate an object of size n. If you don’t believe me, I invite you to run some experiments with this code yourself. I did, and it sure looks linear:

main = do
  [sz] <- getArgs
  Just ts <- runGenM (read sz) 0.1 $ replicateM 1000 genTree
  -- print the average size of the generated trees
  print . (/ fromIntegral (length ts)) . fromIntegral . sum . map size $ ts

archimedes :: research/species/boltzmann » time ./GenTree 50
49.682
./GenTree 50  1.37s user 0.01s system 99% cpu 1.387 total
archimedes :: research/species/boltzmann » time ./GenTree 100
99.474
./GenTree 100  3.11s user 0.02s system 99% cpu 3.152 total
archimedes :: research/species/boltzmann » time ./GenTree 200
198.494
./GenTree 200  6.82s user 0.04s system 99% cpu 6.876 total
archimedes :: research/species/boltzmann » time ./GenTree 400
398.798
./GenTree 400  13.08s user 0.08s system 99% cpu 13.208 total
archimedes :: research/species/boltzmann » time ./GenTree 800
795.798
./GenTree 800  25.99s user 0.16s system 99% cpu 26.228 total

The proof of this astounding fact uses some complex analysis which I do not understand; I wish I were joking. Of course, the constant factor can be big, depending on how small you set the “epsilon” allowing for wiggle room around the target size.1 But it is still quite feasible to generate rather large trees (with, say, 10^5 nodes).

There is much, much more to say on this topic. I just wanted to start out with a simple example before jumping into more of the technical details and generalizations, which I plan to write about in future posts. I also hope to package this and a bunch of other stuff into a library. In the meantime, you can read Duchon et al.2 if you want the details.


  1. Actually, if you set epsilon to zero, the asymptotic complexity jumps to O(n^2).

  2. Duchon, Philippe, et al. “Boltzmann samplers for the random generation of combinatorial structures.” Combinatorics, Probability and Computing 13.4-5 (2004): 577-625.

Beeminding for fun and profit

I’ve been using Beeminder (which I’ve mentioned once before) for a little over six months now. The verdict?

Beeminder has changed my life.

That sounds dramatic, but I’m not kidding. I am far more productive than I’ve ever been. I’m taking better care of myself. I’m finally taking the initiative to act on various long-held intentions (e.g. learning Hebrew). And I no longer have a constant nagging sense of guilt over all the big goals and projects that I ought to be working on more. It’s not for everyone, but I’m sure there are many others for whom it could be similarly transformative.

So, what is Beeminder? The basic idea is that it helps you keep track of progress on any quantifiable goals, and gives you short-term incentive to stay on track: if you don’t, Beeminder takes your money. But it’s not just about the fear of losing money. Shiny graphs tracking your progress coupled with helpfully concrete short-term goals (“today you need to write 1.3 pages of that paper”) make for excellent positive motivation, too. Another somewhat intangible but important reason it works is that the Beeminder developers are really awesome and responsive, and are sincerely dedicated to helping their users meet goals, not just to making money. (They recently introduced some paid premium plans which I happily signed up for, not because I need the premium features, but because I want to support continued development—in fact, I’ve otherwise paid Beeminder only $5 over the past six months!) If you want to know more, I encourage you to read Beeminder’s own overview, which does a much better job of explaining how and why it works.

Six months: quite long enough for the initial “shiny new toy” enthusiasm to wear off, and long enough, I think, to get a good sense of what works for me and what doesn’t. So I’m writing this post in the hope that my experience will be useful or inspiring to others.

So here are some of the ways I’m using it, which I have found to work well. (You can see all my Beeminder goals here.1) I hope some of these may inspire you with ways to make yourself more productive, whether you use Beeminder or not.

  • Big projects

    Consistently spending time on big, long-term projects is really hard—at least, it was hard before I started using Beeminder! Now I just make a goal for each project requiring me to spend a certain amount of time on it each week. This helps me stay on track and also gets rid of that nagging guilt—once I’ve done enough to stay on track, I can stop and do other things and not feel guilty about it! I’ve used this to spend a certain amount of time preparing for courses I’m going to teach; I use it for getting research done, and for working on diagrams. A year or so ago I posted on Google+ complaining that I needed a scheduling algorithm for my life, and in many ways Beeminder has filled that role. It’s also a great way to get some cold, hard data on how much time I actually spend on various projects (you can start with a “flat” goal and just record data for a while if you don’t know what a reasonable rate for the goal is).

  • Reading and writing projects

    There’s no way I ever would have gotten my thesis proposal written without Beeminder. The important thing to note is that the goal was based on page count rather than time spent. (The “Odometer” goal type is useful for this sort of thing.) This forced me to actually get real writing done, rather than frittering time away adjusting the kerning or whatever. Interestingly, it also forced me to get creative about padding the page count, by pasting in text I’d already written before (from blog posts, grant proposals, etc.). In the end, reusing text I’d written before and then editing it was a much better use of my time than writing everything from scratch, but for whatever reason I’m not sure my perfectionist self would have done it without the pressure of “you have to write two pages in the next three hours OR ELSE”.

    I’ve also made goals for reviews I’ve been asked to do, again using an “Odometer” goal to track page numbers.

    I also have a goal to write blog posts with a certain frequency on either of my two blogs. (In fact, I’m finally finishing this blog post because otherwise in about an hour I’m going to owe Beeminder $5!)

  • Learning

    I have long intended to learn to read Hebrew, but it never seemed like the “right time”. I finally admitted that there will never be a “right time”, and just started.2 Starting is one thing; continuing to study regularly after the initial excitement has worn off is only possible because of my Beeminder goal, which also serves as a check on discouragement. It will be a long time before I am any good at reading Hebrew; but in the meantime I am motivated by logging time on my goal.

    I use Anki for memorizing all sorts of things—ancient Greek and Hebrew vocabulary, recipes, emacs commands, and names and faces of students. To help me stay on track reviewing flash cards, I have a Beeminder goal to review 100 Anki cards a day. Recently, the number of cards coming due each day started dropping significantly below 100, so instead of lessening the Beeminder goal I decided to start learning some geography (flags, countries, capitals, etc.), which has been a lot of fun.

  • Productivity

    I have a number of goals directly intended to increase my productivity.

    • I use FogBugz to keep track of all my tasks and todos. I use three different Beeminder goals in relation to FogBugz:

      • As described in a previous blog post, I have one goal to close a certain number of cases per day (currently 4, which is historically about average for me). This goal is automatically updated every time I close a case in FogBugz.
      • When I get an email requiring me to act or respond in some way, I very often just forward it to FogBugz to deal with later. So I have a goal to spend a certain amount of time dealing with cases in my FogBugz inbox; otherwise it’s too easy to just let these rot.
      • It’s way too easy to ignore todo items which have no real deadline and are somehow distasteful, intimidating, or both. To help overcome this inertia, I’ve come up with something that works fairly well. I have a Beeminder goal to spend a certain amount of time doing “FogBugz review”. It works like this: I have a certain query defined in FogBugz which shows me the five least recently edited open tickets. When working on review, I must pick one of these five cases and make some sort of progress on it (it’s perfectly fine if I don’t complete it). After making some progress I add a note to the ticket explaining what I did. This both helps me pick up where I left off next time I come to work on the ticket, and makes the ticket automatically drop out of the review query, since it has now been edited. I then look at the top five tickets again (including some new ticket that has now moved into the top five), choose one, and repeat.
    • I have found that I am much more productive if at the start of each day I intentionally plan out the rest of the day, recalling the things I have scheduled and deciding how to spend the remaining unscheduled time—consulting FogBugz and Beeminder to decide what my priorities should be for the day and how much time to spend on each. To force myself to do this consistently, I of course made a Beeminder goal to do this planning a certain number of days each week. The catch is that I have to do the planning before checking email, Facebook, or IRC, or else the planning only counts for half a day.3

    • Another thing which I’ve found helps my productivity is to turn off my computer before going to bed. The choices I make when I first get up tend to have a ripple effect on the rest of the day. If my computer is on when I first get up, it’s very tempting to immediately start aimlessly checking email; if it’s off, it’s that much easier to make deliberate choices about how to begin my day. The important point here is that I’ve made a positive goal (to turn off my computer) instead of a negative goal (to spend less than X amount of time checking email, etc., in the morning). I’ve found that negative goals don’t work nearly as well: they are far less motivating, and psychologically speaking it’s too easy to lie to Beeminder by neglecting to report data—by contrast, actively lying by submitting false data is much more difficult.

  • Personal goals

    Last but not least, I now take better care of myself and my stuff in some simple but important ways. I have flossed more in the last six months than the rest of my life put together. I trim my beard and my toenails more regularly, take allergy medication almost every day, take care of my bike by inflating the tires and greasing the chain, and clean around the house (which my wife loves).

So there you have it. If you end up trying Beeminder, or come up with some cool goal-based life hacks, or just have questions, I’d love to hear from you!


  1. You’ll notice that some of my goals are private/hidden. Mostly these are personal or relate to religious commitments, and for various reasons I’d rather not broadcast them to the whole Internet—but at the same time, I have no secrets and would be glad to discuss them with anyone who’s interested.

  2. Well, at the time it was a way of procrastinating on my thesis proposal.

  3. This is completely self-enforced, of course, but it’s ten times harder to actively choose to lie to Beeminder (which I have never done) than it was to “just check a few emails first” before I had any sort of external accountability.

Introducing diagrams-haddock

I am quite pleased to announce the release of diagrams-haddock, a tool enabling you to easily include programmatically generated diagrams in your Haddock documentation. Why might you want to do this? “A picture is worth a thousand words”—in many instances a diagram or illustration can dramatically increase the comprehension of users reading your library’s documentation. The diagrams project itself will be using this for documentation, beginning with the diagrams-contrib package (for example, check out the documentation for Diagrams.TwoD.Path.IteratedSubset). But inline images can benefit the documentation of just about any library.

Before jumping into a more detailed example, here are the main selling points of diagrams-haddock:

  1. You get to create arbitrary images to enhance your documentation, using the powerful diagrams framework.

  2. The code for your images goes right into your source files themselves, alongside the documentation—there is no need to maintain a bunch of auxiliary files, or (heaven forbid) multiple versions of your source files.

  3. Images are regenerated when, and only when, their definition changes—so you can include many diagrams in your documentation without having to recompile all of them every time you make a change to just one.

  4. You have to do a little bit of work to integrate the generated images into your Cabal package, but it’s relatively simple and you only have to do it once per package. No one else needs to have diagrams-haddock installed in order to build your documentation with the images (this includes Hackage).

So, how does it work? (For full details, consult the diagrams-haddock documentation.) Suppose we have some Haddock documentation that looks like this:

-- | The foozle function takes a widget and turns it into an
--   infinite list of widgets which alternate between red and
--   yellow.
--
foozle :: Widget -> [Widget]
foozle = ...

It would be really nice to illustrate this with a picture, don’t you think? First, we insert an image placeholder like so:

-- | The foozle function takes a widget and turns it into an
--   infinite list of widgets which alternate between red and
--   yellow.
--
--   <<dummy#diagram=foozleDia&width=300>>
--
foozle :: Widget -> [Widget]
foozle = ...

It doesn’t matter what we put in place of dummy; diagrams-haddock is going to replace it shortly anyway. The stuff following the # is a list of parameters for diagrams-haddock: we tell it to insert here an image built from the diagram called foozleDia, rendered at a width of 300 pixels.

Now we just have to give a definition for foozleDia, which we do simply by creating a code block (set off with bird tracks) in a comment:

-- | The foozle function takes a widget and turns it into an
--   infinite list of widgets which alternate between red and
--   yellow.
--
--   <<dummy#diagram=foozleDia&width=300>>
--
foozle :: Widget -> [Widget]
foozle = ...

-- > widget =
-- >   (  stroke (circle 1.25 <> circle 0.75 # reversePath)
-- >   <> mconcat (iterateN 10 (rotateBy (1/10)) (square 0.5 # translateX 1.3))
-- >   )
-- >   # lw 0
-- >
-- > foozleDia =
-- >   hcat' with {sep = 2}
-- >   [ widget # fc black
-- >   , hrule 4 # alignR <> triangle 1 # rotateBy (-1/4) # fc black
-- >   , hcat' with {sep = 0.5} (zipWith fc (cycle [red, yellow]) (replicate 6 widget))
-- >   ]

Note that this definition for foozleDia isn’t in a Haddock comment, so it won’t be typeset in the Haddock output. (However, if you want users reading your documentation to see the code used to generate the pictures—as, e.g., we often do in the documentation for diagrams itself—it’s as simple as sticking the definitions in a Haddock comment.) It also doesn’t have to go right after the definition of foozle—for example, we could stick it all the way at the end of the source file if we didn’t want it cluttering up the code.

Now we simply run diagrams-haddock on our file (or on the whole Cabal project), and it will generate an appropriate SVG image and replace <<dummy#...>> with something like <<diagrams/foozleDia.svg#...>>. The Haddock documentation then displays the rendered diagram after the documentation for foozle. Hooray! Note that diagrams-haddock only replaces the stuff before the # (the clever bit is that browsers will ignore everything after the #). Running diagrams-haddock again at this point will do nothing. If we change the definition of foozleDia and then rerun diagrams-haddock, it will regenerate the image.

Okay, but how will others (or, for that matter, Hackage) be able to see the diagram for foozle when they build the documentation, without needing diagrams-haddock themselves? It’s actually fairly straightforward—we simply include the generated images in the source tarball, and tell cabal to copy the images in alongside the documentation when it is built, using either a custom Setup.hs, or (once it is released and sufficiently ubiquitous) the new extra-html-files: field in the .cabal file. The diagrams-haddock documentation has full details with step-by-step instructions.
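
For the custom Setup.hs route, the basic shape is something like the following sketch. The destination path and package name ("mypackage") are placeholders, and the hook details vary with your Cabal version; the diagrams-haddock documentation has the authoritative recipe.

import Control.Applicative ((<$>))
import Control.Monad (forM_)
import Distribution.Simple
import System.Directory
import System.FilePath

main :: IO ()
main = defaultMainWithHooks simpleUserHooks
  { postHaddock = \_ _ _ _ -> copyImages }

-- Copy the pre-generated SVG images in next to the built documentation.
-- The "dist/doc/html/mypackage" path is an assumption about where your
-- Cabal version puts the Haddock output.
copyImages :: IO ()
copyImages = do
  let dest = "dist" </> "doc" </> "html" </> "mypackage" </> "diagrams"
  createDirectoryIfMissing True dest
  svgs <- filter ((== ".svg") . takeExtension) <$> getDirectoryContents "diagrams"
  forM_ svgs $ \f -> copyFile ("diagrams" </> f) (dest </> f)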

I hope this silly example has piqued your interest; again, for full details please consult the diagrams-haddock documentation. Now go forth and illustrate your documentation!
