Adventures in enumerating balanced brackets

Since I’ve been coaching my school’s ACM ICPC programming team, I’ve been spending a bit of time solving programming contest problems, partly to stay sharp and be able to coach them better, but also just for fun.

I recently solved a problem (using Haskell) that ended up being tougher than I thought, but I learned a lot along the way. Rather than just presenting a solution, I’d like to take you through my thought process, crazy detours and all.

Of course, I should preface this with a big spoiler alert: if you want to try solving the problem yourself, you should stop reading now!

> {-# LANGUAGE GADTs #-}
> {-# LANGUAGE DeriveFunctor #-}
> module Brackets where
> import Data.List (sort, genericLength)
> import Data.MemoTrie (memo, memo2)
> import Prelude hiding ((++))

The problem

There’s a lot of extra verbiage in the official problem description, but what it boils down to is this:

Find the mth element of the lexicographically ordered sequence of all balanced bracketings of length n.

There is a longer description at the problem page, but hopefully a few examples will suffice. A balanced bracketing is a string consisting solely of parentheses, in which opening and closing parens can be matched up in a one-to-one, properly nested way. For example, there are five balanced bracketings of length 6:

((())), (()()), (())(), ()(()), ()()()

By lexicographically ordered we just mean that the bracketings should be in “dictionary order” where ( comes before ), that is, bracketing x comes before bracketing y if and only if in the first position where they differ, x has ( and y has ). As you can verify, the list of length-6 bracketings above is, in fact, lexicographically ordered.

A first try

Oh, this is easy, I thought, especially if we consider the well-known isomorphism between balanced bracketings and binary trees. In particular, the empty string corresponds to a leaf, and (L)R (where L and R are themselves balanced bracketings) corresponds to a node with subtrees L and R. So the five balanced bracketings of length 6 correspond to the five binary trees with three nodes:

We can easily generate all the binary trees of a given size with a simple recursive algorithm. If n = 0, generate a Leaf; otherwise, decide how many nodes to put on the left and how many on the right, and for each such distribution recursively generate all possible trees on the left and right.

> data Tree where
>   Leaf :: Tree
>   Node :: Tree -> Tree -> Tree
>   deriving (Show, Eq, Ord)
> allTrees :: Int -> [Tree]
> allTrees 0 = [Leaf]
> allTrees n =
>   [ Node l r
>   | k <- [0 .. n-1]
>   , l <- allTrees ((n-1) - k)
>   , r <- allTrees k
>   ]

We generate the trees in “left-biased” order, where we first choose to put all n-1 nodes on the left, then n-2 on the left and 1 on the right, and so on. Since a subtree on the left will result in another opening paren, but a subtree on the right will result in a closing paren followed by an open paren, it makes intuitive sense that this corresponds to generating bracketings in sorted order. You can see that the size-3 trees above, generated in left-biased order, indeed have their bracketings sorted.

Writing allTrees is easy enough, but it’s definitely not going to cut it: the problem states that we could have up to n = 1000. The number of trees with 1000 nodes has 598 digits (!!), so we can’t possibly generate the entire list and then index into it. Instead we need a function that can more efficiently generate the tree with a given index, without having to generate all the other trees before it.

So I immediately launched into writing such a function, but it’s tricky to get right. It involves computing Catalan numbers, and cumulative sums of products of Catalan numbers, and divMod, and… I never did get that function working properly.

The first epiphany

But I never should have written that function in the first place! What I should have done first was to do some simple tests just to confirm my intuition that left-biased tree order corresponds to sorted bracketing order. Because if I had, I would have found this:

> brackets :: Tree -> String
> brackets Leaf       = ""
> brackets (Node l r) = mconcat ["(", brackets l, ")", brackets r]
> sorted :: Ord a => [a] -> Bool
> sorted xs = xs == sort xs
ghci> sorted (map brackets (allTrees 3))
True

ghci> sorted (map brackets (allTrees 4))
False

As you can see, my intuition actually led me astray! n = 3 is a small enough case that left-biased order just happens to be the same as sorted bracketing order, but for n = 4 this breaks down. Let’s see what goes wrong:

In the top row are the size-4 trees in “left-biased” order, i.e. the order generated by allTrees. You can see it is nice and symmetric: reflecting the list across a vertical line leaves it unchanged. On the bottom row are the same trees, but sorted lexicographically by their bracketings. You can see that the lists are almost the same except the red tree is in a different place. The issue is the length of the left spine: the red tree has a left spine of three nodes, which means its bracketing will begin with (((, so it should come before any trees with a left spine of length 2, even if they have all their nodes in the left subtree (whereas the red tree has one of its nodes in the right subtree).

My next idea was to try to somehow enumerate trees in order by the length of their left spine. But since I hadn’t even gotten indexing into the original left-biased order to work, it seemed hopeless to get this to work by implementing it directly. I needed some bigger guns.

Building enumerations

At this point I had the good idea to introduce some abstraction. I defined a type of enumerations (a la FEAT or data/enumerate):

> data Enumeration a = Enumeration
>   { fromNat :: Integer -> a
>   , size    :: Integer
>   }
>   deriving Functor
> enumerate :: Enumeration a -> [a]
> enumerate (Enumeration f n) = map f [0..n-1]

An Enumeration consists of a size along with a function Integer -> a, which we think of as being defined on [0 .. size-1]. That is, an Enumeration is isomorphic to a finite list of a given length, where instead of explicitly storing the elements, we have a function which can compute the element at a given index on demand. If the enumeration has some nice combinatorial structure, then we expect that this on-demand indexing can be done much more efficiently than simply listing all the elements. The enumerate function simply turns an Enumeration into the corresponding finite list, by mapping the indexing function over all possible indices.

Note that Enumeration has a natural Functor instance, which GHC can automatically derive for us. Namely, if e is an Enumeration, then fmap f e is the Enumeration which first computes the element of e for a given index, and then applies f to it before returning.
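
Concretely, the derived instance is equivalent to the following hand-written one (shown here only for illustration, since we let GHC derive it for us):

instance Functor Enumeration where
  fmap g (Enumeration f n) = Enumeration (g . f) n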

Now, let’s define some combinators for building Enumerations. We expect them to have all the nice algebraic flavor of finite lists, aka free monoids.

First, we can create empty or singleton enumerations, or convert any finite list into an enumeration:

> empty :: Enumeration a
> empty = Enumeration (const undefined) 0
> singleton :: a -> Enumeration a
> singleton a = Enumeration (\_ -> a) 1
> list :: [a] -> Enumeration a
> list as = Enumeration (\n -> as !! fromIntegral n) (genericLength as)
ghci> enumerate (empty :: Enumeration Int)
[]

ghci> enumerate (singleton 3)
[3]

ghci> enumerate (list [4,6,7])
[4,6,7]

We can form the concatenation of two enumerations. The indexing function compares the given index against the size of the first enumeration, and then indexes into the first or second enumeration appropriately. For convenience we can also define union, which is just an iterated version of (++).

> (++) :: Enumeration a -> Enumeration a -> Enumeration a
> e1 ++ e2 = Enumeration
>   (\n -> if n < size e1 then fromNat e1 n else fromNat e2 (n - size e1))
>   (size e1 + size e2)
> union :: [Enumeration a] -> Enumeration a
> union = foldr (++) empty
ghci> enumerate (list [3, 5, 6] ++ empty ++ singleton 8)
[3,5,6,8]

Finally, we can form a Cartesian product: e1 >< e2 is the enumeration of all possible pairs of elements from e1 and e2, ordered so that all the pairs formed from the first element of e1 come first, followed by all the pairs with the second element of e1, and so on. The indexing function divides the given index by the size of e2, and uses the quotient to index into e1, and the remainder to index into e2.

> (><) :: Enumeration a -> Enumeration b -> Enumeration (a,b)
> e1 >< e2 = Enumeration
>   (\n -> let (l,r) = n `divMod` size e2 in (fromNat e1 l, fromNat e2 r))
>   (size e1 * size e2)
ghci> enumerate (list [1,2,3] >< list [10,20])
[(1,10),(1,20),(2,10),(2,20),(3,10),(3,20)]

ghci> let big = list [0..999] >< list [0..999] >< list [0..999] >< list [0..999]
ghci> fromNat big 2973428654
(((2,973),428),654)

Notice in particular how the fourfold product of list [0..999] has 1000^4 = 10^12 elements, but indexing into it with fromNat is basically instantaneous.

Since Enumerations are isomorphic to finite lists, we expect them to have Applicative and Monad instances, too. First, the Applicative instance is fairly straightforward:

> instance Applicative Enumeration where
>   pure    = singleton
>   f <*> x = uncurry ($) <$> (f >< x)
ghci> enumerate $ (*) <$> list [1,2,3] <*> list [10, 100]
[10,100,20,200,30,300]

pure creates a singleton enumeration, and applying an enumeration of functions to an enumeration of arguments works by taking a Cartesian product and then applying each pair.

The Monad instance works by substitution: in e >>= k, the continuation k is applied to each element of the enumeration e, and the resulting enumerations are unioned together in order.

> instance Monad Enumeration where
>   return  = pure
>   e >>= f = union (map f (enumerate e))
ghci> enumerate $ list [1,2,3] >>= \i -> list (replicate i i)
[1,2,2,3,3,3]

Having to actually enumerate the elements of e is a bit unsatisfying, but there is really no way around it: we otherwise have no way to know how big the resulting enumerations are going to be.

Now, that function I tried (and failed) to write before that generates the tree at a particular index in left-biased order? Using these enumeration combinators, it’s a piece of cake. Basically, since we built up combinators that mirror those available for lists, it’s just as easy to write this indexing version as it is to write the original allTrees function (which I’ve copied below for comparison):

allTrees :: Int -> [Tree]
allTrees 0 = [Leaf]
allTrees n =
  [ Node l r
  | k <- [0 .. n-1]
  , l <- allTrees ((n-1) - k)
  , r <- allTrees k
  ]

> enumTrees :: Int -> Enumeration Tree
> enumTrees 0 = singleton Leaf
> enumTrees n = union
>   [ Node <$> enumTrees (n-k-1) <*> enumTrees k
>   | k <- [0 .. n-1]
>   ]

(enumTrees and allTrees look a bit different, but actually allTrees can be rewritten in a very similar style:

allTrees :: Int -> [Tree]
allTrees 0 = [Leaf]
allTrees n = concat
  [ Node <$> allTrees ((n-1) - k) <*> allTrees k
  | k <- [0 .. n-1]
  ]

Doing as much as possible using the Applicative interface gives us added “parallelism”, which in this case means the ability to index directly into a product with divMod, rather than scanning through the results of calling a function on enumerate until we have accumulated the right size. See the paper on the GHC ApplicativeDo extension.)

Let’s try it out:

ghci> enumerate (enumTrees 3)
  [Node (Node (Node Leaf Leaf) Leaf) Leaf,Node (Node Leaf (Node Leaf Leaf)) Leaf,Node (Node Leaf Leaf) (Node Leaf Leaf),Node Leaf (Node (Node Leaf Leaf) Leaf),Node Leaf (Node Leaf (Node Leaf Leaf))]

ghci> enumerate (enumTrees 3) == allTrees 3
True

ghci> enumerate (enumTrees 7) == allTrees 7
True

ghci> brackets $ fromNat (enumTrees 7) 43
"((((()())))())"

It seems to work! Though actually, if we try larger values of n, enumTrees just seems to hang. The problem is that it ends up making many redundant recursive calls. Well… nothing a bit of memoization can’t fix! (Here I’m using Conal Elliott’s nice MemoTrie package.)

> enumTreesMemo :: Int -> Enumeration Tree
> enumTreesMemo = memo enumTreesMemo'
>   where
>     enumTreesMemo' 0 = singleton Leaf
>     enumTreesMemo' n = union
>       [ Node <$> enumTreesMemo (n-k-1) <*> enumTreesMemo k
>       | k <- [0 .. n-1]
>       ]
ghci> size (enumTreesMemo 10)
16796

ghci> size (enumTreesMemo 100)
896519947090131496687170070074100632420837521538745909320

ghci> size (enumTreesMemo 1000)

ghci> brackets $ fromNat (enumTreesMemo 1000) 8234587623904872309875907638475639485792863458726398487590287348957628934765

That’s better!

A second try

At this point, I thought that I needed to enumerate trees in order by the length of their left spine. Given a tree with a left spine of length s, we enumerate all the ways to partition the remaining n-s elements among the right children of the s spine nodes, preferring to first put elements as far to the left as possible. As you’ll see, this turns out to be wrong, but it’s fun to see how easy it is to write this using the enumeration framework.

First, we need an enumeration of the partitions of a given n into exactly k parts, in decreasing lexicographic order.

> kPartitions :: Int -> Int -> Enumeration [Int]

There is exactly one way to partition 0 into zero parts.

> kPartitions 0 0 = singleton []

We can’t partition anything other than 0 into zero parts.

> kPartitions _ 0 = empty

Otherwise, pick a number i from n down to 0 to go in the first spot, and then recursively enumerate partitions of n-i into exactly k-1 parts.

> kPartitions n k = do
>   i <- list [n, n-1 .. 0]
>   (i:) <$> kPartitions (n-i) (k-1)

Let’s try it:

ghci> let p43 = enumerate $ kPartitions 4 3
ghci> p43
[[4,0,0],[3,1,0],[3,0,1],[2,2,0],[2,1,1],[2,0,2],[1,3,0],[1,2,1],[1,1,2],[1,0,3],[0,4,0],[0,3,1],[0,2,2],[0,1,3],[0,0,4]]

ghci> all ((==3) . length) p43
True

ghci> all ((==4) . sum) p43
True

ghci> sorted (reverse p43)
True

Now we can use kPartitions to build our enumeration of trees:

> spinyTrees :: Int -> Enumeration Tree
> spinyTrees = memo spinyTrees'
>   where
>     spinyTrees' 0 = singleton Leaf
>     spinyTrees' n = do
>       -- Pick the length of the left spine
>       spineLen  <- list [n, n-1 .. 1]
>       -- Partition the remaining elements among the spine nodes
>       bushSizes <- kPartitions (n - spineLen) spineLen
>       bushes <- traverse spinyTrees bushSizes
>       return $ buildSpine (reverse bushes)
>     buildSpine :: [Tree] -> Tree
>     buildSpine []     = Leaf
>     buildSpine (b:bs) = Node (buildSpine bs) b

This appears to give us something reasonable:

ghci> size (spinyTrees 7) == size (enumTreesMemo 7)
True

But it’s pretty slow—which is to be expected with all those monadic operations required. And there’s more:

ghci> sorted . map brackets . enumerate $ spinyTrees 3
True

ghci> sorted . map brackets . enumerate $ spinyTrees 4
True

ghci> sorted . map brackets . enumerate $ spinyTrees 5
False

Foiled again! All we did was stave off failure a bit, until n=5. I won’t draw all the trees of size 5 for you, but the failure mode is pretty similar: picking subtrees for the spine based just on how many elements they have doesn’t work, because there are cases where we want to first shift some elements to a later subtree, keeping the left spine of a subtree, before moving the elements back and having a shorter left spine.

The solution: just forget about trees, already

It finally occurred to me that there was nothing in the problem statement that said anything about trees. That was just something my overexcited combinatorial brain imposed on it: obviously, since there is a bijection between balanced bracketings and binary trees, we should think about binary trees, right? …well, there is also a bijection between balanced bracketings and permutations avoiding (231), and lattice paths that stay above the main diagonal, and hundreds of other things, so… not necessarily.

In this case, I think trees just end up making things harder. Let’s think instead about enumerating balanced bracket sequences directly. To do it recursively, we need to know how to enumerate possible endings to the start of any balanced bracket sequence. That is, we need to enumerate sequences containing n opening brackets and c extra closing brackets (so n+c closing brackets in total), which can be appended to a sequence of brackets with c more opening brackets than closing brackets.

Given this idea, the code is fairly straightforward:

> enumBrackets :: Int -> Enumeration String
> enumBrackets n = enumBracketsTail n 0
> enumBracketsTail :: Int -> Int -> Enumeration String
> enumBracketsTail = memo2 enumBracketsTail'
>   where

To enumerate a sequence with no opening brackets, just generate c closing brackets.

>     enumBracketsTail' 0 c = singleton (replicate c ')')

To enumerate balanced sequences with n opening brackets and an exactly matching number of closing brackets, start by generating an opening bracket and then continue by generating sequences with n-1 opening brackets and one extra closing bracket to match the opening bracket we started with.

>     enumBracketsTail' n 0 = ('(':) <$> enumBracketsTail (n-1) 1

In general, a sequence with n opening and c extra closing brackets is either an opening bracket followed by an (n-1, c+1)-sequence, or a closing bracket followed by an (n, c-1)-sequence.

>     enumBracketsTail' n c =
>         (('(':) <$> enumBracketsTail (n-1) (c+1))
>         ++
>         ((')':) <$> enumBracketsTail n (c-1))

This is quite fast, and as a quick check, it does indeed seem to give us the same size enumerations as the other tree enumerations:

ghci> fromNat (enumBrackets 40) 16221270422764920820

ghci> size (enumBrackets 100) == size (enumTreesMemo 100)
True

But, are they sorted? It would seem so!

ghci> all sorted (map (enumerate . enumBrackets) [1..10])
True

At this point, you might notice that this can be easily de-abstracted into a fairly simple dynamic programming solution, using a 2D array to keep track of the size of the enumeration for each (n,c) pair. I’ll leave the details to interested readers.
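
For the curious, here is one way the de-abstracted version might look. This is a sketch of my own rather than part of the literate code above (the name bracketsAt and the choice of Data.Array are mine): we tabulate the size of the enumeration for each (n,c) pair, then walk the table, spending the index on either an opening or a closing bracket, exactly mirroring enumBracketsTail.

import Data.Array

-- bracketsAt n i computes the ith (0-based) balanced bracketing with
-- n pairs of brackets, in lexicographic order.
bracketsAt :: Int -> Integer -> String
bracketsAt n0 = go n0 0
  where
    -- counts ! (n,c) = how many sequences have n opening brackets and
    -- c extra closing brackets (cf. enumBracketsTail)
    counts :: Array (Int, Int) Integer
    counts = array ((0,0),(n0,n0)) [ ((n,c), f n c) | n <- [0..n0], c <- [0..n0] ]
    f 0 _ = 1
    f n 0 = counts ! (n-1, 1)
    f n c = counts ! (n-1, c+1) + counts ! (n, c-1)
    -- Walk the table: emit '(' if the index falls within the block of
    -- sequences beginning with an opening bracket, otherwise emit ')'.
    go 0 c _ = replicate c ')'
    go n 0 i = '(' : go (n-1) 1 i
    go n c i
      | i < opens = '(' : go (n-1) (c+1) i
      | otherwise = ')' : go n (c-1) (i - opens)
      where opens = counts ! (n-1, c+1)

As a quick sanity check, bracketsAt 3 2 yields "(())()", matching the third bracketing of length 6 listed at the beginning of this post.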

Posted in combinatorics, haskell

ICFP roundup

ICFP 2016 in Nara, Japan was a blast. Here are a few of my recollections.

The Place

Although I was a coauthor on an ICFP paper in 2011, when it was in Tokyo, I did not go since my son was born the same week. So this was my first time in Japan, or anywhere in Asia, for that matter. (Of course, this time I missed my son’s fifth birthday…)

I’ve been to Europe multiple times, and although it is definitely foreign, the culture is similar enough that I feel like I basically know how to behave. I did not feel that way in Japan. I’m pretty sure I was constantly being offensive without realizing it, but most of the time people were polite and accommodating.

…EXCEPT for that one time I was sitting in a chair chatting with folks during a break between sessions, with my feet up on a (low, plain) table, and an old Japanese guy WHACKED his walking stick on the table and shouted angrily at me in Japanese. That sure got my adrenaline going. Apparently putting your feet on the table is a big no-no, lesson learned.

The food was amazing even though I didn’t know what half of it was. I was grateful that I (a) am not vegetarian, (b) know how to use chopsticks decently well, and (c) am an adventurous eater. If any one of those were otherwise, things might have been more difficult!

On my last day in Japan I had the whole morning before I needed to head to the airport, so Ryan Yates and I wandered around Nara and saw a bunch of temples, climbed the hill, and such. It’s a stunningly beautiful place with a rich history.

The People

As usual, it’s all about the people. I enjoyed meeting some new people, including (but not limited to):

  • Pablo Buiras and Marco Vassena were my hotel breakfast buddies, it was fun getting to know them a bit.
  • I finally met Dominic Orchard, though I feel like I’ve known his name and known about some of his work for a while.
  • I don’t think I had met Max New before but we had a nice chat about the Scheme enumerations library he helped develop and combinatorial species. I hope to be able to follow up that line of inquiry.
  • As promised, I met everyone who commented on my blog post, including Jürgen Peters (unfortunately we did not get a chance to play go), Andrey Mokhov (who nerd-sniped me with a cool semiring-ish thing with some extra structure — perhaps that will be another blog post), and Jay McCarthy (whom I had actually met before, but we had some nice chats, including one in the airport while waiting for our flight to LAX).
  • I don’t think I had met José Manuel Calderón Trilla before; we had a great conversation over a meal together (along with Ryan Yates) in the Osaka airport while waiting for our flights.
  • I met Diogenes Nunez, who went to my alma mater Williams College. When I taught at Williams a couple years ago I’m pretty sure I heard Diogenes mentioned by the other faculty, so it was fun to get to meet him.
  • Last but certainly not least, I met my coauthor, Piyush Kurur. We collaborated on a paper through the magic of the Internet (Github in particular), and I actually met him in person for the first time just hours before he presented our paper!

My student Ollie Kwizera came for PLMW—it was fun having him there. I only crossed paths with him three or four times, but I think that was all for the best, since he made his own friends and had his own experiences.

Other people who I enjoyed seeing and remember having interesting conversations with include (but I am probably forgetting someone!) Michael Adams, Daniel Bergey, Jan Bracker, Joachim Breitner, David Christiansen, David Darais, Stephen Dolan, Richard Eisenberg, Kenny Foner, Marco Gaboardi, Jeremy Gibbons, John Hughes, David Janin, Neel Krishnaswami, Dan Licata, Andres Löh, Simon Marlow, Tom Murphy, Peter-Michael Osera, Jennifer Paykin, Simon Peyton Jones, Ryan Scott, Mary Sheeran, Mike Sperber, Luite Stegeman, Wouter Swierstra, David Terei, Ryan Trinkle, Tarmo Uustalu, Stephanie Weirich, Nick Wu, Edward Yang, and Ryan Yates. My apologies if I forgot you, just remind me and I’ll add you to the list! I’m amazed and grateful I get to know all these cool people.

The Content

Here are just a few of my favorite talks:

  • I’m a sucker for anything involving geometry and/or random testing and/or pretty pictures, and Ilya Sergey’s talk Growing and Shrinking Polygons for Random Testing of Computational Geometry Algorithms had them all. In my experience, doing effective random testing in any domain beyond basic functions usually requires some interesting domain-specific insights, and Ilya had some cool insights about ways to generate and shrink polygons in ways that were much more likely to generate small counterexamples for computational geometry algorithms.

  • Idris gets more impressive by the day, and I always enjoy David Christiansen’s talks.

  • Sandra Dylus gave a fun talk, All Sorts of Permutations, with the cute observation that a sorting algorithm equipped with a nondeterministic comparison operator generates permutations (though it goes deeper than that). During the question period someone asked whether there is a way to generate all partitions, and someone sitting next to me suggested using the group function—and indeed, I think this works. I wonder what other sorts of combinatorial objects can be enumerated by this method. In particular I wonder if quicksort with nondeterministic comparisons can be adapted to generate not just all permutations, but all binary trees.

  • I greatly enjoyed TyDe, especially Jeremy Gibbons’ talk on APLicative Programming with Naperian Functors (I don’t think the video is online yet, if there is one). I’ll be serving as co-chair of the TyDe program committee next year, so start thinking about what you would like to submit!

  • There were also some fun talks at FARM, for example, Jay McCarthy’s talk on Bithoven. But I don’t think the FARM videos are uploaded yet. Speaking of FARM, the performance evening was incredible. It will be hard to live up to next year.

Posted in haskell

The generic-random library, part 1: simple generic Arbitrary instances

In a previous post I pointed out that we know all the theory to make nice, principled, practical random generators for recursive algebraic data types; someone just needed to step up and do the work. Well, Li-yao Xia took up the challenge and produced a brilliant package, generic-random, available on Hackage right now for you to use!

However, although the package does include some Haddock documentation, it is probably difficult for someone with no experience or background in this area to navigate. So I thought it would be worth writing a few blog posts by way of a tutorial and introduction to the package.

> {-# LANGUAGE GADTSyntax           #-}
> {-# LANGUAGE DeriveGeneric        #-}
> {-# LANGUAGE FlexibleContexts     #-}
> {-# LANGUAGE UndecidableInstances #-}
> import GHC.Generics
> import Test.QuickCheck
> import Generic.Random.Generic

The problem

First, a quick recap of the problem we are trying to solve: the obvious, naive way of generating random instances of some recursive algebraic data type often produces really terrible distributions. For example, one might generate really tiny structures most of the time and then occasionally generate a humongous one. For more background on the problem, see this post or this one.

A first example: generating generic Arbitrary instances

As a first example, consider the following algebraic data type:

> data Foo where
>   Bar  :: Char -> Int -> String -> Foo
>   Baz  :: Bool -> Bool -> Foo
>   Quux :: [Woz] -> Foo
>   deriving (Show, Generic)
> data Woz where
>   Wiz :: Int -> Woz
>   Waz :: Bool -> Woz
>   deriving (Show, Generic)

You have probably noticed by now that this is not recursive (well, except for the embedded lists). Patience! We’ll get to recursive ADTs in due time, but it turns out the library has some nice things to offer for non-recursive ADTs as well, and it makes for an easier introduction.

Now, suppose we wanted to use QuickCheck to test some properties of a function that takes a Foo as an argument. We can easily make our own instances of Arbitrary for Foo and Woz, like so:

instance Arbitrary Foo where
  arbitrary = oneof
    [ Bar <$> arbitrary <*> arbitrary <*> arbitrary
    , Baz <$> arbitrary <*> arbitrary
    , Quux <$> arbitrary
    ]

instance Arbitrary Woz where
  arbitrary = oneof
    [ Wiz <$> arbitrary
    , Waz <$> arbitrary
    ]

This works reasonably well:

λ> sample (arbitrary :: Gen Foo)
Baz True True
Baz False True
Baz True True
Quux []
Baz False True
Bar '<' 3 "zy\\\SOHpO_"
Baz False True
Bar '\SOH' 0 "\"g\NAKm"
Bar 'h' (-9) "(t"
Quux [Wiz (-2),Waz False]
Baz False True

The only problem is that writing those instances is quite tedious. There is no thought required at all. Isn’t this exactly the sort of thing that is supposed to be automated with generic programming?

Why yes, yes it is. And the generic-random package can do exactly that. Notice that we have derived Generic for Foo and Woz. We can now use the genericArbitrary function from Generic.Random.Generic to derive completely standard Arbitrary instances, just like the ones we wrote above:

> instance Arbitrary Foo where
>   arbitrary = genericArbitrary
> instance Arbitrary Woz where
>   arbitrary = genericArbitrary
λ> sample (arbitrary :: Gen Foo)
Quux []
Bar '\159' (-2) ""
Baz True True
Baz False False
Baz True True
Baz True False
Quux [Wiz 9,Wiz 7,Waz True,Waz True,Waz False]
Quux [Wiz (-10),Waz False,Waz False,Waz True,Waz True,Wiz (-14),Wiz 13,Waz True,Wiz (-8),Wiz 12,Wiz (-13)]
Bar '\130' 10 "FN\222j?\b=\237(\NULW\231+ts\245"
Bar 'n' 14 ""
Bar '\205' 4 "\SYN"

Seems about the same, except we wrote way less code! Huzzah!

If we want certain constructors to occur more frequently, we can also control that using genericArbitraryFrequency, which takes a list of Ints (each Int specifies the weight for one constructor).
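
For example, to make Bar the most common constructor and Quux the rarest, we might write something like this (an illustration only, not part of the literate code in this post; I am assuming the weights are matched up with the constructors in declaration order):

instance Arbitrary Foo where
  arbitrary = genericArbitraryFrequency [4, 2, 1]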

A few notes:

  • Using the Generic.Random.Generic module is the quickest and simplest way to generate random instances of your data type, if it works for your use case.

  • It has some limitations, namely:

    • It only generates Arbitrary instances for QuickCheck. It can’t create more general random generators.

    • It probably won’t work very well for recursive data types.

However, these limitations are addressed by other parts of the library. Intrigued? Read on!

Recursive types, the simple way

Let’s now consider a simple recursive type:

> data Tree a where
>   Leaf   :: a                -> Tree a
>   Branch :: Tree a -> Tree a -> Tree a
>   deriving (Show, Generic)
> treeSize :: Tree a -> Int
> treeSize (Leaf _)     = 1
> treeSize (Branch l r) = 1 + treeSize l + treeSize r

We can try using genericArbitrary:

instance Arbitrary a => Arbitrary (Tree a) where
  arbitrary = genericArbitrary

The problem is that this tends to generate some tiny trees and some enormous trees, with not much in between:

λ> map treeSize <$> replicateM 50 (generate (arbitrary :: Gen (Tree Int)))

And this is not a problem specific to trees; this kind of thing is likely to happen for any recursive type.

Before we get to more interesting/complicated tools, it’s worth noting that generic-random provides a simple mechanism to limit the size of the generated structures: the genericArbitrary' function works like genericArbitrary but uses QuickCheck’s sized mechanism to cut off the recursion when it gets too big. The available size is partitioned among recursive calls, so it does not suffer from the exponential growth you might see if only the depth were limited. When the size counter reaches zero, the generator tries to terminate the recursion by picking some finite, non-recursive value(s).

The parameter to genericArbitrary' is a natural number specifying how deep the finite, recursion-terminating values can be. Z (i.e. zero) means the generator will only be willing to terminate the recursion with nullary constructors. In our case, Tree does not have any nullary constructors, so we should not use Z: if we do, the generator will be unable to terminate the recursion when the size reaches zero and we will get the same behavior as genericArbitrary. Instead, we should use S Z, which means it will be able to pick the depth-1 term Leaf x (for some arbitrary x) to terminate the recursion.

Let’s try it:

> instance (Arbitrary a, Generic a, BaseCases Z (Rep a)) => Arbitrary (Tree a) where
>   arbitrary = genericArbitrary' (S Z)
λ> sample (arbitrary :: Gen (Tree Int))
Leaf 0
Branch (Leaf 0) (Branch (Leaf 0) (Branch (Leaf 0) (Leaf 0)))
Branch (Leaf (-1)) (Leaf 1)
Leaf (-3)
Leaf 7
Branch (Leaf (-4)) (Branch (Branch (Leaf 1) (Leaf (-1))) (Leaf (-1)))
Branch (Leaf (-2)) (Branch (Leaf 1) (Branch (Leaf 0) (Branch (Leaf 0) (Leaf 0))))
Leaf 14
Branch (Branch (Leaf 2) (Leaf 2)) (Branch (Branch (Branch (Leaf 1) (Branch (Branch (Leaf 0) (Branch (Leaf 0) (Leaf 0))) (Branch (Leaf 0) (Leaf 0)))) (Branch (Branch (Branch (Leaf 0) (Leaf 0)) (Leaf 0)) (Leaf 0))) (Leaf (-3)))
Leaf 4
Leaf 9

Ah, that’s much better.

Finally, genericArbitraryFrequency' is the same as genericArbitraryFrequency but limits the recursion depth as genericArbitrary' does.
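
For our Tree type, that might look something like this (again just an illustrative sketch, not part of the literate code; I am assuming the depth bound comes first and that the weights follow declaration order, i.e. Leaf then Branch):

instance (Arbitrary a, Generic a, BaseCases Z (Rep a)) => Arbitrary (Tree a) where
  arbitrary = genericArbitraryFrequency' (S Z) [1, 2]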

If you have a recursive data type you want to use with QuickCheck, it’s worth trying this, since it is quick and simple. The main problem with this approach is that it does not generate a uniform distribution of values. (Also, it is limited in that it is specifically tied to QuickCheck.) In this example, although you can’t necessarily tell just by looking at the sample random trees, I guarantee you that some kinds of trees are much more likely to be generated than others. (Though I couldn’t necessarily tell you which kinds.) This can be bad if the specific trees that will trigger a bug are in fact unlikely to be generated.

Next time, we’ll look at how we can actually have efficient, size-limited, uniform random generators using Boltzmann samplers.

Posted in combinatorics, haskell

Meeting people at ICFP in Nara

In less than 24 hours I’m getting on a plane to Japan (well, technically, Dallas, but I’ll get to Japan eventually). As I did last year, I’m making an open offer here: leave a comment on this post, and I will make a point of finding and meeting you sometime during the week! One person took me up on the offer last year and we had a nice chat over dinner.

Posted in meta

Deep work and email habits

Lately I have been enjoying Cal Newport’s writing on work, and particularly his new book Deep Work which I am in the middle of reading (definitely recommended). His basic thesis is about the power of sustained, focused, distraction-free work on cognitively demanding tasks—what he calls deep work. It takes intentional effort to make the time and space for this kind of work, but Newport argues cogently that doing so can have enormous benefits.

Newport’s ideas have really resonated with me—I think I was already converging (albeit slowly, with little clarity) on similar ideas and practices over the last few years—and I’ve begun trying to put some of them more deliberately into practice. First, I have scheduled two large (4 hour) blocks of time for deep work each week. These blocks are sacrosanct: I won’t meet with students, schedule committee meetings, or do anything else during those times. I physically go somewhere other than my office—usually the library, occasionally my favorite coffee shop, somewhere relatively quiet and interruption-free where students and colleagues won’t find me. I first do as much as possible without turning on my laptop: course planning, reading, brainstorming, a lot of longhand writing (blog posts, papers, grant proposals, whatever—for example, I wrote this blog post itself longhand during my deep work session this morning). Sometimes if I need to write a longer, thoughtful email response, I will even print out the message beforehand and write my response longhand. Only towards the end of the session will I pull out my laptop, if I have specific projects to work on deeply that require a computer, like some sort of coding project.

Anecdotally at least, so far this feels incredibly successful—I get a lot done during these deep work sessions and always come away feeling accomplished and energized. The thing that feels especially good is that I’m not just getting a large amount of stuff done, but I’m getting important, difficult stuff done.

Another related practice I have recently adopted is that I do not read or write any email before 4pm. I have literally blocked myself from accessing email on my computers and phone between midnight and 4pm. Perhaps this sounds heretical, but that’s just the point—“because doing otherwise would be heresy” is a terrible reason for doing anything, and the fact is that not many of us really stop to consider and consciously choose the way we make use of technologies like email and social media. It’s taken some getting used to, but by now I don’t think I am ever going back. At 4pm I triage my inbox—respond to things that need a quick response, archive or delete stuff I don’t care about, and forward other things to my personal bug tracker for dealing with later. I am typically able to totally clear out my inbox before going home for the day. Over the course of the day I keep a list of emails I want to write later, and I write those at the same time that I triage my inbox, or sometimes later in the evening before going to bed. It feels way more efficient to batch most of my email processing into a focused session like this, and freeing to not be distracted by it the rest of the day. But do I ever miss it? Yes, all the time—and that’s exactly the point! Left to my natural tendencies I distract myself silly checking my email constantly.

Time will tell how much of this sticks and how my approach might change over time—I’ve scheduled a reminder for myself to write a followup post six months from now. As always, I’m happy to hear and respond to thoughts, reactions, questions, etc. in the comments.

Posted in meta

Academic integrity: context and concrete steps

Continuing from my previous post, I wanted to write a bit about why I have been thinking about academic integrity, and what, concretely, I plan to do about it.

So, why have I been thinking about this? For one thing, my department had its fair share of academic integrity violations last year. On the one hand, it is right for students to be held accountable for their actions. On the other, in the face of a spate of violations, it is also right for us to reevaluate what we are doing and why, what sort of environmental factors may be pushing students to violate academic integrity, and how we can create a better environment. Environment does not excuse behavior, but it can shape behavior in profound ways.

Another reason for thinking about academic integrity is that starting this fall, I will be a member of the committee that hears and makes a determination in formal academic integrity cases at my institution. It seems no one wants to be on this committee, and to a certain extent I can understand why. But I chose it, for several reasons. For one, I think it is important to have someone on the committee from the natural sciences (I will be the only one), who understands issues of plagiarism in the context of technical subjects. I also care a lot about ensuring that academic integrity violations are handled carefully and thoughtfully, so that students actually learn something from the experience, and more importantly, so that they come through with their sense of belonging intact. When a student (or anyone, really) does something that violates the standards of a community and is subject to consequences, it is all too easy for them to feel as though they are now a lesser member or even excluded from the community. It takes much more intentional communication to make clear to them that although they may have violated a community standard—which necessarily comes with a consequence—they are still a valued member. (Thanks to Leslie Zorwick for explaining about the power of belonging, and for relating recent research showing that communicating belonging can make a big difference for students on academic probation—which seems similar to students accused or convicted of academic integrity violations. I would cite it but I think it is not actually published yet.)

Thinking about all of this is well and good, but what will I do about it? How do I go about communicating all of this to my students, and creating the sort of environment I want? Here are the concrete things I plan to do starting this fall:

  • In all my courses where it makes sense, I plan to require students to have at least one citation (perhaps three, if I am bold) on every assignment turned in—whether they cite web pages, help from TAs or classmates, and so on. The point is to get them thinking regularly about the resources and help that they make use of on every single assignment, to foster a spirit of thankfulness. I hope it will also make it psychologically harder for students to plagiarize and lie about it. Finally, I hope it will lead to better outcomes in cases where a student makes inappropriate use of an online resource—i.e. when they “consult” a resource, perhaps even deceiving themselves into thinking that they are really doing the work, but end up essentially copying the resource. If they don’t cite the resource in such a case, I have a messy academic integrity violation case on my hands; if they do, there is no violation, even though the student didn’t engage with the assignment as I would have hoped, and I can have a simple conversation with them about my expectations and their learning (and perhaps lower their grade).

  • I will make sure to communicate to my students how easy it is for me to detect plagiarism, and how dire the consequences can be. A bit of healthy fear never hurt!

  • But beyond that, I want to make sure my students also understand that I care much more about them, as human beings, than I do about their grade or whether they turn in an assignment. I suspect that a lot of academic integrity violations happen at 2am, the night before a deadline, when the student hasn’t even started the assignment and they are riddled with anxiety and running on little sleep—but they feel as though they have to turn something in and this urge overrides whatever convictions they might have about plagiarism. To the extent their decision is based on anxiety about grades, there’s not much I can do about it. However, if their decision stems from a feeling of shame at not turning something in and disappointing their professor, I can make a difference: in that moment, I want my students to remember that their value in my eyes as human beings is not tied to their academic performance; that I will be much more impressed by their honesty than by whether they turn something in.

  • As a new member of the academic integrity committee, I plan to spend most of my time listening and learning from the continuing members of the committee; but I do hope to make sure our communication with both accused and convicted students emphasizes that they are still valued members of our community.

Other concrete suggestions, questions, experiences to relate, etc. are all most welcome!

Posted in teaching

Academic integrity and other virtues

I have been thinking a lot recently about academic integrity. What does it mean? Why do we care—what is it we fundamentally want students to do and to be? And whatever it is, how do we go about helping them become like that?

As a general principle, I think we ought to focus not just on prohibiting certain negative behaviors, but rather on encouraging positive behaviors (which are in a suitable sense “dual” to the negative behaviors we want to prohibit). Mere prohibitions leave a behavioral vacuum—“OK, don’t do this, so what should I do?”—and incentivize looking for loopholes, seeing how close one can toe the line without breaking the letter of the law. On the other hand, a positive principle actively guides behavior, and in actively striving towards the ideal of the positive principle, one (ideally) ends up far away from the prohibited negative behavior.

In the case of academic integrity, then, it is not enough to say “don’t plagiarize”. In fact, if one focuses on the prohibition itself, this is a particularly difficult one to live by, because academic life is not lived in a vacuum: ideas and accomplishments never spring forth ex nihilo, owing nothing to the ideas and accomplishments of others. In reality, one is constantly copying in big and small ways, explicitly and implicitly, consciously and unconsciously. In fact, this is how learning works! We just happen to think that some forms of copying are acceptable and some are not. Now, there are good reasons for distinguishing acceptable and unacceptable copying; the point is that this is often more difficult and ambiguous for students than we care to admit.

So what is the “dual” of plagiarism? What are the positive virtues which we should instill in our students? One can, of course, say “integrity”, but I don’t think this goes far enough: to have integrity is to adhere to a particular set of moral principles, but which ones? Integrity means being truthful, but truthful about what? It seems this is just another way of saying “don’t plagiarize”, i.e. don’t lie about the source of an idea. I have come up with two other virtues, however, which I think really get at the heart of the issue: thankfulness and generosity. (And in the spirit of academic thankfulness, I should say that Vic Norman first got me thinking along these lines with his paper How Will You Practice Virtue Without Skill?: Preparing Students to be Virtuous Computer Programmers, published in the 2014-2015 Journal of the ACMS; I was also influenced by a discussion of Vic’s paper with several others at the ACMS luncheon at SIGCSE 2016.)

Academic thankfulness has to do with recognizing one’s profound debt to the academic context: to all those thinkers and doers who have come before, and to all those who help you along your journey as a learner, whether professors, other students, or random strangers on the Internet. A thankful student is naturally driven to cite anything and everything, to give credit where credit is due, even to give credit where credit is not technically necessary but can serve as a token of thanks. A thankful student recognizes the hard work and unique contributions of others, rather than seeing others as mere means to their own ends. A thankful student never plagiarizes, since taking something from someone else and claiming it for one’s own is the height of ingratitude.

Academic generosity is about freely sharing one’s own ideas, sacrificing one’s time and energy to help others, and allowing others to share in credit and recognition. Being academically generous is harder than being thankful, because it opens you up to the potential ingratitude of others, but in some sense it is the more important of the two virtues: if no one were generous, no one would have anything to be thankful for. A generous student is naturally driven to cite anything and everything, to give credit and recognition to others, whether earned or not. A generous student recognizes others as worthy collaborators rather than as means to an end. A generous student never plagiarizes, since they know how it would feel to have their own generosity taken advantage of.

There’s more to say—about the circumstances that have led me to think about this, and about how one might actually go about instilling these virtues in students, but I think I will leave that for another post.

Posted in teaching