Signed sets and ballots, part 2

Recall from my previous post that our goal is to find a combinatorial proof of the correspondence between signed sets and signed ballots: a signed set is just a set of n elements, considered positive or negative according to the parity of n, and a signed ballot is an ordered list of nonempty sets, considered positive or negative according to the parity of the number of parts.

So, how should such a proof look? For a given number of labels n, there is a single signed set structure, which is just the set of labels itself (with a sign depending on the parity of n). On the other hand, there are lots of ballots on n labels; the key is that some are positive and some are negative, since the sign of the ballots depends on the parity of the number of parts, not the number of labels. For example, consider n = 3. There is a single (negative) signed set structure:

(I will use a dashed blue line to indicate negative things, and a solid black line for positive things.)

On the other hand, as we saw last time, there are 13 ballot structures on 3 labels, some positive and some negative:

In this example, it is easy to see that most of the positives and negatives cancel, with exactly one negative ballot left over, which corresponds with the one negative set. As another example, when n = 4, there is a single positive set, and 75 signed ballots:

This time it is not quite so easy to tell at a glance (at least not the way I have arranged the ballots in the above picture!), but in fact one can verify that there are exactly 37 negative ballots and 38 positive ones, again cancelling to match the one positive set.

What we need to show, then, is that we can pair up the ballots in such a way that positive ballots are matched with negative ballots, with exactly one ballot of the appropriate sign left over to be matched with the one signed set. This is known as a sign-reversing involution: an involution is a function which is its own inverse, so it matches things up in pairs; a sign-reversing involution sends positive things to negative things and vice versa, except for any fixed points.

In order to do this, we will start by assuming the set of labels is linearly ordered. In one sense this is no big deal, since for any finite set of labels we can always just pick an arbitrary ordering, if there isn’t an “obvious” ordering to use already. On the other hand, it means that the correspondence will be specific to the chosen linear ordering. All other things being equal, we would prefer a correspondence that depends solely on the structure of the ballots, and not on any structure inherent to the labels. I will have quite a bit more to say about this in my third and (probably) final post on the topic. But for today, let’s just see how the correspondence works, given the assumption of a linear order on the labels. I came up with this proof independently while contemplating Anders Claesson’s post, though it turns out that the exact same proof is already in a paper by Claesson and Hannah (in any case it is really just a small lemma, the sort of thing you might give as a homework problem in an undergraduate course on combinatorics).

Given some ballot, find the smallest label. For example, if the labels are \{1, \dots, n\} as in the examples so far, we will find the label 1.

  • If the smallest label is contained in some part together with at least one other label, separate it out into its own part by itself, and put it to the right of its former part. Like this:

  • On the other hand, if the smallest label is in a part by itself, merge it with the part on the left (if one exists). This is clearly the inverse of the above operation.

  • The only case we haven’t handled is when the smallest label is in a part by itself which is the leftmost part in the ballot. In that case, we leave that part alone, switch to considering the second-smallest label, and recursively carry out the involution on the remainder of the ballot.

    For example:

    In this case we find the smallest label (1) in a part by itself in the leftmost position, so we leave it where it is and recurse on the remainder of the ballot. Again, we find the smallest remaining label (2) by itself and leftmost, so we recurse again. This time, we find the smallest remaining label (3) in a part with one other label, so we separate it out and place it to the right.

This transformation on ballots is clearly reversible. The only ballots it doesn’t change are ballots with each label in its own singleton part, sorted from smallest to biggest, like this:

In this case the algorithm recurses through the whole ballot and finds each smallest remaining label in the leftmost position, ultimately doing nothing. Notice that a sorted ballot of singletons has the same sign as the signed set on the same labels, namely, (-1)^n. In any other case, we can see that the algorithm matches positive ballots to negative and vice versa, since it always changes the number of parts by 1, either splitting one part into two or merging two parts into one.

Here’s my implementation of the involution in Haskell:

import Data.List (delete, sort)

type Ballot = [[Int]]

ballotInv :: Ballot -> Ballot
ballotInv = go 1
  where
    -- go s ps: s is the smallest label remaining in ps
    -- (assuming the labels are 1 .. n).
    go _ [] = []
    -- Smallest label alone in the leftmost part: leave it, recurse.
    go s ([a]:ps)
      | s == a = [a] : go (s+1) ps
    -- Smallest label in a part with other labels: split it off to the right.
    go s (p:ps)
      | s `elem` p = delete s p : [s] : ps
    -- Smallest label alone, with a part to its left: merge it in.
    go s (p:[a]:ps)
      | s == a = sort (a:p) : ps
    -- This part doesn't involve the smallest label: skip it.
    go s (p:ps) = p : go s ps

(The call to sort is not strictly necessary, but I like to keep each part canonically sorted.)
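
For example, here is the involution in action on a few of the ballots for n = 3, including the sorted fixed point:

ghci> ballotInv [[1,2],[3]]
  [[2],[1],[3]]

ghci> ballotInv [[2],[1],[3]]
  [[1,2],[3]]

ghci> ballotInv [[1],[2],[3]]
  [[1],[2],[3]]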

Here again are the 13 signed ballots for n = 3, this time arranged so that the pair of ballots in each row correspond to each other under the involution, with the leftover, sorted ballot by itself at the top.

If you’d like to see an illustration of the correspondence for n = 4, you can find it here (I didn’t want to include it inline since it’s somewhat large).

This completes the proof that signed sets and signed ballots correspond. But did we really need that linear order on the labels? Tune in next time to find out!


Signed sets and ballots, part 1

The other day, Anders Claesson wrote a very nice blog post explaining a more combinatorial way to understand multiplicative inverses of virtual species (as opposed to the rather algebraic way I explained it in my previous post). In the middle of his post he makes an offhanded assumption which I stubbornly refused to take at face value; after thinking about it for a while and discussing it with Anders, I’m very glad I did, because there’s definitely more going on here than meets the eye and it’s given me a lot of interesting things to think about.

Recall that E denotes the species of sets, defined by E[U] = \{U\}, that is, the only E-structure on a given label set U is the set of labels itself. Recall also that the exponential generating function of a species F is given by

\displaystyle F(x) = \sum_{n \geq 0} f_n \frac{x^n}{n!}

where f_n counts the number of labelled F-structures of size n. In the case of E, we have f_n = 1 for all n, so

\displaystyle E(x) = \sum_{n \geq 0} \frac{x^n}{n!} = e^x.

(This is why E is such a good name for the species of sets—though in a fantastic coincidence, it seems originally to come from the French word for set, ensemble, rather than from the fact that E(x) = e^x. On the other hand, calling it a “coincidence” is probably too strong, since Joyal must surely have picked the notation with the generating function already in mind!)

Now, from my previous post we know that

\displaystyle E^{-1} = (1 + E_+)^{-1} = \sum_{k \geq 0} (-1)^k (E_+)^k.

Let’s first consider \sum_k (E_+)^k (without the (-1)^k). This means that we have, for some k \geq 0, a k-ary product of E_+ structures—in other words, a list of nonempty sets. This is the species of ballots, also known as ordered set partitions, and can also be written L \circ E_+. As an example, here is a ballot on the set of labels \{1, \dots, 8\}:

The order of the parts matters, so this is a different ballot:

But the order of labels within each part doesn’t matter (since each part is a set). As another example, here is the complete collection of ballot structures on the labels \{1,2,3\}:

We can see that there are 13 in total: six where the labels are each in their own separate part (corresponding to the six possible permutations of the labels); six where two labels share a part and the other label is a singleton part (corresponding to the three ways to choose the solitary label, times the two ways to order the parts); and one final ballot where all three labels are grouped in the same part. (As an exercise, can you verify that there are 75 different ballot structures on a set of four labels?)
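
If you’d rather let the computer do the verification, here is a small Haskell sketch (mine, not part of the original post) which enumerates all ballots on a given list of labels by choosing a nonempty first part and recursing on the remaining labels:

ballots :: [a] -> [[[a]]]
ballots [] = [[]]
ballots xs = [ p : rest
             | (p, ys) <- splits xs, not (null p)
             , rest    <- ballots ys ]
  where
    -- all ways to split the labels into a (subset, complement) pair
    splits []     = [([], [])]
    splits (z:zs) = [ (z:as, bs) | (as, bs) <- splits zs ]
                 ++ [ (as, z:bs) | (as, bs) <- splits zs ]

ghci> length (ballots [1,2,3])
  13

ghci> length (ballots [1,2,3,4])
  75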

Returning to E^{-1} = \sum_k (-1)^k (E_+)^k, we can see that it consists of signed ballots, where the sign of a ballot is the parity of its number of parts, that is, a ballot with k parts has sign (-1)^k. The second half of Anders’ post gives a nice combinatorial proof that E \cdot E^{-1} = 1, via a sign-reversing involution: if we consider E \cdot E^{-1}-structures, i.e. pairs of sets and signed ballots, there is a natural¹ way to pair them up, matching positive and negative structures so everything cancels (except in the case of the empty label set, which is why we get 1 instead of 0).

However, Anders is trying to do more than that. Note first that since multiplication of EGFs corresponds to multiplication of species, the EGF for E^{-1} is of course 1/e^x = e^{-x}. But this ought to also be the EGF for the virtual species E(-X), and the rest of his post hinges on identifying E(-X) and E^{-1}. As Anders and I discovered, however, this is precisely the point where it is worth being more careful.

First of all, what is E(-X)? Intuitively, an E(-X) structure consists of a set of negative atoms; since each set can be thought of as an (unordered) product of atoms, the whole set acquires a sign given by the parity of the number of atoms. In other words, intuitively it seems that E(-X) should be the species of signed sets, where an even-sized set is considered positive and an odd-sized set negative. That is,

\displaystyle E(-X) = \sum_{n \geq 0} (-1)^n E_n,

where E_n denotes the species of sets of size exactly n. As a sanity check, this makes sense as an EGF equation too, since the EGF of E_n is just x^n/n! and indeed

\displaystyle e^{-x} = \sum_{n \geq 0} \frac{(-x)^n}{n!} = \sum_{n \geq 0} (-1)^n \frac{x^n}{n!}.

But hold on a minute, what does E(-X) really mean, formally? It is the composition of the species E with the virtual species -X, and it turns out that it is not at all a priori obvious how to define composition for virtual species! We can find the definition on p. 127 of Bergeron et al. A special case (which is enough for our present purposes) is

\displaystyle \Phi(X - Y) = \Phi(X + Y) \times (E(X)E^{-1}(Y))

where X and Y are two sorts of atoms, and \times denotes Cartesian product of species. In our case,

\displaystyle E(0 - X) = E(0 + X) \times (E(0) E^{-1}(X)) = E(X) \times E^{-1}(X) = E^{-1}(X)

since E is the identity for Cartesian product (overlaying an additional E structure on a set of labels does not add any structure, since there is only one possible E-structure).

All of this is to say, E(-X) is actually defined as E^{-1}! So at first glance it may seem we actually have nothing to prove: E(-X) and E^{-1} are the same by definition, end of story. …but in fact, all we have done is shift the burden of proof elsewhere: now it is our intuitive idea of E(-X) representing signed sets that requires proof!

To sum up, we know that E(-X) = E^{-1} = \sum_k (-1)^k (E_+)^k is the species of signed ballots, with sign given by parity of the number of parts; and intuitively, we also believe that E(-X) should correspond to parity-signed sets, \sum_n (-1)^n E_n. So, is there a nice combinatorial proof showing the correspondence between signed sets and signed ballots?

One can use the law of excluded middle to show that the answer must be “yes”: suppose the answer were “no”; but then I would not be writing this blog post, which is a contradiction since I am writing this blog post. But is there a constructive proof? Fear not! This blog post has gotten long enough, so I will stop here for now and let interested readers puzzle over it; in my next post I will explain what I came up with, along with some musings on linear orders and naturality.


  1. I am indeed using the word natural in a technical, categorical sense here! This will play an important role in my second post…


Virtual species suffice

Over six years ago, I wrote a post explaining how virtual species are defined. Ever since then (time flies!) I’ve been meaning to write a follow-up post explaining a bit more about virtual species and how they actually suffice to give us not just additive inverses, but also (somewhat surprisingly) multiplicative inverses.

Recall that the intuitive idea of a combinatorial species is a family of labelled structures which are invariant under relabelling. If you’ve never seen the formal definition before, don’t worry: just think “data structures” or “algebraic data types” for now.

The basic idea of virtual species is to work with pairs of species (P,N) where P is considered “positive” and N “negative”. Formally, we consider equivalence classes of such pairs under the equivalence relation defined by (P_1, N_1) \cong (P_2, N_2) iff P_1 + N_2 = P_2 + N_1.¹ This parallels the way one typically gives a formal definition of the integers starting from the natural numbers (the “Grothendieck construction”); see my previous post for more details.

Intuition

How can we build intuition for virtual species, and for additive inverses of species in particular? To be honest I have been struggling with this question for many years.

Multiplicative inverses are much simpler to think about: they are like matter and antimatter. Having both an F-structure and an F^{-1} structure is the same as having nothing; they annihilate each other. By “having nothing” we mean “having no information”, that is, having a unit value: F F^{-1} = 1.

What about additive inverses? Note first that the 0 species does not correspond to having nothing; the word “nothing” corresponds to the 1 (i.e. unit) species. Instead the 0 (i.e. uninhabited) species corresponds to (logical) impossibility. So to interpret -F we have to imagine something where having either F or -F is impossible.

…yeah, me neither. This seems deeply strange. If someone says, “I either have an F or a -F”, you can confidently call them a liar, because it is impossible to have either an F or a -F; that is, F - F = 0. But surely if you actually have an F-structure, it should also be true to say “I have either an F or a G”? Well, that works for normal, positive species—in which case we can define a canonical injection F \to F + G. But once we introduce negative species this completely breaks down. As another example, if someone truthfully says, “I have either a tree or a negative non-empty tree”, you should be able to say, “Aha! I know what you have—it must be an empty tree.” In general, it’s strange that expressing a disjunction can cause some possibilities to be ruled out. Normally, we are used to disjunctions only increasing the number of possibilities.

Inspired by James and Sabry’s really cool paper The Two Dualities of Computation: Negative and Fractional Types, I have thought a bit about whether there is some plausible interpretation involving travelling backwards in time, but I haven’t been able to come up with one. I can’t quite seem to make the formalism of the paper match up with my intuition about species (though this may just be a failure of my imagination or intuition).

Multiplicative Inverses

In any case, let’s see why the ring of virtual species actually has multiplicative inverses—at least, all the ones we could possibly hope for. This is somewhat surprising, since when we build integers from natural numbers by considering equivalence classes of pairs, we certainly don’t get any multiplicative inverses, only additive ones. To get multiplicative inverses we have to do the same process a second time, building the rational numbers as equivalence classes of pairs of integers. But species already have enough extra structure that throwing in additive inverses is all it takes.

First, a caveat: we don’t get multiplicative inverses for all species, but only those species G such that G(0) = 1: that is, species G with only a single structure of size zero, which are of the form G = 1 + X(\dots). With any constant term other than 1, we clearly have no hope of finding another species H such that GH = 1, since the constant term of GH will be a multiple of G’s constant term.

So given such a G, write G = 1 + G_+, where G_+ denotes “non-empty G-structures”. Then we can define the multiplicative inverse of G as follows:

\displaystyle G^{-1} = \sum_{k \geq 0} (-1)^k (G_+)^k = 1 - G_+ + G_+^2 - G_+^3 + \dots

That is, a G^{-1}-structure consists of a list of nonempty G-structures, except that even-length lists are considered “positive” and odd-length lists considered “negative”.

We can easily check that this indeed defines a multiplicative inverse for G:

\displaystyle \begin{array}{rcl}G G^{-1} &=& (1 + G_+) (1 - G_+ + G_+^2 - G_+^3 + \dots) \\[0.5em] &=& (1 - G_+ + G_+^2 - G_+^3 + \dots) + (G_+ - G_+^2 + G_+^3 - G_+^4 + \dots) \\[0.5em] &=& 1 \end{array}

The infinite sums telescope down to leave only 1. Notice that this isn’t really about species in particular, but about infinite power series (of which species are the categorification): any infinite power series with integer coefficients and a constant term of 1 has a multiplicative inverse which is also such a power series.

As an example, consider 1/(1-X) = (1-X)^{-1}. We know this is “supposed” to be the species of lists (since it results from solving L = 1 + XL for L), but let’s see what happens. In this case G = 1-X and G_+ = -X. So the inverse ought to be

\displaystyle (1-X)^{-1} = \sum_{k \geq 0} (-1)^k (-X)^k = \sum_{k \geq 0} X^k = 1 + X + X^2 + X^3 + \dots

And hey, look at that! Lists!
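
We can even check this sort of computation mechanically. Here is a little sketch (mine, not from the original post) that inverts a formal power series, represented as an infinite lazy list of integer coefficients, using the observation that if g = 1 + g' then 1/g = 1 - g' \cdot (1/g):

invSeries :: [Integer] -> [Integer]
invSeries (1 : gs) = r
  where r = 1 : map negate (mulSeries gs r)  -- lazy knot: 1/g = 1 - g'*(1/g)
invSeries _ = error "constant term must be 1"

-- (a + x*as) * bs = a*bs + x*(as*bs)
mulSeries :: [Integer] -> [Integer] -> [Integer]
mulSeries []     _  = []
mulSeries (a:as) bs = addSeries (map (a *) bs) (0 : mulSeries as bs)

addSeries :: [Integer] -> [Integer] -> [Integer]
addSeries (x:xs) (y:ys) = x + y : addSeries xs ys
addSeries xs     []     = xs
addSeries []     ys     = ys

ghci> take 6 (invSeries ([1,-1] ++ repeat 0))  -- 1/(1-x): lists!
  [1,1,1,1,1,1]

ghci> take 6 (invSeries (repeat 1))            -- and back again
  [1,-1,0,0,0,0]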

A field of species?

So what would we need to get a true field, i.e. a multiplicative inverse for every nonzero species? Well, for that we would need to throw in rational coefficients. I forget exactly where I read this—some paper by Baez and Dolan, most likely—but I believe the proper way to interpret this would be as groupoid-valued species, since there is a sense in which the “cardinality” of groupoids can be interpreted as rational numbers. But to be honest I have no idea where this leads.


  1. Note that species sum is cancellative—that is, if A + C = B + C then A = B—so this is a coherent definition. This cancellative property is probably worth another post of its own since the reason for it is not entirely trivial.


Advent of code #16 solution: an algebra of bitstrings

I had fun this past December solving Advent of Code problems in Haskell. I was particularly proud of my solution to one particular problem involving generating and processing large bitstrings, which I’d like to share here. I think it really shows off the power of an algebraic, DSL-based approach to problem solving.

This post is literate Haskell—download it and play along!

> {-# LANGUAGE GADTs #-}
> 
> import Control.Arrow   ((***))
> import Data.Bits       (xor)
> import Data.List       (unfoldr)
> import Data.List.Split (chunksOf)
> import Data.Maybe      (fromJust)

The problem

You can go read the problem description if you like, but it’s rather verbose—I’ll try to give a more concise description here, illustrated with Haskell code.

The problem is concerned with strings of bits:

> type BitString = [Bool]

We’ll start just by defining a few utility functions to view and input bitstrings conveniently.

> readbits :: String -> BitString
> readbits = map (=='1')
> 
> showbits :: BitString -> String
> showbits = map (\b -> if b then '1' else '0')
> 
> withbits :: (BitString -> BitString) -> String -> String
> withbits f = showbits . f . readbits

Now on to the problem proper. There is a central operation—which I’ll call the “dragon transform”—which makes a longer bitstring from a shorter one. Given a bitstring s, append a 0 to the end, and then append a reversed and inverted version of s (where “invert” means to flip all the bits). Like so:

> invert :: BitString -> BitString
> invert = map not
> 
> dragon :: BitString -> BitString
> dragon s = s ++ [False] ++ invert (reverse s)

For example,

ghci> withbits dragon "1"
  "100"

ghci> withbits dragon "1101111"
  "110111100000100"

(Incidentally, this operation is called dragon since it is related to the classic dragon curve. Hint: interpret 0 as a left turn and 1 as a right turn.)

Given a starting bitstring, and a target length, we are supposed to iterate dragon until we have at least the number of target bits, and then truncate the string to the desired length:

> fill :: Int -> BitString -> BitString
> fill len = take len . head . dropWhile ((< len) . length) . iterate dragon

For example, if we start with 1, we have to iterate dragon three times to end up with at least ten bits.

ghci> map showbits . take 4 $ iterate dragon [True]
  ["1","100","1000110","100011001001110"]

ghci> withbits (fill 10) "1"
  "1000110010"

Finally, after extending an initial bitstring to a given length, we perform a checksum operation:

  • If there are an odd number of bits, we are done.
  • Otherwise, take the bits two at a time and compute the negation of their exclusive or: that is, 1 if the bits are the same and 0 if they are different (otherwise known as (==)). This results in a bitstring half as long. Now repeat the process, continuing to halve the length until we have an odd number of bits remaining.

In code:

> checksum :: BitString -> BitString
> checksum a
>   | odd (length a) = a
>   | otherwise = checksum . map xnor . chunksOf 2 $ a
>   where
>     xnor [x,y] = x == y
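
For example:

ghci> withbits checksum "110010110100"
  "100"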

The first task

So, we now have a simple reference implementation that directly follows the specification. We can use this to solve the first task, which just asks to start with a given short bitstring, extend it to length 272, and then compute the checksum. I think different logged-in users get different starting strings, but mine was 01000100010010111:

> input = "01000100010010111"

ghci> withbits (checksum . fill 272) input
  "10010010110011010"

Notice that 272 = 17 \cdot 2^4, so after expanding to that length and then repeatedly halving the length, we end up with a checksum of length 17.

The second task

That was easy. Bring on the second task! Well… of course, it is much bigger. It asks to use the same starting bitstring, but this time extend it to length 35651584 = 17 \cdot 2^{21} before computing the checksum (which will again end up having length 17). Using this naive, unoptimized implementation completely blows up: it turns out that generating a list of 35 million booleans is really not a good idea. Using actual lists with a cons cell for each bit incurs a whole lot of memory and allocation overhead; it just made my computer grind to a halt.

As you may realize, there is a lot of low-hanging fruit here: for example, we can use an unboxed Vector instead of a list, or even do some deforestation to avoid allocation (the former code is by Eric Mertens aka glguy, the latter by Daniel Wagner aka dmwit). Using techniques like that, it’s possible to get the runtime and memory requirements down to something reasonable. But that’s not what I want to talk about. Though more efficient, those solutions are still actually computing every single bit. It seemed to me we shouldn’t have to do that: the computation has a lot of nice structure, and seemingly a lot of opportunity for sharing intermediate results. I went off in search of a way to compute the correct checksum without actually generating the entire intermediate bitstring.

Interlude: xnor

The first order of business was to work out an algebraic understanding of the xnor operation, which I will denote \overline{x \oplus y} (the circled plus operator \oplus denotes xor, and the overbar denotes logical negation). One fundamental fact is that

\displaystyle \overline{x \oplus y} = \overline{x} \oplus y = x \oplus \overline{y}

(checking whether x and y are equal is the same as first negating one and then checking whether they are unequal). From this, and the fact that \oplus is associative, we can prove associativity of xnor:

\displaystyle \overline{\overline{x \oplus y} \oplus z} = (\overline{x} \oplus y) \oplus \overline{z} = \overline{x} \oplus (y \oplus \overline{z}) = \overline{x \oplus \overline{y \oplus z}}

Associativity, along with the fact that 1 is an identity for the operation, means it forms a monoid. When we repeatedly take the xnor of adjacent bits, we are therefore basically doing an mconcat using a strictly balanced combining scheme. But associativity means we can be freer about the order in which we do the combining. If we start with a bitstring of length n \cdot 2^k, the checksumming operation iterates k times, and each consecutive sequence of 2^k bits gets folded down into a single bit via mconcat. In other words, the checksum operation can be reimplemented like this:

> checksum2 :: BitString -> BitString
> checksum2 a = map combine . chunksOf (powTwo (length a)) $ a
>   where
>     combine = foldr (==) True
> 
>     -- Find the biggest power of two that divides n
>     powTwo n
>       | odd n     = 1
>       | otherwise = 2 * powTwo (n `div` 2)

Let’s check that this works:

ghci> withbits (checksum2 . fill 272) input
  "10010010110011010"

ghci> let bits = fill 272 (readbits input) in checksum bits == checksum2 bits
  True

Now, this isn’t really any faster yet; but this idea will be important later!

There’s one more thing we can observe about xnor: if we fold an odd number of bits with xnor, it’s the same as taking the xor of all the bits; if we fold an even number of bits, it’s the same as taking the xor of all the bits and then negating the result. That is,

\displaystyle \begin{array}{rcl} \overline{x_1 \oplus x_2} &=& \overline{x_1 \oplus x_2} \\[0.5em] \overline{x_1 \oplus \overline{x_2 \oplus x_3}} &=& x_1 \oplus x_2 \oplus x_3 \\[0.5em] \overline{x_1 \oplus \overline{x_2 \oplus \overline{x_3 \oplus x_4}}} &=& \overline{x_1 \oplus x_2 \oplus x_3 \oplus x_4} \\[0.5em] \overline{x_1 \oplus \overline{x_2 \oplus \overline{x_3 \oplus \overline{x_4 \oplus x_5}}}} &=& x_1 \oplus x_2 \oplus x_3 \oplus x_4 \oplus x_5 \end{array}

and so on. The proof is a simple induction argument, making use of the relation \overline{x \oplus y} = \overline{x} \oplus y we noted before. So when folding xnor, as a simple optimization, we can avoid doing a lot of negations by just computing the xor and then negating appropriately based on the parity of the number of bits.
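Spelled out in code (a small sketch of mine, not part of the literate program above):

xnorFold :: [Bool] -> Bool
xnorFold bs
  | even (length bs) = not x   -- even number of bits: negate the xor
  | otherwise        = x       -- odd number of bits: just the xor
  where
    x = foldr xor False bs

ghci> import Control.Monad (replicateM)
ghci> all (\bs -> xnorFold bs == foldr (==) True bs) (concatMap (\n -> replicateM n [False,True]) [0..8])
  True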

The algebra of bitstrings

With that under our belts, we can move on to the real meat of the solution. The central idea is that instead of representing bitstrings directly as lists (or vectors, or whatever) of bits, we represent them using a deep embedding of a little bitstring algebra (aka DSL). That is, we represent each bitstring operation as a constructor of an algebraic data type, which allows us to directly manipulate bitstring expressions. The point is that this algebra/DSL has a lot of nice structure that allows us to work at an abstract, algebraic level instead of working directly with bits.

There’s one more twist to note before actually seeing the data type definition. We know that we will need to talk about the length of bitstrings as well as their xnor/xor. Instead of having to recalculate these every time we need them, we can cache them at each node of a bitstring expression. We’ll see how these cached values come in handy later.

> data BitExpr where

So, what does our algebra of bitstrings need? First, it’s useful to have an explicit representation of the empty bitstring, as well as a singleton bit. We don’t need to cache length or xor values here, since they are obvious and can be computed in constant time.

>   Emp :: BitExpr
>   Bit :: Bool -> BitExpr

Next, we need to be able to append bitstrings. Notice the Bool, which represents the cached xor of the entire bitstring, as well as the Integer which represents the length.

>   App :: !Bool -> !Integer -> BitExpr -> BitExpr -> BitExpr

Finally, we need three unary operations on bitstrings: invert, reverse, and dragon. Each also carries a cached length and xor.

>   Inv :: !Bool -> !Integer -> BitExpr -> BitExpr
>   Rev :: !Bool -> !Integer -> BitExpr -> BitExpr
>   Drg :: !Bool -> !Integer -> BitExpr -> BitExpr
> 
>   deriving Show

Note that Drg is redundant in some sense, since the dragon transform can be encoded in terms of append, inverse, and reverse. However, it’s critical that we include it explicitly: since the dragon transform uses the input bitstring twice, expanding an iterated application of Drg in terms of the other constructors would result in an exponential blowup in the size of the expression.

To be concrete, let’s write a straightforward interpreter which formally connects a bitstring expression with its intended semantics as a bitstring. This comes in handy for testing, but other than testing, the whole point is that we will not use this—we want to solve the problem at the level of bitstring expressions, without ever actually generating their corresponding bitstrings.

> toBits :: BitExpr -> BitString
> toBits Emp = []
> toBits (Bit b) = [b]
> toBits (App _ _ s1 s2) = toBits s1 ++ toBits s2
> toBits (Inv _ _ s) = invert  (toBits s)
> toBits (Rev _ _ s) = reverse (toBits s)
> toBits (Drg _ _ s) = dragon  (toBits s)

Next, let’s write some simple utility functions to extract the cached length or xor from the root of a bitstring expression:

> bsLen :: BitExpr -> Integer
> bsLen Emp           = 0
> bsLen (Bit _)       = 1
> bsLen (App _ l _ _) = l
> bsLen (Inv _ l _)   = l
> bsLen (Rev _ l _)   = l
> bsLen (Drg _ l _)   = l
> 
> bsXor :: BitExpr -> Bool
> bsXor Emp           = False
> bsXor (Bit b)       = b
> bsXor (App b _ _ _) = b
> bsXor (Inv b _ _)   = b
> bsXor (Rev b _ _)   = b
> bsXor (Drg b _ _)   = b

Next, we’ll write some smart constructors which automatically take care of properly computing the cached length and xor.

> bit :: Bool -> BitExpr
> bit = Bit

Appending combines xor values with xor and adds lengths. app also does a bit of optimization when appending with the empty bitstring. For convenience, we can also use app to create a function bits to convert a literal bitstring into a BitExpr.

> app :: BitExpr -> BitExpr -> BitExpr
> app s1 Emp = s1
> app s1 s2 = App (bsXor s1 `xor` bsXor s2) (bsLen s1 + bsLen s2) s1 s2
> 
> bits :: String -> BitExpr
> bits = foldr (app . bit . (=='1')) Emp

Inverting a bitstring preserves the xor when it has even length, and inverts the xor when it has odd length. Note how we make use of both the cached xor and length values to compute the new cached xor.

> inv :: BitExpr -> BitExpr
> inv s = Inv (if even (bsLen s) then bsXor s else not (bsXor s))
>             (bsLen s)
>             s

Reversing preserves xor and length.

> rev :: BitExpr -> BitExpr
> rev s = Rev (bsXor s) (bsLen s) s

Finally, the dragon operation: the xor of dragon s is the xor of s combined with the xor of inv s; the length is one more than twice the length of s.

> drg :: BitExpr -> BitExpr
> drg s = Drg (bsXor s `xor` bsXor (inv s)) (2*(bsLen s) + 1) s

We can test these:

ghci> let t = drg (bits "11" `app` inv (bits "10000"))
ghci> showbits . toBits $ t
  "110111100000100"

ghci> bsLen t
  15

Splitting

Remember that our high-level goal is to take the expanded version of our bitstring, split it into blocks of length 2^k, and then separately reduce each block with xnor. It turns out that we have enough information to split a bitstring expression into two bitstring expressions which correspond to splitting off a block of a given size from the beginning of the corresponding bitstring. That is, we will write a function splitBits :: Integer -> BitExpr -> (BitExpr, BitExpr) which works like splitAt, but on bitstring expressions instead of bitstrings. In other words, it will satisfy the property

splitAt n . toBits == (toBits *** toBits) . splitBits n

We’ll go through the implementation case by case. You might like to try implementing splitBits yourself before peeking at mine; it makes for a nice exercise.
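If you want to test an implementation, something like the following QuickCheck sketch ought to do it (my own addition—it is not part of the literate program, and assumes the Test.QuickCheck library). It generates random expressions via the smart constructors, so that the cached lengths and xors are consistent:

import Test.QuickCheck

-- Cap the size: nested Drg nodes double the underlying bitstring,
-- so deep expressions would make toBits blow up.
genExpr :: Gen BitExpr
genExpr = sized (go . min 8)
  where
    go 0 = oneof [pure Emp, bit <$> arbitrary]
    go n = oneof
      [ app <$> go (n `div` 2) <*> go (n `div` 2)
      , inv <$> go (n - 1)
      , rev <$> go (n - 1)
      , drg <$> go (n - 1)
      ]

prop_split :: Property
prop_split =
  forAll genExpr $ \s ->
  forAll (choose (0, bsLen s)) $ \n ->
    (toBits *** toBits) (splitBits n s)
      == splitAt (fromIntegral n) (toBits s)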

> splitBits :: Integer -> BitExpr -> (BitExpr, BitExpr)

In the base cases—splitting off zero bits, or asking for at least as many bits as there are—one side of the result is simply the empty bitstring expression.

> splitBits 0 s                = (Emp, s)
> splitBits n s | n >= bsLen s = (s, Emp)

To split an App node, compare the number of bits we want to split off with the length of the first bitstring, and recursively split in either the left or right side appropriately, remembering to subtract the length of the first bitstring from the number of bits to split if we recurse on the right side.

> splitBits n (App _ _ s1 s2)
>   | n < bsLen s1
>     = let (s1a, s1b) = splitBits n s1 in (s1a, s1b `app` s2)
>   | otherwise
>     = let (s2a, s2b) = splitBits (n - bsLen s1) s2 in (s1 `app` s2a, s2b)

Inverting commutes with splitting, so to split an Inv node, we can just split recursively and then rewrap the results with inv.

> splitBits n (Inv _ _ s) = (inv *** inv) $ splitBits n s

To split Rev and Drg nodes, we expand the expressions a bit to get rid of the top-level constructor before re-calling splitBits.

> splitBits n (Rev _ _ s) = splitBits n (pushRev s)
> splitBits n (Drg _ _ s) = splitBits n (expandDragon s)

In the case of Rev, we can “push the reverse through” one level, transforming it into an equivalent expression which no longer has a Rev node at the top. We make use of some nice algebraic properties governing the interaction of reverse with the other operations:

  • Reversing an empty or singleton bitstring does nothing.
  • reverse (s1 ++ s2) == reverse s2 ++ reverse s1
  • reverse . invert = invert . reverse
  • reverse . reverse = id
  • Finally, reverse . dragon = dragon . invert, which can be easily proved by expanding dragon in terms of the other operations and then applying the above algebraic laws.

Using these properties, we can implement pushRev as follows:

> pushRev :: BitExpr -> BitExpr
> pushRev Emp     = Emp
> pushRev (Bit b) = Bit b
> pushRev (App _ _ s1 s2) = rev s2 `app` rev s1
> pushRev (Inv _ _ s) = inv (rev s)
> pushRev (Rev _ _ s) = s
> pushRev (Drg _ _ s) = drg (inv s)

Finally, expandDragon just expands a dragon operation in terms of the other operations. Although this approximately doubles the size of the bitstring expression, we only do this lazily, when we are actually trying to split the result of a dragon transform. It’s only natural that splitting an expression results in somewhat larger expressions.

> expandDragon :: BitExpr -> BitExpr
> expandDragon s = s `app` (bit False `app` inv (rev s))
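
As a quick sanity check that the expansion preserves the underlying bitstring:

ghci> toBits (expandDragon (bits "1101111")) == dragon (readbits "1101111")
  True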

Filling and checksumming

We’re almost there! We can now implement the fill and checksum operations at the level of bitstring expressions.

fillE is straightforward: keep applying the drg smart constructor until the cached length is sufficient, then use splitBits to create an expression corresponding to only the first n bits.

> fillE :: Integer -> String -> BitExpr
> fillE n str = fst . splitBits n $ go (bits str)
>   where
>     go s | bsLen s >= n = s
>          | otherwise    = go (drg s)
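
A quick check against the list-based fill from the reference implementation:

ghci> showbits (toBits (fillE 10 "1"))
  "1000110010"

ghci> withbits (fill 10) "1"
  "1000110010"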

Finally, we can implement checksumE using the same pattern as checksum2, where we break up the string into chunks of size 2^k and then reduce each chunk. The only difference is that now we use splitBits to split, and the cached xor to compute the reduction. We know each of the blocks has an even length, so the xnor is just the negation of the cached xor.

> checksumE :: BitExpr -> BitString
> checksumE s = map (not . bsXor) . unfoldr doSplit $ s
>   where
>     doSplit Emp = Nothing
>     doSplit s   = Just (splitBits blockLen s)
>     blockLen = powTwo (bsLen s)
>     powTwo n
>       | odd n     = 1
>       | otherwise = 2 * powTwo (n `div` 2)

Let’s check that we get the same answer for the first task:

ghci> showbits $ checksumE (fillE 272 input)
  "10010010110011010"

ghci> withbits (checksum . fill 272) input
  "10010010110011010"

Great! And now for the second task:

ghci> showbits $ checksumE (fillE (17 * 2^21) input)
  "01010100101011100"

On my machine this finishes pretty much instantaneously, taking only 0.02 seconds. In order to generate enough bits, the dragon transform must be applied 21 times, but that just generates a small expression with 21 Drg constructors. Splitting into chunks of length 2^{21} certainly expands the size of the expressions a bit, but everything stays nice and logarithmic since many of the Drg constructors can remain unexpanded.

In fact, this can easily handle MUCH larger problem instances. For example:

ghci> showbits $ checksumE (fillE (17 * 2^80) input)
  "10000100010001100"

ghci> showbits $ checksumE (fillE (17 * 2^81) input)
  "01010100101011100"

Semantically, this corresponds to generating yottabytes worth of bits (I had to look up the proper prefix) and then checksumming them; operationally, though, these are still basically instantaneous. (Interestingly, I also tried 17 \cdot 2^{200}, and it instantaneously printed the first 11 bits of the answer and then segfaulted. Perhaps I have found a bug in GHC 8.0.2.)

Notice that the checksum for 17 \cdot 2^{81} is actually the same as that for 17 \cdot 2^{21}. After playing around with it a bit, the checksums for 17 \cdot 2^k seem to have a period of 12, but I’m not sure how to prove it!


My new programming languages course

tl;dr: my new PL course is now finished, and all the course materials are freely available. Working through all the exercises should be a great option for anyone wishing to learn some basics of programming language design and implementation.

Last May, I wrote about my ideas for designing a new PL course, and got a lot of great comments and feedback. Well, somehow I survived the semester, and the course is now over. In the end I’m pretty happy with how it went (though of course there are always things that can be improved next time).

I decided to use class time in an unconventional way: for each class meeting I created a “module”, consisting of a literate Haskell file with some example code, explanatory text, and lots of holes where students needed to write answers to exercises or fill in code. I split the students into groups, and they spent class time just working through the module. Instead of standing at the front lecturing, I just wandered around watching them work and answering questions. It took a bit of getting used to—for the first few classes I couldn’t shake the feeling that I wasn’t really doing my job—but it quickly became clear that the students were really learning and engaging with the material in a way that they would not have been able to if I had just lectured.

A happy byproduct of this approach is that the modules are fairly self-contained and can now be used by anyone to learn the material. Reading through all the modules and working through the exercises should be a great option for anyone wishing to learn some basics of programming language design and implementation. For example, I know I will probably reuse it to get summer research students up to speed. Note that the course assumes no knowledge of Haskell (so those familiar with Haskell can safely skip the first few modules), but introduces just enough to get where I want to go.

I don’t plan to release any solutions, so don’t ask. But other than that, questions, comments, bug reports, etc. are welcome!


MonadRandom 0.5 released

I’m happy to announce the release of MonadRandom-0.5, a package which provides a convenient monadic interface for random number generation in the style of transformers and mtl: a Rand monad (essentially a state monad that threads through a generator), a monad transformer variant called RandT, and a MonadRandom class allowing the use of random generation operations in monad stacks containing RandT.

This release has quite a few small additions as well as a big module reorganization. However, thanks to module re-exports, most existing code using the library should continue to work with no changes; the major version bump reflects the large reorganization and my resultant inability to 100% guarantee that existing user code will not break. If your code does break, please let me know—I would be happy to help you fix it, or simply to know about it so I can help other users.

Here are a few of the biggest changes that may be of interest to users of the library:

  • A new MonadInterleave class (see #20), which is a big improvement over MonadSplit. It provides a method interleave :: m a -> m a, which works by splitting the generator, running its argument using one half of the generator, and using the other half as the final state of the resulting action (replacing whatever the final generator state otherwise would have been). This can be used, for example, to allow random computations to run in parallel, or to create lazy infinite structures of random values. In the example below, the infinite tree randTree cannot be evaluated lazily: even though it is cut off at two levels deep by hew 2, the random value in the right subtree still depends on generation of all the random values in the (infinite) left subtree, even though they are ultimately unneeded. Inserting a call to interleave, as in randTreeI, solves the problem: the generator splits at each Node, so random values in the left and right subtrees are generated independently.

    data Tree = Leaf | Node Int Tree Tree deriving Show
    
    hew :: Int -> Tree -> Tree
    hew 0 _    = Leaf
    hew _ Leaf = Leaf
    hew n (Node x l r) = Node x (hew (n-1) l) (hew (n-1) r)
    
    randTree :: Rand StdGen Tree
    randTree = Node <$> getRandom <*> randTree <*> randTree
    
    randTreeI :: Rand StdGen Tree
    randTreeI = interleave $ Node <$> getRandom <*> randTreeI <*> randTreeI
    
    >>> hew 2 <$> evalRandIO randTree
    Node 2168685089479838995 (Node (-1040559818952481847) Leaf Leaf) (Node ^CInterrupted.
    >>> hew 2 <$> evalRandIO randTreeI
    Node 8243316398511136358 (Node 4139784028141790719 Leaf Leaf) (Node 4473998613878251948 Leaf Leaf)
  • A new PrimMonad instance for RandT (thanks to Koz Ross), allowing it to be used in conjunction with e.g. mutable vectors.

  • New and improved random selection functions:
    • fromList now raises an error when the total weight of elements is zero.
    • The type of uniform is generalized to work over any Foldable.
    • New operations weighted, weightedMay, fromListMay, and uniformMay have been added. weighted is like fromList but generalized to work over any Foldable. The May variants, of course, return a Maybe result instead of raising an error. (See the small example after this list.)
  • New lazy vs strict variants of the Rand monad. If you import Control.Monad.Random or Control.Monad.Trans.Random you get the Lazy variant re-exported by default, but you can explicitly import .Lazy or .Strict if you want. They provide the exact same API, but Lazy is implemented with a lazy state monad and Strict with a strict one. To be honest it’s not clear what difference this might make, but since the distinction is already there with the underlying state monad for free, why not provide it?
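
To give a concrete feel for the new selection functions, here is a tiny example of my own (not from the change log): weighted takes (value, weight) pairs, with Rational weights, in any Foldable container.

    -- A biased coin: True comes up twice as often as False, on average.
    biasedCoin :: MonadRandom m => m Bool
    biasedCoin = weighted [(True, 2), (False, 1)]

    >>> evalRandIO biasedCoin
    True

(Your output may vary, of course.)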

Although there was some discussion of generalizing MonadRandom to work for a wider range of underlying generators (see the comments on my previous blog post and the discussion on issue #26), I decided to punt on that for now. It seems rather complicated, and there are already good alternatives like the very nice random-fu package, so I opted to keep things simple for this release. I’m still open to proposals for generalizing future releases.

For a full rundown of changes in 0.5, see the change log. Comments, questions, and bug reports are always welcome either as a comment on this blog post or on the GitHub issue tracker.


MonadRandom 0.5 and mwc-random: feedback wanted

Since 2013 or so I have been the maintainer of the MonadRandom package, which provides an mtl-style type class for monads with support for generation of pseudorandom values, along with a concrete random monad transformer RandT. As of this writing it has 89 reverse dependencies on Hackage—a healthy number, and one that makes me think carefully about any breaking changes to the package.

Recently I got a number of pull requests, and have been working on putting together an 0.5 release which adds a few functions, adds lazy- and strict-state variants of RandT, and reorganizes things to be closer to standard practice of the transformers package. Since this release will include some technically breaking changes already, it’s a good time to think about potentially including others.

The one thing I am not sure what to do about is this issue: Allow MonadRandom interface for MWC-random. mwc-random is a very nice package for pseudorandom number generation, but apparently it does not fit into the MonadRandom abstraction. First of all, I would like to understand why—I am not very familiar with mwc-random. Second of all, I’d love to figure out a solution, but ideally one that causes as little breakage to existing code as possible.

Leave a comment (either here or on the GitHub issue) if this is something you know/care about, and let’s see if we can figure out a good solution together!
