Hendrix teams at ACM ICPC

Today I took five students from Hendrix to Fort Smith to compete in the Mid-Central USA Regional of the ACM International Collegiate Programming Contest (ICPC). Both teams solved five out of ten problems, and finished second and fourth out of 18 teams at our site (23rd and 25th out of 121 teams in the whole region). I’m really proud of them!

A lot of fun was had by all. Thanks to Andrew Mackey and the rest of the folks at UAFS for hosting a great contest!


What’s the Difference? video and slides

At ICFP in St. Louis I presented my paper with Kenny Foner, What’s the Difference? A Functional Pearl on Subtracting Bijections. Many people seemed to enjoy the talk and we got lots of great feedback afterwards. We will probably write up an extended version of the paper for JFP, to include extra stuff we could not fit in the paper and to incorporate things we learned at ICFP.

(Fun fact: this was actually my first ICFP talk—though not my first ICFP paper. I am glad I have a bunch of experience giving talks at this point; I can remember a time in my life when giving a talk on that stage would have been terrifying.)

Here is the video of the talk:

You can download my slides here. This version of the slides includes speaking notes—it’s not exactly what I ended up saying, but it should give you a decent idea if you just want to read the slides without watching the video.


Counting inversions with monoidal sparks

Time for me to reveal the example I had in mind that led to the generalization in my previous post. Thanks for all the interesting comments: it seems like there are some interesting connections to be explored (e.g. to the algebra of graphs, formal group laws, …?)!

This is a literate Haskell post; download it and play along in ghci! (Note it requires GHC 8.6 since I couldn’t resist making use of DerivingVia…)

> {-# LANGUAGE DefaultSignatures          #-}
> {-# LANGUAGE FlexibleInstances          #-}
> {-# LANGUAGE MultiParamTypeClasses      #-}
> {-# LANGUAGE DerivingStrategies         #-}
> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> {-# LANGUAGE DerivingVia                #-}
> {-# LANGUAGE TypeApplications           #-}
> {-# LANGUAGE ScopedTypeVariables        #-}
> 
> import Data.Semigroup
> import Data.Coerce

Consider a sequence of integers \sigma = a_1, a_2, \dots, a_n. (Actually, integers is too specific; any linearly ordered domain will do.) An inversion is a pair of positions in \sigma which are “out of order”: that is, (i,j) such that i < j but a_i > a_j. So, for example, \sigma = [3,5,1,4,2] has six inversions, namely (3,1), (3,2), (5,1), (5,4), (5,2), (4,2). (Here I’ve written the elements that are out of order rather than their positions, which doesn’t matter much when the elements are all distinct; but it’s important to keep in mind that e.g. [2,2,1] has two inversions, not one, because each copy of 2 makes an inversion with the 1.) The total number of inversions of \sigma is denoted \mathrm{inv}(\sigma).

One way to think about the inversion count is as a measure of how far away the sequence is from being sorted. In particular, bubble sort will make precisely \mathrm{inv}(\sigma) adjacent swaps while sorting \sigma. The highest possible value of \mathrm{inv}(\sigma) is n(n-1)/2, when \sigma is sorted in reverse order.

The obvious brute-force algorithm to count inversions is to use two nested loops to enumerate all possible pairs of elements, and increment a counter each time we discover a pair which is out of order. This clearly takes O(n^2) time. Can it be done any faster?
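
For concreteness, here is a direct transcription of the brute-force algorithm; it plays no further role in the development, but it gives us a reference implementation to check against:

> -- Naive, O(n^2) inversion count: enumerate all pairs of positions
> -- i < j and count those that are out of order.
> invNaive :: Ord a => [a] -> Int
> invNaive as = length
>   [ () | (i, ai) <- zip [0 :: Int ..] as
>        , (j, aj) <- zip [0 ..] as
>        , i < j, ai > aj ]

For example, invNaive [3,5,1,4,2] yields 6, matching the six inversions listed above.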

It turns out the (generally well-known) answer is yes, using a variant of mergesort. The trick is to generalize: count inversions and sort the sequence at the same time. First split the sequence in half, and recursively sort and count the inversions in each half. Any inversion in the original sequence must either be entirely contained in one of the two halves (these will be counted by the recursive calls), or have one endpoint in the left half and one in the right. One key observation at this point is that any inversion with one endpoint in each half will still be an inversion even after independently sorting the two halves. The other key observation is that we can merge the two sorted subsequences and count the inversions between them in linear time.

To count inversions during the merge, use the usual two-finger algorithm for merging two sorted sequences; each time we take an element from the right subsequence, it's because it is less than all the remaining elements in the left subsequence, but it was to the right of all of them, so we can add the length of the remaining left subsequence to the inversion count. Intuitively, it's this ability to count a whole batch of inversions in one step which allows this algorithm to be more efficient, since any algorithm which only ever increments an inversion counter is doomed to take O(n^2) time no matter how cleverly it splits up the counting. In the end, the total number of inversions is the sum of the inversions counted recursively in the two sublists, plus the inversions between the two sublists.

Here’s some Haskell code implementing this sorted-merge-and-inversion-count. We have to be a bit careful because we don’t want to call length on the remaining sublist at every step (that would ruin the asymptotic performance!), so we precompute the length of the left subsequence and pass it along as an extra parameter, keeping it up to date as we recurse.

> -- Precondition: the input lists are sorted.
> -- Output the sorted merge of the two lists, and the number of pairs
> -- (a,b) such that a \in xs, b \in ys with a > b.
> mergeAndCount :: Ord a => [a] -> [a] -> ([a], Int)
> mergeAndCount xs ys = go xs (length xs) ys
>   -- precondition/invariant for go xs n ys:   n == length xs
>   where
>     go [] _ ys = (ys, 0)
>     go xs _ [] = (xs, 0)
>     go (x:xs) n (y:ys)
>       | x <= y    = let (m, i) = go xs (n-1) (y:ys) in (x:m, i)
>       | otherwise = let (m, i) = go (x:xs) n ys     in (y:m, i + n)
> 
> merge :: Ord a => [a] -> [a] -> [a]
> merge xs ys = fst (mergeAndCount xs ys)
> 
> inversionsBetween :: Ord a => [a] -> [a] -> Int
> inversionsBetween xs ys = snd (mergeAndCount xs ys)
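
For example:

λ> mergeAndCount [2,4] [1,3,5]
([1,2,3,4,5],3)

The three pairs counted here are (2,1), (4,1), and (4,3).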

Do you see how this is an instance of the sparky monoid construction in my previous post? A is the set of sorted lists with merge as the monoid operation; B is the natural numbers under addition. The spark operation takes two sorted lists and counts the number of inversions between them. So the monoid on pairs A \times B merges the lists, and adds the inversion counts together with the number of inversions between the two lists.

We have to verify that this satisfies the laws: letting a be any sorted list, we need

  • a \cdot \varepsilon_A = \varepsilon_B, that is, a `inversionsBetween` [] = 0. This is true since there are never any inversions between a and an empty list. Likewise for \varepsilon_A \cdot a = \varepsilon_B.

  • a `inversionsBetween` (a1 `merge` a2) == (a `inversionsBetween` a1) + (a `inversionsBetween` a2). This is also true since a1 `merge` a2 contains the same elements as a1 and a2: any inversion between a and a1 `merge` a2 will be an inversion between a and a1, or between a and a2, and vice versa. The same reasoning shows that (a1 `merge` a2) `inversionsBetween` a == (a1 `inversionsBetween` a) + (a2 `inversionsBetween` a).
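
We can spot-check these laws in ghci:

λ> [2,4] `inversionsBetween` []
0
λ> [2,4] `inversionsBetween` ([1,3] `merge` [5])
3
λ> ([2,4] `inversionsBetween` [1,3]) + ([2,4] `inversionsBetween` [5])
3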

Note that A and B are commutative monoids, but the spark operation isn’t commutative; in fact, assuming the two elements are distinct, a given pair is an inversion between a1 and a2 precisely when it is not an inversion between a2 and a1. Note also that A and B aren’t idempotent; for example, merging a sorted list with itself produces not the same list, but a new list with two copies of each element.

So let’s see some more Haskell code to implement the entire algorithm in a nicely modular way. First, let’s encode sparky monoids in general. The Sparky class is for pairs of types with a spark operation. As we saw in the example above, sometimes it may be more efficient to compute a_1 \diamond a_2 and the spark a_1 \cdot a_2 at the same time, so we bake that possibility into the class.

> class Sparky a b where

The basic spark operation, with a default implementation that projects the result out of the prodSpark method.

>   (<.>) :: a -> a -> b
>   a1 <.> a2 = snd (prodSpark a1 a2)

prodSpark does the monoidal product and spark at the same time, with a default implementation that just does them separately.

>   prodSpark :: a -> a -> (a,b)
>   default prodSpark :: Semigroup a => a -> a -> (a,b)
>   prodSpark a1 a2 = (a1 <> a2, a1 <.> a2)

Finally we can specify that we have to implement one or the other of these methods.

>   {-# MINIMAL (<.>) | prodSpark #-}

Sparked a b is just a pair type, but with Semigroup and Monoid instances that implement the sparky product.

> data Sparked a b = S { getA :: a, getSpark :: b }
>   deriving Show
> 
> class Semigroup a => CommutativeSemigroup a
> class (Monoid a, CommutativeSemigroup a) => CommutativeMonoid a
> 
> instance (Semigroup a, CommutativeSemigroup b, Sparky a b) => Semigroup (Sparked a b) where
>   S a1 b1 <> S a2 b2 = S a' (b1 <> b2 <> b')
>     where (a', b') = prodSpark a1 a2
> 
> instance (Monoid a, CommutativeMonoid b, Sparky a b) => Monoid (Sparked a b) where
>   mempty = S mempty mempty

Now we can make instances for sorted lists under merge…

> newtype Sorted a = Sorted [a]
>   deriving Show
> 
> instance Ord a => Semigroup (Sorted a) where
>   Sorted xs <> Sorted ys = Sorted (merge xs ys)
> instance Ord a => Monoid (Sorted a) where
>   mempty = Sorted []
> 
> instance Ord a => CommutativeSemigroup (Sorted a)
> instance Ord a => CommutativeMonoid (Sorted a)

…and for inversion counts.

> newtype InvCount = InvCount Int
>   deriving newtype Num
>   deriving Show via Sum Int  -- so counts display as Sum values in ghci
>   deriving (Semigroup, Monoid) via Sum Int
> 
> instance CommutativeSemigroup InvCount
> instance CommutativeMonoid InvCount

Finally we make the Sparky (Sorted a) InvCount instance, which is just mergeAndCount (some conversion between newtypes is required, but we can get the compiler to do it automagically via coerce and a bit of explicit type application).

> instance Ord a => Sparky (Sorted a) InvCount where
>   prodSpark = coerce (mergeAndCount @a)
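
We can try it out, and the result matches mergeAndCount:

λ> prodSpark (Sorted [2,4]) (Sorted [1,3,5]) :: (Sorted Int, InvCount)
(Sorted [1,2,3,4,5],Sum {getSum = 3})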

And here’s a function to turn a single a value into a sorted singleton list paired with an inversion count of zero, which will come in handy later.

> single :: a -> Sparked (Sorted a) InvCount
> single a = S (Sorted [a]) 0
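
For example:

λ> single 3
S {getA = Sorted [3], getSpark = Sum {getSum = 0}}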

As the last ingredient, we can make some generic infrastructure for doing monoidal folds. First, Parens a encodes lists of a which have been explicitly associated, i.e. fully parenthesized:

> data Parens a = Leaf a | Node (Parens a) (Parens a)
>   deriving Show

We can make a generic fold for Parens a values, which maps each Leaf into the result type b, and replaces each Node with a binary operation:

> foldParens :: (a -> b) -> (b -> b -> b) -> Parens a -> b
> foldParens lf _  (Leaf a)   = lf a
> foldParens lf nd (Node l r) = nd (foldParens lf nd l) (foldParens lf nd r)

Now for a function which splits a list in half recursively to produce a balanced parenthesization.

> balanced :: [a] -> Parens a
> balanced []  = error "List must be nonempty"
> balanced [a] = Leaf a
> balanced as  = Node (balanced as1) (balanced as2)
>   where (as1, as2) = splitAt (length as `div` 2) as
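
For example:

λ> balanced [1..5]
Node (Node (Leaf 1) (Leaf 2)) (Node (Leaf 3) (Node (Leaf 4) (Leaf 5)))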

Finally, we can make a balanced variant of foldMap: instead of just mapping a function over a list and then reducing with mconcat, as foldMap does, it first creates a balanced parenthesization for the list and then reduces via the given monoid. This will always give the same result as foldMap due to associativity, but in some cases it may be more efficient.

> foldMapB :: Monoid m => (e -> m) -> [e] -> m
> foldMapB leaf = foldParens leaf (<>) . balanced

Let’s try it out!

λ> :set +s
λ> getSpark $ foldMap single [3000, 2999 .. 1 :: Int]
Sum {getSum = 4498500}
(34.94 secs, 3,469,354,896 bytes)
λ> getSpark $ foldMapB single [3000, 2999 .. 1 :: Int]
Sum {getSum = 4498500}
(0.09 secs, 20,016,936 bytes)

Empirically, it does seem that we are getting quadratic performance with normal foldMap, but O(n \log n) with foldMapB. We can verify that we are getting the correct inversion count in either case, since we know there should be n(n-1)/2 when the list is reversed, and sure enough, 3000 \cdot 2999 / 2 = 4498500.


Monoidal sparks

While at ICFP in St. Louis this past week, I discovered an interesting construction on monoids that no one I talked to (including Kenny Foner and Edward Kmett) seemed to have heard of or thought about before. In this post I’ll present the abstract construction itself, and in another post I’ll explain the particular context in which I first came up with it. (Normally I would put the examples first, before explaining the idea in general; but I only know of one example so far, and I’m curious to see if anyone will come up with more. I don’t want to bias people’s thinking too much with my one example!)

The bigger context here is thinking about different ways of putting a monoid structure on a product type A \times B, assuming A and B are themselves monoids. Previously I knew of two ways to do this. The obvious way is to just have the two sides operate in parallel, without interacting at all: (a_1,b_1) \diamond (a_2,b_2) = (a_1 \diamond a_2, b_1 \diamond b_2) and so on. Alternatively, if A acts on B, we can form the semidirect product.

But here is a third way. Suppose (A, \varepsilon_A, \diamond) is a monoid, and (B, \varepsilon_B, \oplus) is a commutative monoid. To connect them, we also suppose there is another binary operation - \cdot - : A \to A \to B, which I will pronounce “spark”. The way I like to think of it is that two values of type A, in addition to combining to produce another A via the monoid operation, also produce a “spark” of type B. That is, values of type B somehow capture information about the interaction between pairs of A values.

Sparking needs to interact sensibly with the monoid structures on A and B: in particular, fixing either argument must result in a monoid homomorphism from A to B. That is, for any choice of a \in A, we have a \cdot \varepsilon_A = \varepsilon_B (i.e. \varepsilon_A is boring and can never produce a nontrivial spark with anything else), and a \cdot (a_1 \diamond a_2) = (a \cdot a_1) \oplus (a \cdot a_2) (i.e. sparking commutes with the monoid operations). Similar laws should hold if we fix the second argument of - \cdot - instead of the first.

Given all of this, we can now define a monoid on A \times B as follows:

(a_1, b_1) \otimes (a_2, b_2) = (a_1 \diamond a_2, b_1 \oplus b_2 \oplus (a_1 \cdot a_2))

That is, we combine the A values normally, and then we combine the B values together with the “spark” of type B produced by the two A values.
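
As a quick sanity check on the types, here is this product transcribed into a Haskell sketch, with the two monoid operations and the spark passed as explicit parameters (nothing here is tied to any particular type classes):

sparkProd
  :: (a -> a -> a)        -- the monoid operation on A
  -> (b -> b -> b)        -- the (commutative) monoid operation on B
  -> (a -> a -> b)        -- the spark
  -> ((a, b) -> (a, b) -> (a, b))
sparkProd diamond oplus spark (a1, b1) (a2, b2) =
  (a1 `diamond` a2, b1 `oplus` b2 `oplus` (a1 `spark` a2))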

Let’s see that this does indeed define a valid monoid:

  • The identity is (\varepsilon_A, \varepsilon_B), since

    (\varepsilon_A, \varepsilon_B) \otimes (a,b) = (\varepsilon_A  \diamond a, \varepsilon_B \oplus b \oplus (\varepsilon_A \cdot a)) =  (a, b \oplus \varepsilon_B) = (a,b)

    Notice how we used the identity laws for A (once) and B (twice), as well as the law that says \varepsilon_A \cdot a =  \varepsilon_B. The proof that (\varepsilon_A, \varepsilon_B) is a right identity for \otimes is similar.

  • To see that the combining operation is associative, we can reason as follows:

    \begin{array}{rcl}
    & & ((a_1,b_1) \otimes (a_2,b_2)) \otimes (a_3,b_3) \\[0.5em]
    & = & \{\text{expand definition of } \otimes\} \\[0.5em]
    & & (a_1 \diamond a_2,\ b_1 \oplus b_2 \oplus (a_1 \cdot a_2)) \otimes (a_3,b_3) \\[0.5em]
    & = & \{\text{expand definition of } \otimes \text{ again}\} \\[0.5em]
    & & (a_1 \diamond a_2 \diamond a_3,\ b_1 \oplus b_2 \oplus (a_1 \cdot a_2) \oplus b_3 \oplus ((a_1 \diamond a_2) \cdot a_3)) \\[0.5em]
    & = & \{- \cdot a_3 \text{ is a homomorphism; } \oplus \text{ is commutative}\} \\[0.5em]
    & & (a_1 \diamond a_2 \diamond a_3,\ b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3))
    \end{array}

    and

    \begin{array}{rcl}
    & & (a_1,b_1) \otimes ((a_2,b_2) \otimes (a_3,b_3)) \\[0.5em]
    & = & \{\text{expand definition of } \otimes\} \\[0.5em]
    & & (a_1,b_1) \otimes (a_2 \diamond a_3,\ b_2 \oplus b_3 \oplus (a_2 \cdot a_3)) \\[0.5em]
    & = & \{\text{expand definition of } \otimes \text{ again}\} \\[0.5em]
    & & (a_1 \diamond a_2 \diamond a_3,\ b_1 \oplus (b_2 \oplus b_3 \oplus (a_2 \cdot a_3)) \oplus (a_1 \cdot (a_2 \diamond a_3))) \\[0.5em]
    & = & \{a_1 \cdot - \text{ is a homomorphism; } \oplus \text{ is commutative}\} \\[0.5em]
    & & (a_1 \diamond a_2 \diamond a_3,\ b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3))
    \end{array}

    In a formal proof one would also have to explicitly note uses of the associativity of \diamond and \oplus, but I didn’t want to be that pedantic here.

In addition, if A is a commutative monoid and the spark operation - \cdot - commutes, then the resulting monoid (A \times B, \otimes) will be commutative as well.

The proof of associativity gives us a bit of insight into what is going on here. Notice that when reducing (a_1,b_1) \otimes (a_2,b_2) \otimes (a_3,b_3), we end up with all possible sparks between pairs of a’s, i.e. (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3), and one can prove that this holds more generally. In particular, if we start with a list of A values:

[a_1, a_2, \dots, a_n],

then inject them all into A \times B by pairing them with \varepsilon_B:

[(a_1, \varepsilon_B), (a_2, \varepsilon_B), \dots, (a_n, \varepsilon_B)],

and finally fold this list with \otimes, the second element of the resulting pair is

\displaystyle \bigoplus_{i < j} (a_i \cdot a_j),

that is, the combination (via the monoid on B) of the sparks between all possible pairings of the a_i. Of course there are O(n^2) such pairings: the point is that whereas computing this via a straightforward fold over the list may well take O(n^2) time, by using a balanced fold (i.e. splitting the list in half recursively and then combining from the leaves up) it may be possible to compute it in O(n \log n) time instead (at least, this is the case for the example I have in mind!).

Please leave a comment if you can think of any instances of this pattern, or if you have seen this pattern discussed elsewhere. In a future post I will write about the instance I had in mind when coming up with this generalized formulation.


New ICFP functional pearl on subtracting bijections

Kenny Foner and I have a paper accepted to ICFP on subtracting bijections. Here’s the basic problem: suppose you have a bijection h between two sum types, h : A + B \leftrightarrow A' + B', and another bijection g : B \leftrightarrow B'. Of course A and A' must have the same size, but can you construct an actual bijection f : A \leftrightarrow A'?

This problem and its solution have been well-studied in the context of combinatorics; we were wondering what additional insight could be gained from a higher-level functional approach. You can get a preprint here.
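
To give a taste of the classic combinatorial solution: represent each bijection as a pair of mutually inverse functions, and compute f by repeatedly applying h, “bouncing” back through g whenever we land in B'. Here is a rough Haskell sketch of that ping-pong construction (termination depends on the types being finite):

type Bij a b = (a -> b, b -> a)   -- a bijection as a pair of inverse functions

subtractBij :: Bij (Either a b) (Either a' b') -> Bij b b' -> Bij a a'
subtractBij (h, hInv) (g, gInv) = (fwd, bwd)
  where
    fwd a = ping (h (Left a))
      where
        ping (Left  a') = a'                          -- landed in A': done
        ping (Right b') = ping (h (Right (gInv b')))  -- bounce back through g
    bwd a' = pong (hInv (Left a'))
      where
        pong (Left  a) = a
        pong (Right b) = pong (hInv (Right (g b)))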


Ertugrul Söylemez: 1985-2018

I do not know many details, but IRC user mupf joined the #haskell-ops channel yesterday to tell us that Ertugrul Söylemez died unexpectedly on May 12. This is a great loss for the Haskell community. Going variously by mm_freak, ertes, ertesx, or ertes-w, Ertugrul had been very active on the #haskell IRC channel for almost eleven years, since June 2007, and he will be remembered as a knowledgeable, generous, and patient teacher. In addition to teaching on IRC, he wrote several tutorials, such as this one on foldr and this one on monoids. Ertugrul was probably best known in the Haskell community for his netwire FRP package, which he had been developing since 2011, and more recently for its successor package, wires. I think it is no exaggeration to say that his work on and insights into FRP helped shape a whole generation of approaches.

He will be greatly missed. I will leave the comments open for anyone to share memories of Ertugrul, his writing, or his code.


Future of Coding podcast interview

I recently did an interview with my friend and former student Steve Krouse for his Future of Coding podcast. I think I still agree with almost everything I said, and I didn’t say anything too embarrassing, so if you want to listen to it I won’t stop you! We chatted about teaching and learning, functional programming and type systems, how to read academic papers, and a bunch of other things. You can listen to it here.
