While at ICFP in St. Louis this past week, I discovered an interesting construction on monoids that no one I talked to (including Kenny Foner and Edward Kmett) seemed to have heard of or thought about before. In this post I’ll present the abstract construction itself, and in another post I’ll explain the particular context in which I first came up with it. (Normally I would put the examples first, before explaining the idea in general; but I only know of one example so far, and I’m curious to see if anyone will come up with more. I don’t want to bias people’s thinking too much with my one example!)
The bigger context here is thinking about different ways of putting a monoid structure on a product type $A \times B$, assuming $A$ and $B$ are themselves monoids. Previously I knew of two ways to do this. The obvious way is to just have the two sides operate in parallel, without interacting at all: $(a_1, b_1) \diamond (a_2, b_2) = (a_1 \diamond a_2, b_1 \diamond b_2)$, and so on. Alternatively, if $A$ acts on $B$, we can form the semidirect product.
But here is a third way. Suppose $(A, \diamond)$ is a monoid, and $(B, \oplus)$ is a commutative monoid. To connect them, we also suppose there is another binary operation $\cdot : A \times A \to B$, which I will pronounce “spark”. The way I like to think of it is that two values of type $A$, in addition to combining to produce another $A$ via the monoid operation, also produce a “spark” of type $B$. That is, values of type $B$ somehow capture information about the interaction between pairs of $A$ values.
Sparking needs to interact sensibly with the monoid structures on $A$ and $B$: in particular, fixing either argument must result in a monoid homomorphism from $A$ to $B$. That is, for any choice of $a \in A$, we have $a \cdot \varepsilon_A = \varepsilon_B$ (i.e. $\varepsilon_A$ is boring and can never produce a nontrivial spark with anything else), and $a \cdot (a_1 \diamond a_2) = (a \cdot a_1) \oplus (a \cdot a_2)$ (i.e. sparking commutes with the monoid operations). Similar laws should hold if we fix the second argument of $\cdot$ instead of the first.
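To make the laws concrete, here is a quick check in Haskell using a spark I am making up purely for illustration (deliberately *not* the example alluded to above): take both $A$ and $B$ to be integers under addition, with multiplication as the spark. The two laws then reduce to $a \cdot 0 = 0$ and distributivity.

```haskell
-- Illustrative only: A = B = (Int, +, 0), with multiplication as the spark.
sparkMul :: Int -> Int -> Int
sparkMul = (*)

-- Fixing the first argument must give a monoid homomorphism from A to B.
-- The identity is sent to the identity:  a `sparkMul` 0 == 0.
law1 :: Int -> Bool
law1 a = sparkMul a 0 == 0

-- The monoid operation is preserved:
--   a `sparkMul` (a1 + a2) == (a `sparkMul` a1) + (a `sparkMul` a2),
-- which for this spark is just distributivity of (*) over (+).
law2 :: Int -> Int -> Int -> Bool
law2 a a1 a2 = sparkMul a (a1 + a2) == sparkMul a a1 + sparkMul a a2
```

The symmetric checks, fixing the second argument instead, go through the same way here since multiplication happens to be commutative.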
Given all of this, we can now define a monoid on $A \times B$ as follows:

$(a_1, b_1) \otimes (a_2, b_2) = (a_1 \diamond a_2,\ b_1 \oplus b_2 \oplus (a_1 \cdot a_2))$
That is, we combine the $A$ values normally, and then we combine the $B$ values together with the “spark” of type $B$ produced by the two $A$ values.
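Here is one way the construction might be transcribed into Haskell. The class and names (`Spark`, `Sparked`) are my own invention, as is the toy instance at the end (integers under addition, with multiplication as the spark), which I include only so the code can actually be run:

```haskell
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}

import Data.Monoid (Sum (..))

-- | A spark operation from a monoid 'a' into a commutative monoid 'b'.
-- It should be a monoid homomorphism in each argument separately.
class (Monoid a, Monoid b) => Spark a b where
  spark :: a -> a -> b

-- | The product type, combined as described above: the first components
-- combine normally, and the second components combine together with the
-- spark of the two first components.
newtype Sparked a b = Sparked { unSparked :: (a, b) }
  deriving (Eq, Show)

instance Spark a b => Semigroup (Sparked a b) where
  Sparked (a1, b1) <> Sparked (a2, b2) =
    Sparked (a1 <> a2, b1 <> b2 <> spark a1 a2)

instance Spark a b => Monoid (Sparked a b) where
  mempty = Sparked (mempty, mempty)

-- A toy instance, purely to have something runnable: the laws hold
-- because multiplication distributes over addition and x * 0 == 0.
instance Spark (Sum Int) (Sum Int) where
  spark (Sum x) (Sum y) = Sum (x * y)
```

For example, injecting `[1,2,3]` as `Sparked (Sum a, mempty)` and folding the results together yields `(Sum 6, Sum 11)`: the sum of the inputs alongside the sum of all pairwise products, $1\cdot2 + 1\cdot3 + 2\cdot3 = 11$.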
Let’s see that this does indeed define a valid monoid:
The identity is $(\varepsilon_A, \varepsilon_B)$, since

$(\varepsilon_A, \varepsilon_B) \otimes (a, b) = (\varepsilon_A \diamond a,\ \varepsilon_B \oplus b \oplus (\varepsilon_A \cdot a)) = (a,\ \varepsilon_B \oplus b \oplus \varepsilon_B) = (a, b)$
Notice how we used the identity laws for $A$ (once) and $B$ (twice), as well as the law that says $\varepsilon_A \cdot a = \varepsilon_B$. The proof that $(\varepsilon_A, \varepsilon_B)$ is a right identity for $\otimes$ is similar.
To see that the combining operation is associative, we can reason as follows:

$\begin{array}{rcl} ((a_1, b_1) \otimes (a_2, b_2)) \otimes (a_3, b_3) &=& (a_1 \diamond a_2,\ b_1 \oplus b_2 \oplus (a_1 \cdot a_2)) \otimes (a_3, b_3) \\ &=& ((a_1 \diamond a_2) \diamond a_3,\ (b_1 \oplus b_2 \oplus (a_1 \cdot a_2)) \oplus b_3 \oplus ((a_1 \diamond a_2) \cdot a_3)) \\ &=& (a_1 \diamond a_2 \diamond a_3,\ b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)) \end{array}$

(the last step uses the homomorphism law to expand $(a_1 \diamond a_2) \cdot a_3$), and the right-nested product $(a_1, b_1) \otimes ((a_2, b_2) \otimes (a_3, b_3))$ reduces to the same thing by a symmetric computation.
In a formal proof one would also have to explicitly note uses of the associativity of $\diamond$ and $\oplus$, but I didn’t want to be that pedantic here.
In addition, if $A$ is a commutative monoid and the spark operation commutes (that is, $a_1 \cdot a_2 = a_2 \cdot a_1$), then the resulting monoid on $A \times B$ will be commutative as well.
The proof of associativity gives us a bit of insight into what is going on here. Notice that when reducing $(a_1, b_1) \otimes (a_2, b_2) \otimes (a_3, b_3)$, we end up with all possible sparks between pairs of $a_i$’s, i.e. $(a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)$, and one can prove that this holds more generally. In particular, if we start with a list of $A$ values:

$[a_1, a_2, \dots, a_n]$
then inject them all into $A \times B$ by pairing them with $\varepsilon_B$:

$[(a_1, \varepsilon_B), (a_2, \varepsilon_B), \dots, (a_n, \varepsilon_B)]$
and finally fold this list with $\otimes$, the second element of the resulting pair is

$\displaystyle \bigoplus_{i < j} (a_i \cdot a_j),$
that is, the combination (via the monoid on $B$) of the sparks between all possible pairings of the $a_i$. Of course there are $n(n-1)/2$ such pairings: the point is that whereas computing this via a straightforward fold over the list may well take $O(n^2)$ time, by using a balanced fold (i.e. splitting the list in half recursively and then combining from the leaves up) it may be possible to compute it in $O(n \log n)$ time instead (at least, this is the case for the example I have in mind!).
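As a sketch of the balanced-fold idea, here is a hypothetical concrete version in Haskell, again using integers under addition with multiplication as the spark (my own stand-in, not the example I have in mind). Since the operation is associative, the balanced fold and a naive quadratic computation of the pairwise sparks must agree:

```haskell
-- Concrete stand-in: pairs (a, b) of Ints, where the a's combine by (+),
-- the spark is (*), and the b's accumulate sparks.
op :: (Int, Int) -> (Int, Int) -> (Int, Int)
op (a1, b1) (a2, b2) = (a1 + a2, b1 + b2 + a1 * a2)

-- Balanced fold: split the list in half recursively and combine the
-- results, giving an O(log n)-depth tree of calls to 'op'.
foldBalanced :: [(Int, Int)] -> (Int, Int)
foldBalanced []  = (0, 0)            -- the identity element
foldBalanced [x] = x
foldBalanced xs  = foldBalanced l `op` foldBalanced r
  where (l, r) = splitAt (length xs `div` 2) xs

-- Direct O(n^2) computation of the combination of all pairwise sparks,
-- for comparison.
pairwise :: [Int] -> Int
pairwise as = sum [x * y | (i, x) <- zip [0 ..] as
                         , (j, y) <- zip [0 ..] as
                         , i < (j :: Int)]
```

For this particular spark the balanced fold is no asymptotic win, since each call to `op` is constant time either way; the potential $O(n \log n)$ speedup is for sparks where combining two already-folded halves is cheaper than sparking all the cross pairs individually.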
Please leave a comment if you can think of any instances of this pattern, or if you have seen this pattern discussed elsewhere. In a future post I will write about the instance I had in mind when coming up with this generalized formulation.