While at ICFP in St. Louis this past week, I discovered an interesting construction on monoids that no one I talked to (including Kenny Foner and Edward Kmett) seemed to have heard of or thought about before. In this post I’ll present the abstract construction itself, and in another post I’ll explain the particular context in which I first came up with it. (Normally I would put the examples first, before explaining the idea in general; but I only know of one example so far, and I’m curious to see if anyone will come up with more. I don’t want to bias people’s thinking too much with my one example!)
The bigger context here is thinking about different ways of putting a monoid structure on a product type $A \times B$, assuming $A$ and $B$ are themselves monoids. Previously I knew of two ways to do this. The obvious way is to just have the two sides operate in parallel, without interacting at all: $(a_1, b_1) \diamond (a_2, b_2) = (a_1 \diamond a_2, b_1 \diamond b_2)$, and so on. Alternatively, if $A$ acts on $B$, we can form the semidirect product.
But here is a third way. Suppose $(A, \diamond)$ is a monoid, and $(B, \oplus)$ is a commutative monoid. To connect them, we also suppose there is another binary operation $- \cdot - : A \times A \to B$, which I will pronounce “spark”. The way I like to think of it is that two values of type $A$, in addition to combining to produce another value of type $A$ via the monoid operation, also produce a “spark” of type $B$. That is, values of type $B$ somehow capture information about the interaction between pairs of $A$ values.
Sparking needs to interact sensibly with the monoid structures on $A$ and $B$: in particular, fixing either argument must result in a monoid homomorphism from $A$ to $B$. That is, for any choice of $a \in A$, we have $\varepsilon_A \cdot a = \varepsilon_B$ (i.e. $\varepsilon_A$ is boring and can never produce a nontrivial spark with anything else), and $(a_1 \diamond a_2) \cdot a = (a_1 \cdot a) \oplus (a_2 \cdot a)$ (i.e. sparking commutes with the monoid operations). Similar laws should hold if we fix the second argument of $- \cdot -$ instead of the first.
Given all of this, we can now define a monoid on $A \times B$ as follows:

$(a_1, b_1) \otimes (a_2, b_2) = (a_1 \diamond a_2,\; b_1 \oplus b_2 \oplus (a_1 \cdot a_2))$
That is, we combine the $a$ values normally, and then we combine the $b$ values together with the “spark” of type $B$ produced by the two $a$ values.
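A small executable sketch may help; the following Python rendering (the function names and the toy string/length instance are my own, not from the post) implements the combining operation just described:

```python
# A sketch of the spark construction: a "spark monoid" is given by the monoid
# operation on A, the commutative monoid operation on B, and a spark A x A -> B.

def spark_combine(diamond, oplus, spark):
    """Build the combined operation on pairs (a, b)."""
    def otimes(p, q):
        (a1, b1), (a2, b2) = p, q
        # Combine the A parts normally; combine the B parts together with
        # the spark produced by the two A parts.
        return (diamond(a1, a2), oplus(oplus(b1, b2), spark(a1, a2)))
    return otimes

# Toy instance (my own choice): A = strings under concatenation, B = integers
# under addition, spark(a1, a2) = len(a1) * len(a2).  Fixing either argument
# of this spark gives a monoid homomorphism from (str, ++) to (int, +).
otimes = spark_combine(lambda x, y: x + y,
                       lambda m, n: m + n,
                       lambda a1, a2: len(a1) * len(a2))

print(otimes(("ab", 0), ("xyz", 0)))  # ('abxyz', 6)
```

The pair `("", 0)` is the identity for this instance, and associativity can be checked directly on examples.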
Let’s see that this does indeed define a valid monoid:

The identity is $(\varepsilon_A, \varepsilon_B)$, since

$(\varepsilon_A, \varepsilon_B) \otimes (a, b) = (\varepsilon_A \diamond a,\; \varepsilon_B \oplus b \oplus (\varepsilon_A \cdot a)) = (a,\; \varepsilon_B \oplus b \oplus \varepsilon_B) = (a, b).$

Notice how we used the identity laws for $A$ (once) and $B$ (twice), as well as the law that says $\varepsilon_A \cdot a = \varepsilon_B$. The proof that $(\varepsilon_A, \varepsilon_B)$ is a right identity for $\otimes$ is similar.

To see that the combining operation is associative, we can reason as follows:

$\begin{array}{rcl} ((a_1,b_1) \otimes (a_2,b_2)) \otimes (a_3,b_3) & = & (a_1 \diamond a_2,\; b_1 \oplus b_2 \oplus (a_1 \cdot a_2)) \otimes (a_3,b_3) \\ & = & ((a_1 \diamond a_2) \diamond a_3,\; b_1 \oplus b_2 \oplus (a_1 \cdot a_2) \oplus b_3 \oplus ((a_1 \diamond a_2) \cdot a_3)) \\ & = & (a_1 \diamond a_2 \diamond a_3,\; b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)) \end{array}$

and

$\begin{array}{rcl} (a_1,b_1) \otimes ((a_2,b_2) \otimes (a_3,b_3)) & = & (a_1,b_1) \otimes (a_2 \diamond a_3,\; b_2 \oplus b_3 \oplus (a_2 \cdot a_3)) \\ & = & (a_1 \diamond (a_2 \diamond a_3),\; b_1 \oplus b_2 \oplus b_3 \oplus (a_2 \cdot a_3) \oplus (a_1 \cdot (a_2 \diamond a_3))) \\ & = & (a_1 \diamond a_2 \diamond a_3,\; b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)) \end{array}$

In the last step of each derivation we used a homomorphism law for $- \cdot -$, together with the commutativity of $\oplus$ to rearrange the terms.
In a formal proof one would also have to explicitly note uses of associativity of $\diamond$ and $\oplus$, but I didn’t want to be that pedantic here.
In addition, if $A$ is a commutative monoid and the spark operation $- \cdot -$ is commutative, then the resulting monoid on $A \times B$ will be commutative as well.
The proof of associativity gives us a bit of insight into what is going on here. Notice that when reducing $(a_1,b_1) \otimes (a_2,b_2) \otimes (a_3,b_3)$, we end up with all possible sparks between pairs of $a$’s, i.e. $(a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)$, and one can prove that this holds more generally. In particular, if we start with a list of $A$ values:

$[a_1, a_2, \dots, a_n],$

then inject them all into $A \times B$ by pairing them with $\varepsilon_B$:

$[(a_1, \varepsilon_B), (a_2, \varepsilon_B), \dots, (a_n, \varepsilon_B)],$

and finally fold this list with $\otimes$, the second element of the resulting pair is

$\bigoplus_{i < j} (a_i \cdot a_j),$

that is, the combination (via the monoid on $B$) of the sparks between all possible pairings of the $a_i$. Of course there are $\binom{n}{2} = O(n^2)$ such pairings: the point is that whereas computing this via a straightforward fold over the list may well take $O(n^2)$ time, by using a balanced fold (i.e. splitting the list in half recursively and then combining from the leaves up) it may be possible to compute it in $O(n \log n)$ time instead (at least, this is the case for the example I have in mind!).
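Here is a quick Python illustration of this claim (the instance is a placeholder of mine: A = strings under concatenation, B = integers under addition, with the spark of two strings being the product of their lengths); both a straightforward left fold and a balanced fold accumulate all n(n-1)/2 pairwise sparks:

```python
from functools import reduce

# Placeholder instance: A = strings/concatenation, B = ints/addition,
# spark(a1, a2) = len(a1) * len(a2).
def otimes(p, q):
    (a1, b1), (a2, b2) = p, q
    return (a1 + a2, b1 + b2 + len(a1) * len(a2))

def balanced_fold(f, xs, e):
    # Split in half recursively and combine from the leaves up.  With a
    # cheap "summary" of each half (here: its total length, implicit in the
    # concatenated string) this shape is where an O(n log n) algorithm can
    # come from.
    if not xs:
        return e
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return f(balanced_fold(f, xs[:mid], e), balanced_fold(f, xs[mid:], e))

words = ["a", "bb", "ccc", "dddd"]
injected = [(w, 0) for w in words]   # pair each value with the identity of B

linear = reduce(otimes, injected, ("", 0))
balanced = balanced_fold(otimes, injected, ("", 0))
all_pairs = sum(len(words[i]) * len(words[j])
                for i in range(len(words)) for j in range(i + 1, len(words)))

print(linear[1], balanced[1], all_pairs)  # 35 35 35
```

This toy instance does not actually save any time, since each spark is computable in constant time anyway; the asymptotic win depends on the balanced fold being able to summarize each half cheaply.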
Please leave a comment if you can think of any instances of this pattern, or if you have seen this pattern discussed elsewhere. In a future post I will write about the instance I had in mind when coming up with this generalized formulation.
Suppose U is some set. Let op1 be intersection and op2 be union on subsets of this set. Let the spark be given by De Morgan’s law, i.e., spark(a, b) = ~(a \cap b) = ~a \cup ~b. Then the spark respects the two homomorphism requirements.
Yes, I guess it does! It doesn’t seem all that useful to me (though I would be happy to be wrong). If we start with a bunch of sets this would allow you to compute their intersection along with the union of their complements—which is just the complement of their intersection. So rather than using this monoid you should just compute their intersection by itself, and then complement it at the end.
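For what it’s worth, this exchange is easy to check with a tiny finite universe (a throwaway encoding of mine); folding a few sets with the proposed operation does yield the intersection paired with the complement of the intersection:

```python
from functools import reduce

# Toy universe (my own choice): A = subsets of U under intersection,
# B = subsets of U under union, spark(a1, a2) = complement of (a1 & a2),
# as proposed in the comment above.
U = frozenset(range(6))

def otimes(p, q):
    (a1, b1), (a2, b2) = p, q
    return (a1 & a2, b1 | b2 | (U - (a1 & a2)))

sets = [frozenset({0, 1, 2, 3}), frozenset({1, 2, 3, 4}), frozenset({2, 3, 5})]
inter, sparks = reduce(otimes, [(s, frozenset()) for s in sets])

print(sorted(inter))        # the intersection: [2, 3]
print(sparks == U - inter)  # True: just the complement of the intersection
```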
Neat operation! It reminds me a bit of a nonstandard monoid on the real numbers:
op(x, y) = c * x * y + x + y
where c is any constant. I think this can be sort of interpreted as a spark, taking the product of two monoids on the reals—one with a multiplication operation and the other with an additive operation. I think I saw this operation in the context of composition theorems for certain statistical divergences, can send more information if you’re curious.
Oh, very interesting monoid! I can’t remember if I’ve seen that before. The way you end up with sums of all possible products does seem similar to the way sparks combine. However, I can’t figure out how to get the laws to work out if we try taking this operation as the spark operation. It seems most obvious to try to make it a monoid homomorphism from + to itself, but this doesn’t work, since op(x, y + z) = cxy + cxz + x + y + z, but op(x,y) + op(x,z) = cxy + cxz + 2x + y + z — close, but off by x.
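The arithmetic in this exchange is easy to verify numerically (a throwaway script of mine; note that op itself is associative for any c, being conjugate to multiplication under t -> c*t + 1):

```python
# op(x, y) = c*x*y + x + y, as in the comment above; c arbitrary.
c = 3

def op(x, y):
    return c * x * y + x + y

x, y, z = 2, 5, 7

# op is associative: under t -> c*t + 1 it is conjugate to multiplication,
# since c*op(x, y) + 1 == (c*x + 1) * (c*y + 1).
print(op(op(x, y), z) == op(x, op(y, z)))             # True
print(c * op(x, y) + 1 == (c * x + 1) * (c * y + 1))  # True

# ...but fixing one argument is not a monoid homomorphism from + to +:
print(op(x, y + z))         # 86
print(op(x, y) + op(x, z))  # 88, off by x = 2
```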
I’ve seen a variant of this operation before as the “multiplicative group law” (see https://en.wikipedia.org/wiki/Formal_group_law), but only for $c=1$.
Indeed, the homomorphism law fails here, as you verified.
The special case $c = 1$ can easily be shown to be a monoid isomorphic to the multiplication monoid of whatever ring you take the original monoid to live in, via the identity $xy + x + y = (x+1)(y+1) - 1$.
This “sparky” product does tickle my brain, but all I have are vague associations at the moment. Will see if I can come up with something before your next post.
Hey Brent! It’s a pity I couldn’t come to ICFP this year, but if I were there, I would give you an example of this construction! But, perhaps, this is exactly the example you have in mind? ;) In this case I won’t spoil it!
By the way, I like your generalisation via sparks a lot; it makes a lot of sense to me after having looked at my example from various angles. (Also, in some of the recent work I did on my example, a spark doesn’t come alone; it brings a whole family of sparks with it!)
Hi Andrey! Yeah, too bad, it would have been fun to see you (though I did very much enjoy SPJ’s presentation of your paper). The part about “bringing a whole family of sparks” makes me think that your example is NOT the same as the example I have in mind…
So, in the published version of my example we have A = X and B = X*X. But now consider adding a set of labels L: A = X and B = X*L*X. Now, there is a spark associated with every element in L!
Sorry to speak in riddles :) To make it more concrete, here is the family of monoids generated by the sparks: https://github.com/snowleopard/alga/blob/master/src/Algebra/Graph/Labelled.hs#L112
Ah, I think I see, very interesting!
Upon reflection, the labels seem not to be really necessary here (although they are interesting). Even without the labels, what you have is a sort of free version of what I’ve described. That is, let A be the free commutative idempotent monoid on an underlying set U, and let B be the free commutative idempotent monoid on ordered pairs U×U; then the spark operation is the one induced by pairing, a1 · a2 = (a1, a2). That is, it takes two subsets of U and forms their Cartesian product. I think what I’ve described is slightly more general since it doesn’t require anything to be idempotent (in the example I had in mind, neither monoid is idempotent).
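Concretely, this free version can be sketched as follows (my own toy encoding, with finite sets standing in for the free idempotent monoids); folding a list of singletons accumulates an ordered pair for every pairing, much like the edge set of a graph:

```python
from functools import reduce

# A = finite sets under union (free commutative idempotent monoid on U),
# B = sets of ordered pairs under union, spark = Cartesian product.
def otimes(p, q):
    (a1, b1), (a2, b2) = p, q
    return (a1 | a2, b1 | b2 | {(u, v) for u in a1 for v in a2})

singletons = [({u}, set()) for u in [1, 2, 3]]
verts, edges = reduce(otimes, singletons)
print(verts, sorted(edges))  # {1, 2, 3} [(1, 2), (1, 3), (2, 3)]
```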
Yes, I think you are right: the case `Graph () a` may be interesting in its own right :)
Good point about idempotence. In my formulation I require it because of the desire for some compatibility between the spark-based monoid (I’m going to denote it with *) and the direct monoid product (I’ll denote it with +). More specifically, I’d like to have distributivity:
x * (y + z) = x * y + x * z
But this forces idempotence! Indeed, let’s take y = z = (1_A, 1_B). Then:

x
= x * (1_A, 1_B)
= x * ((1_A, 1_B) + (1_A, 1_B))
= x * (1_A, 1_B) + x * (1_A, 1_B)
= x + x
So, if we’d like the spark monoid to interact nicely with the direct monoid, we need to require idempotence (on both of the underlying monoids over A and B).
But if we are only interested in the spark monoid itself, then we can drop the idempotence requirement and then `Graph () a` may indeed be the free version of your construction.
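For a quick check of this claim: the distributivity law does hold in the set-based (idempotent) instance. Here is a toy verification of mine, with + the direct product and * the spark-based product whose sparks are Cartesian products:

```python
# A = finite sets under union, B = sets of ordered pairs under union,
# spark = Cartesian product; both underlying monoids are idempotent.
def plus(p, q):   # the direct monoid product
    return (p[0] | q[0], p[1] | q[1])

def times(p, q):  # the spark-based monoid
    (a1, b1), (a2, b2) = p, q
    return (a1 | a2, b1 | b2 | {(u, v) for u in a1 for v in a2})

x = (frozenset({1}), frozenset())
y = (frozenset({2}), frozenset())
z = (frozenset({3}), frozenset())

lhs = times(x, plus(y, z))
rhs = plus(times(x, y), times(x, z))
print(lhs == rhs)  # True: x * (y + z) == x*y + x*z
```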
> in the example I had in mind, neither monoid is idempotent
Aha, so your example is not the algebra of graphs — cool! I’m looking forward to reading your next blog post. Oh well, as always :)
> The proof that $(\varepsilon_A, \varepsilon_B)$ is a right identity for $\otimes$ is similar.

What am I missing? Doesn’t that require $a \cdot \varepsilon_A = \varepsilon_B$, which is not one of your laws?
That is one of the laws.
Ah I see, you also wrote
> Similar laws should hold if we fix the second argument of $- \cdot -$ instead of the first.
That’s what I was missing! Sorry for the noise
No worries! =)
Pingback: Counting inversions with monoidal sparks | blog :: Brent -> [String]
This is very close to the construction of the Heisenberg group, for which $A$ is a symplectic space over an odd-characteristic field $B$, and the ‘spark’ is the symplectic pairing. In the case where $A \cong V \oplus V^*$, where $V$ is a vector space over $B$, with its natural symplectic pairing, one can realise the Heisenberg group as a group of upper unitriangular $(2n+1) \times (2n+1)$ matrices (where $n$ is the dimension of $V$). One can imagine a generalisation of this construction involving a matrix of (commutative, to be safe) monoids $M_{ij}$ and, for each tuple $(i, j, k)$, a ‘spark’ $M_{ij} \otimes M_{jk} \to M_{ik}$.
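Here is a minimal numeric check of this correspondence (my own encoding, over the integers rather than an odd-characteristic field, and with the bilinear spark (x1, y1) · (x2, y2) = x1*y2 standing in for the full symplectic pairing): the pair operation agrees with multiplication of 3×3 upper unitriangular matrices.

```python
# n = 1: A = pairs (x, y) under addition, B = scalars under addition,
# spark((x1, y1), (x2, y2)) = x1 * y2 (bilinear, so the homomorphism laws hold).
def mat(x, y, z):
    # [[1, x, z],
    #  [0, 1, y],
    #  [0, 0, 1]]
    return [[1, x, z], [0, 1, y], [0, 0, 1]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def otimes(p, q):
    (x1, y1, z1), (x2, y2, z2) = p, q
    return (x1 + x2, y1 + y2, z1 + z2 + x1 * y2)  # spark = x1 * y2

p, q = (2, 3, 1), (5, 7, 4)
print(otimes(p, q))                                              # (7, 10, 19)
print(matmul(mat(2, 3, 1), mat(5, 7, 4)) == mat(*otimes(p, q)))  # True
```

Since this spark is not commutative, the resulting monoid (in fact, group) is noncommutative, as the Heisenberg group should be.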
It is almost like this might be a useful building block for counting the number of inversions in an array or something. ;)
What a fascinating idea. ;)
Perhaps another example: Let A be the two-dimensional vector space over the reals with vector addition and let B be the real numbers with addition. Let “spark” be the dot product. What does your proposed combining operation mean geometrically?
Oh, interesting example! Off the top of my head I can’t seem to make any sense of it geometrically though.