I will be giving a talk on competitive programming in Haskell tomorrow, September 10, at Haskell Love. You should attend! It’s free!

In my last competitive programming post, I challenged you to solve Purple Rain. We are presented with a linear sequence of cells, each colored either red or blue, and we are supposed to find the (contiguous) segment of cells with the maximal absolute difference between the number of red and blue. For example, below is shown one of the sample inputs, with the solution highlighted: the segment from cell 3 to cell 7 (the cells are numbered from 1) has four red cells compared to only one blue, for an absolute difference of three. You can verify that no other segment does better.

## Transforming the problem

The obvious way to do this is to generate a list of all segments, compute the absolute difference between the number of red and blue cells for each, and take the maximum. However, that approach is doomed to exceed the time limit in any programming language: it would take $O(n^3)$ time ( $O(n^2)$ possible segments times $O(n)$ to sum each one), and the problem states that $n$ can be up to $10^5$. With $10^8$ operations per second as a good rule of thumb, $O(n^3)$ with $n = 10^5$ is clearly too slow. (In fact, any time you see an input size of $10^5$, it is a dead giveaway that you are expected to find an $O(n)$ or $O(n \lg n)$ solution. $10^5$ is big enough to make an $O(n^2)$ solution prohibitively slow, but not so big that I/O itself becomes the bottleneck.)

The first insight is that we can transform this into the classic problem of finding the maximum sum subarray (also known as the maximum segment sum; either way I will abbreviate it as MSS) in two steps: first, turn each red cell into a 1, and each blue into -1. The sum of a segment then tells us how many more red than blue cells there are. Now, we actually want the biggest absolute difference between red and blue; but if we have an algorithm to find the MSS we can just run it twice: once to find the maximum excess of red over blue, and again with 1 and -1 flipped to find the maximum excess of blue over red.
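To make the reduction concrete, here is a small standalone sketch (my own, not from the solution below): it pairs the 1/-1 encoding with a deliberately naive, cubic-time segment-sum function, just to pin down the specification that the fast algorithm must match. The names `naiveMSS` and `bestAbsDiff` are hypothetical.

```haskell
-- Naive O(n^3) reference implementation of maximum segment sum:
-- enumerate all O(n^2) segments and sum each in O(n) time.
-- The empty segment (sum 0) is allowed.
naiveMSS :: [Int] -> Int
naiveMSS xs = maximum (0 : [sum (take len (drop i xs)) | i <- [0 .. n], len <- [1 .. n - i]])
  where
    n = length xs

-- The reduction: encode R as 1 and B as -1, then take the better of
-- "max excess of red" and "max excess of blue" (run on the negated list).
bestAbsDiff :: String -> Int
bestAbsDiff cells = max (naiveMSS xs) (naiveMSS (map negate xs))
  where
    xs = map (\c -> if c == 'R' then 1 else -1) cells
```

Any faster algorithm we derive should agree with `bestAbsDiff` on small inputs.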

The MSS problem has a long history in the functional programming community, being one of the flagship problems to demonstrate the techniques of program derivation in the style of the Bird-Meertens Formalism, aka Squiggol and The Algebra of Programming. It is possible to start out with a naive-but-obviously-correct $O(n^3)$ implementation, and do a series of equational transformations to turn it into an efficient $O(n)$ algorithm! If you’ve never seen that kind of thing before, I highly recommend checking it out; the Wikipedia page on the Bird-Meertens Formalism, linked above, is a good place to start. Certainly getting good at such derivations can be a handy skill when doing competitive programming in Haskell. But in any case, today I want to approach the problem from a different point of view, namely, coming up with a good functional equivalent to an existing imperative algorithm.

Kadane’s algorithm, first proposed by Jay Kadane sometime in the late 1970s, is a linear-time algorithm for solving the MSS problem. It is actually quite simple to implement (the tricky part is understanding why it works!).

The idea is to loop through an array while keeping track of two things: a current value `cur`, and a `best` value. The `best` value is just the greatest value `cur` has ever taken on, so keeping it updated is easy: on every loop, we compare `cur` to `best`, and save the value of `cur` into `best` if it is higher. To keep `cur` updated, we simply add each new array value to it—but if it ever falls below zero, we just reset `cur` to zero. Here is some Java code:

```java
public static int kadane(int[] a) {
    int best = 0, cur = 0;
    for (int i = 0; i < a.length; i++) {
        cur += a[i];
        if (cur < 0) cur = 0;
        if (cur > best) best = cur;
    }
    return best;
}
```

Again, it is not at all obvious why this works, though putting in the effort to understand a proof is well worth the time. That is not the purpose of this blog post, however, so I’ll leave you to read about it on your own!

In the imperative version, we iterate through a list, keep track of a current value, and also keep track of the best value we have seen so far. It is possible to translate this directly to Haskell: create a record with two fields, one for the current thing and one for the best thing, then iterate through the list with `foldl'`, doing the appropriate update at each step:

```haskell
data State s = State { curThing :: s, bestThing :: s }

-- Given a way to update the current s value with the next list
-- element of type a, update a State s value which keeps track of the
-- current s value as well as the best s value seen so far.
step :: Ord s => (s -> a -> s) -> (State s -> a -> State s)
step update (State cur best) a = State next (max best next)
  where
    next = update cur a

bestIntermediate :: Ord s => (s -> a -> s) -> s -> [a] -> s
bestIntermediate update init = bestThing . foldl' (step update) (State init init)
```

But there’s a much better way! Note that the `update` function has the right type to be used with `foldl'`. If we computed `foldl' update init` directly, however, we would get only the single `s` value at the very end, whereas our goal is to get the best out of all the intermediate values. No problem: a scan is just a fold that returns all the intermediate values instead of only the final one! So instead of all this complicated and quasi-imperative `State` stuff, we can just do a `scanl'` followed by taking the `maximum`:

```haskell
bestIntermediate :: Ord s => (s -> a -> s) -> s -> [a] -> s
bestIntermediate update init = maximum . scanl' update init
```
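As a quick sanity check of the fold/scan relationship (a toy example of my own): `scanl'` produces every intermediate accumulator, with `foldl'`'s answer as its last element.

```haskell
import Data.List (foldl', scanl')

-- foldl' keeps only the final accumulator.
foldExample :: Int
foldExample = foldl' (+) 0 [1, 2, 3]

-- scanl' keeps every intermediate accumulator, ending with foldl''s answer.
scanExample :: [Int]
scanExample = scanl' (+) 0 [1, 2, 3]
```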

Ah, much better! Using `bestIntermediate`, we can now translate Kadane’s algorithm as follows:

```haskell
kadane1 :: [Int] -> Int
kadane1 = bestIntermediate next 0
  where
    next s a = max 0 (s + a)
```
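Here is a small self-contained check of the translation (repeating `bestIntermediate` so the snippet runs on its own; the example list is mine). The running values for `[1,-2,3,4,-1]` are `0,1,0,3,7,6`, so the maximum segment sum is 7, coming from the segment containing the values 3 and 4.

```haskell
import Data.List (scanl')

-- Scan instead of fold, then take the best intermediate value.
bestIntermediate :: Ord s => (s -> a -> s) -> s -> [a] -> s
bestIntermediate update init = maximum . scanl' update init

-- Kadane's algorithm: add each element, resetting to 0 when we go negative.
kadane1 :: [Int] -> Int
kadane1 = bestIntermediate next 0
  where
    next s a = max 0 (s + a)
```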

Whenever I write down an algorithm like this in Haskell—especially if I have “translated” it from an existing, imperative algorithm—I like to figure out how I can generalize it as much as possible. What structure is assumed of the inputs that makes the algorithm work? Can we replace some concrete monomorphic types with polymorphic ones? What type class constraints are needed? Often these sorts of generalizations make for supposedly tricky competitive programming problems. For example, if you have only ever seen Dijkstra’s algorithm presented in terms of finding shortest paths by summing edge weights, it takes quite a bit of insight to realize that the algorithm really just finds the “best” path with respect to operations that play nicely with each other in certain ways. For example, if we use the operations `max` and `min` in place of `min` and `(+)`, Dijkstra’s algorithm finds the path with the maximum bottleneck. (This will probably end up being its own blog post at some point…)

Anyway, how can we generalize `kadane1`? The obvious starting point is that if we just let GHC infer a type for `kadane1`, we would get something more general:

```haskell
kadane2 :: (Num a, Ord a) => [a] -> a
kadane2 = bestIntermediate next 0
  where
    next s a = max 0 (s + a)
```

The only thing we do with the input list elements is add them and compare them; we also need `0` to have the same type as the input list elements. So the algorithm works for anything that has `Ord` and `Num` instances.

But wait—do we really need `Num` if all we are using is `0` and `+`? Really we are using the monoid of integers under addition. So we can generalize again, to any ordered monoid:

```haskell
kadane :: (Monoid a, Ord a) => [a] -> a
kadane = bestIntermediate next mempty
  where
    next s a = max mempty (s <> a)
```

In fact, if you study the proof of Kadane’s algorithm, you will see that this works just so long as the monoid operation interacts nicely with the ordering, that is, if $x < y$ implies $x \diamond z < y \diamond z$ and $z \diamond x < z \diamond y$ for all $z$ (this is what is usually meant by an “ordered monoid”).
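The law is easy to spot-check for the additive monoid we are actually using. Here is a small exhaustive check over a finite range (my own sketch; a real development would use something like QuickCheck instead):

```haskell
import Data.Semigroup (Sum (..))

-- Exhaustively check the ordered-monoid law for Sum Int on a small range:
-- whenever x < y, both x <> z < y <> z and z <> x < z <> y must hold.
orderedMonoidLawHolds :: Bool
orderedMonoidLawHolds =
  and [ x <> z < y <> z && z <> x < z <> y
      | x <- vals, y <- vals, z <- vals, x < y ]
  where
    vals = map Sum [-5 .. 5 :: Int]
```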

## Finding the best segment

So far, our code finds the best segment sum, but it doesn’t tell us which segment was best—and for this problem we are actually supposed to output the starting and ending indices of the best segment, not the maximum red-blue difference itself.

If I were doing this in Java, I would probably just add several more variables: one to record where the segment currently being considered starts (which gets reset to `i+1` when `cur` is reset to `0`), and two to record the start and end indices of the `best` segment so far. This gets kind of ugly. Conceptually, the values actually belong in triples representing a segment: the start and end index together with the sum. In Java, it would be too heavyweight to construct a `class` to store these three values together, so in practice I would just do it with a mess of individual variables as described above. Fortunately, in Haskell, this is very lightweight, and we should of course create a data type to represent a segment.

It’s also worth noting that in Haskell, we were naturally led to make a polymorphic `bestIntermediate` function, which will work just as well with a segment type as it does with `Int`. Only our `kadane` function itself will have to change. We will make a data type to represent segments, with an appropriate `Ord` instance to specify when one segment is better than another, and we will update the `next` helper function to update a segment instead of just updating a sum.

## The solution

So let’s get started! First, some `LANGUAGE` pragmas and imports we will need, and a basic solution framework.

```haskell
{-# LANGUAGE DerivingStrategies         #-}
{-# LANGUAGE DerivingVia                #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE LambdaCase                 #-}
{-# LANGUAGE ViewPatterns               #-}

import           Control.Arrow         ((>>>))
import           Data.ByteString.Char8 (ByteString)
import qualified Data.ByteString.Char8 as C
import           Data.List             (scanl')
import           Data.Semigroup

showB :: Show a => a -> C.ByteString
showB = show >>> C.pack

main = C.interact $ solve >>> format
```

Next, a data type to represent a segment, along with an `Ord` instance, and a function to format the answer for this problem. Note that we make all the fields of `Segment` strict (this makes a big difference in runtime—you should pretty much always use strict fields unless there is a really good reason not to). Notice also that the `Ord` instance is where we encode the problem’s specific instructions about how to break ties: “If there are multiple possible answers, print the one that has the Westernmost (smallest-numbered) starting section. If there are multiple answers with the same Westernmost starting section, print the one with the Westernmost ending section.” So we first compare `Segment`s by sum, then by left index, then by right index, being careful to reverse the order of comparison for the indices, since better for indices means smaller.

```haskell
-- The segment [l,r) (i.e. l inclusive, r exclusive), with its sum
data Segment a = I { l :: !Int, r :: !Int, s :: !a } deriving (Eq, Show)

instance Ord a => Ord (Segment a) where
  compare (I l1 r1 s1) (I l2 r2 s2)
    = compare s1 s2 <> compare l2 l1 <> compare r2 r1

format :: Segment a -> ByteString
format (I l r _) = C.unwords [showB l, showB (r-1)]
  -- (r-1) since r is exclusive but we are supposed to show
  -- the index of the last element
```

And now for Kadane’s algorithm itself. The `bestIntermediate` function is unchanged. `kadane` changes to start with an appropriate “empty” segment, and to use a `next` function that updates the current segment `[l,r)` based on the next list element `a` (the one at index `r`). The right index always gets incremented to `r+1`. If the current sum combined with `a` is “negative” (that is, less than `mempty`), we reset the left index to `r+1` as well (making the segment empty), and the running sum to `mempty`. Otherwise, we leave the left index unchanged and add `a` to the running sum. I also added an argument to indicate the index of the first list element, since in this problem the list is supposed to be indexed from 1, but future problems might be indexed from 0, and I would never remember which it is. (I suppose we could also take a list of pairs `(i,a)` where `i` is the index of `a`. That would even work for non-consecutive indices.)

```haskell
bestIntermediate :: Ord s => (s -> a -> s) -> s -> [a] -> s
bestIntermediate update init = maximum . scanl' update init

kadane :: (Monoid a, Ord a) => Int -> [a] -> Segment a
kadane ix = bestIntermediate next (I ix ix mempty)
  where
    next (I l r s) a
      | s <> a < mempty = I (r+1) (r+1) mempty
      | otherwise       = I l     (r+1) (s <> a)
```
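Putting the pieces together, here is a self-contained run of the generalized algorithm on a made-up list, using `Sum Int` as the monoid (the definitions are copied from above so the snippet compiles on its own). With 1-based indexing, the best segment of `[1,-2,3,4,-1]` is `[3,5)` (the values 3 and 4), with sum 7.

```haskell
import Data.List (scanl')
import Data.Semigroup (Sum (..))

data Segment a = I { l :: !Int, r :: !Int, s :: !a } deriving (Eq, Show)

instance Ord a => Ord (Segment a) where
  compare (I l1 r1 s1) (I l2 r2 s2)
    = compare s1 s2 <> compare l2 l1 <> compare r2 r1

bestIntermediate :: Ord s => (s -> a -> s) -> s -> [a] -> s
bestIntermediate update init = maximum . scanl' update init

kadane :: (Monoid a, Ord a) => Int -> [a] -> Segment a
kadane ix = bestIntermediate next (I ix ix mempty)
  where
    next (I l r s) a
      | s <> a < mempty = I (r+1) (r+1) mempty
      | otherwise       = I l     (r+1) (s <> a)

-- The intermediate segments have sums 0,1,0,3,7,6; the maximum is I 3 5 7.
example :: Segment (Sum Int)
example = kadane 1 (map Sum [1, -2, 3, 4, -1])
```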

Finally, we can write the `solve` function itself. We map R’s and B’s in the input to 1 and -1, respectively, to generate a list of 1’s and -1’s called `mountain`. Then we call `kadane` on `mountain` and on `map negate mountain`, and pick whichever gives us a better answer. But wait, not quite! Remember that `kadane` needs a `Monoid` instance for the list elements, and `Int` has none. So we can whip up a quick newtype, `Step`, that has the right instances (note how deriving `Num` also allows us to use the literal values `1` and `-1` as `Step` values). `DerivingVia` is quite handy in situations like this.

```haskell
solve :: ByteString -> Segment Step
solve (C.init -> b) = max (kadane 1 mountain) (kadane 1 (map negate mountain))
  where
    mountain = map (\case { 'R' -> 1; 'B' -> -1 }) (C.unpack b)

newtype Step = Step Int
  deriving (Semigroup, Monoid) via Sum Int
  deriving (Eq, Ord, Num)
```
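For a quick end-to-end check, here is a standalone version of the whole pipeline run on a made-up input of my own (`"RBRRBRRB"`). To keep the snippet extension-free, I have restated `Step` with handwritten `Semigroup`/`Monoid` instances and an explicit `negateStep` in place of the `DerivingVia` and `Num` machinery, and written `solve` without view patterns; the logic is otherwise the same as above.

```haskell
import           Data.ByteString.Char8 (ByteString)
import qualified Data.ByteString.Char8 as C
import           Data.List             (scanl')

data Segment a = I { l :: !Int, r :: !Int, s :: !a } deriving (Eq, Show)

instance Ord a => Ord (Segment a) where
  compare (I l1 r1 s1) (I l2 r2 s2)
    = compare s1 s2 <> compare l2 l1 <> compare r2 r1

bestIntermediate :: Ord s => (s -> a -> s) -> s -> [a] -> s
bestIntermediate update init = maximum . scanl' update init

kadane :: (Monoid a, Ord a) => Int -> [a] -> Segment a
kadane ix = bestIntermediate next (I ix ix mempty)
  where
    next (I l r s) a
      | s <> a < mempty = I (r+1) (r+1) mempty
      | otherwise       = I l     (r+1) (s <> a)

-- Step as in the solution, but with explicit instances instead of DerivingVia.
newtype Step = Step Int deriving (Eq, Ord, Show)
instance Semigroup Step where Step x <> Step y = Step (x + y)
instance Monoid    Step where mempty = Step 0

negateStep :: Step -> Step
negateStep (Step x) = Step (negate x)

solve :: ByteString -> Segment Step
solve b = max (kadane 1 mountain) (kadane 1 (map negateStep mountain))
  where
    mountain = map (\c -> if c == 'R' then Step 1 else Step (-1))
                   (C.unpack (C.init b))

format :: Segment a -> ByteString
format (I l r _) = C.unwords [C.pack (show l), C.pack (show (r - 1))]
```

On `"RBRRBRRB\n"` the encoded list is `[1,-1,1,1,-1,1,1,-1]`; the running sums never go negative, the best red excess is 3 over cells 1 through 7, and that beats the best blue excess of 1, so the output is `1 7`.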

## Next time: Modulo Solitaire and fast BFS

For next time, I invite you to solve Modulo Solitaire. Warning: this is a straightforward BFS problem; the issue is getting a Haskell solution to run fast enough! I struggled with this for quite some time before coming up with something that worked. My ultimate goal in this case is to develop a polished library with an elegant, functional API for doing BFS, but that runs fast under the hood. I’m very curious to see how others might approach the problem. 