Competitive programming in Haskell: introduction to dynamic programming

In my previous post, I challenged you to solve Zapis. In this problem, we are given a sequence of opening and closing brackets (parens, square brackets, and curly braces) in which some characters have been replaced by question marks, and we have to compute the number of different ways the question marks could be replaced by brackets to yield a valid, properly nested bracket sequence.

For example, given (??), the answer is 4: we could replace the question marks with any matched pair (either (), [], or {}), or we could replace them with )(, resulting in ()().

An annoying aside

One very annoying thing to mention about this problem is that it requires us to output the last 5 digits of the answer. At first, I interpreted that to mean “output the answer modulo 10^5”, which would be a standard sort of condition for a combinatorics problem, but that’s not quite the same thing, in a very annoying way: for example, if the answer is 2, we are supposed to output 2; but if the answer is 1000000002, we are supposed to output 00002, not 2! So simply computing the answer modulo 10^5 is not good enough; if we get a final answer of 2, we don’t know whether we are supposed to pad it with zeros. I could imagine keeping track of the result modulo 10^5 along with a Boolean flag telling us whether the number has ever overflowed; we have to pad with zeros iff the flag is set at the end. I’m pretty sure this would work. But for this problem, it turns out that the final answer is at most “only” about 100 digits, so we can just compute the answer exactly as an Integer and then literally show the last 5 digits.
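For what it’s worth, here is a rough sketch of what that flag-tracking idea could look like; it is only an illustration (the type Clipped and the operations clip, addC, and mulC are names I just made up), and the solution below does not use it:

-- A nonnegative value known only modulo 10^5, together with a flag
-- recording whether the true value has ever reached 10^5, i.e. whether
-- the output would need to be padded with zeros.
type Clipped = (Integer, Bool)

clip :: Integer -> Clipped
clip n = (n `mod` 100000, n >= 100000)

addC, mulC :: Clipped -> Clipped -> Clipped
addC (x, fx) (y, fy) =
  ((x + y) `mod` 100000, fx || fy || x + y >= 100000)
mulC (x, fx) (y, fy) =
  ( (x * y) `mod` 100000
  , x * y >= 100000 || (fx && (y /= 0 || fy)) || (fy && (x /= 0 || fx)) )

The fiddly overflow condition in mulC is exactly the sort of thing that makes me glad we can sidestep the whole issue by just using Integer.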

A recurrence

Now, how to compute the answer? For this kind of problem the first step is to come up with a recurrence. Let s[0 \dots n-1] be the given string, and let c(i,j) be the number of ways to turn the substring s[i \dots j-1] into a properly nested sequence of brackets, so ultimately we want to compute the value of c(0,n). (Note we make c(i,j) correspond to the substring which includes i but excludes j, which means, for example, that the length of the substring is j-i.) First, some base cases:

  • c(i,i) = 1 since the empty string always counts as properly nested.
  • c(i,j) = 0 if i and j have different parity, since any properly nested string must have even length.

Otherwise, s[i] had better be an opening bracket of some kind, and we can try matching it with each of s[i+1], s[i+3], s[i+5], …, s[j-1]. In general, matching s[i] with s[k] can be done in either 0, 1, or 3 ways depending on whether they are proper opening and closing brackets and whether any question marks are involved; then we have c(i+1,k) ways to make the substring between s[i] and s[k] properly nested, and c(k+1,j) ways for the rest of the string following s[k]. These are all independent, so we multiply them. Overall, we get this:

c(i,j) = \begin{cases} 1 & i = j \\ 0 & i \not \equiv j \pmod 2 \\ \displaystyle \sum_{k \in [i+1, i+3, \dots, j-1]} m(s[i], s[k]) \cdot c(i+1,k) \cdot c(k+1,j) & \text{otherwise} \end{cases}

where m(x,y) counts the number of ways to make x and y into a matching pair of brackets: it returns 0 if the two characters cannot possibly be a matching open-close pair (either because they do not match or because one of them is the wrong way around); 1 if they match, and at most one of them is a question mark; and 3 if both are question marks.
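As a sanity check, let’s evaluate the recurrence by hand on the example (??) from the introduction, where n = 4. The sum defining c(0,4) has two terms, for k = 1 and k = 3:

c(0,4) = m(s[0],s[1]) \cdot c(1,1) \cdot c(2,4) + m(s[0],s[3]) \cdot c(1,3) \cdot c(4,4)

For k = 1 we have m(s[0],s[1]) = 1 (an opening paren against a question mark), c(1,1) = 1, and c(2,4) = m(s[2],s[3]) \cdot c(3,3) \cdot c(4,4) = 1 (a question mark against a closing paren), so the first term is 1. For k = 3 we have m(s[0],s[3]) = 1 (a matched pair of parens), c(1,3) = m(s[1],s[2]) \cdot c(2,2) \cdot c(3,3) = 3 (two question marks), and c(4,4) = 1, so the second term is 3. Altogether, c(0,4) = 1 + 3 = 4, matching the answer from the introduction.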

How do we come up with such recurrences in the first place? Unfortunately, Haskell doesn’t really make this any easier—it requires some experience and insight. However, what we can say is that Haskell makes it very easy to directly code a recurrence as a recursive function, to play with it and ensure that it gives correct results for small input values.

A naive solution

To that end, if we directly code up our recurrence in Haskell, we get the following naive solution:

import Control.Arrow
import Data.Array

-- Read all input, keep only the last line (the bracket string), solve, and format.
main = interact $ lines >>> last >>> solve >>> format

-- Take the last (at most) five digits of the decimal representation.
format :: Integer -> String
format = show >>> reverse >>> take 5 >>> reverse

solve :: String -> Integer
solve str = c (0,n)
  where
    n = length str
    s = listArray (0,n-1) str   -- store the string in an array for O(1) indexing

    -- c (i,j) counts the ways to complete the substring s[i..j-1]
    -- into a properly nested bracket sequence.
    c :: (Int, Int) -> Integer
    c (i,j)
      | i == j           = 1
      | even i /= even j = 0
      | otherwise        = sum
        [ m (s!i) (s!k) * c (i+1,k) * c (k+1,j)
        | k <- [i+1, i+3 .. j-1]
        ]

-- m x y counts the ways to make x and y into a matching open/close pair.
m :: Char -> Char -> Integer
m '(' ')'                = 1
m '[' ']'                = 1
m '{' '}'                = 1
m '?' '?'                = 3
m b '?' | b `elem` "([{" = 1
m '?' b | b `elem` ")]}" = 1
m _ _                    = 0
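As a quick check, loading this into GHCi and evaluating the example from the introduction gives the expected count (your GHCi prompt may of course look different):

ghci> solve "(??)"
4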

This solution is correct, but much too slow—it passes the first four test cases but then fails with a Time Limit Exceeded error. In fact, it takes exponential time in the length of the input string, because it has a classic case of overlapping subproblems: for example, when computing c(0,8), the subproblem c(4,8) is needed both directly (via the k = 3 term) and again inside the recursive call c(2,8), and this kind of repeated work compounds at every level of the recursion. Our goal is to compute the same function, but in a way that is actually efficient.

Dynamic programming, aka memoizing recurrences

I hate the name “dynamic programming”—it conveys zero information about the thing that it names, and was essentially invented as a marketing gimmick. Dynamic programming is really just memoizing recurrences in order to compute them more efficiently. By memoizing we mean caching some kind of mapping from input to output values, so that we only have to compute a function once for each given input value; on subsequent calls with a repeated input we can just look up the corresponding output. There are many, many variations on the theme, but memoizing recurrences is really the heart of it.
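As a tiny preview of what this can look like in Haskell, here is a minimal sketch (using the Fibonacci numbers as a stand-in for our bracket-counting recurrence) that memoizes a recurrence via a lazy, recursive, immutable array, the technique we will look at properly in part 1 below:

import Data.Array

-- The table is defined in terms of itself; since array elements are lazy,
-- each entry is computed at most once, the first time it is demanded.
fib :: Int -> Integer
fib n = table ! n
  where
    table :: Array Int Integer
    table = listArray (0, n) (map f [0 .. n])

    f 0 = 0
    f 1 = 1
    f i = table ! (i - 1) + table ! (i - 2)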

In imperative languages, dynamic programming is often carried out by filling in tables via nested loops—the fact that there is a recurrence involved is obscured by the implementation. However, in Haskell, our goal will be to write code that is as close as possible to the above naive recursive version, but still actually efficient. Over the next few posts we will discuss several techniques for doing just that.

  • In part 1, we will explore the basic idea of using lazy, recursive, immutable arrays (which we have already seen in a previous post).
  • In part 2, we will use ideas from Conal Elliott’s MemoTrie package (and ultimately from a paper by Ralf Hinze) to clean up the code and make it a lot closer to the naive version.
  • Another post poses a couple of challenge problems that can’t quite be solved using the techniques in the previous posts.
  • At some point perhaps we’ll discuss how to memoize functions with infinite (or just very large) domains.
  • There may very well end up being more parts… we’ll see where it ends up!

Along the way I’ll also drop more links to relevant background. This will ultimately end up as a chapter in the book I’m slowly writing, and I’d like to make it into the definitive reference on dynamic programming in Haskell—so any thoughts, comments, links, etc. are most welcome!


8 Responses to Competitive programming in Haskell: introduction to dynamic programming

    • Brent says:

      Yes, very cool! I will have to think whether I have ever seen a competitive programming problem where this could be applied. If you wanted to extract the core functionality of rec-def into a single module, appropriate for copy-pasting into a solution submitted to a site like Open Kattis, how much code would there be?

      • nomeata says:

        Hmm, I’m unsure. Depends a bit on whether you need just the set function or also bools. For those examples where you need sets (which are probably the typical reachability ones) one could probably have a mini-rec-def snippet. Also, likely thread safety isn’t that much of a concern, which simplifies the problem.

        Maybe if you come across something that smells like this, we can give it a shot.

  1. Pingback: Dynamic programming in Haskell: lazy immutable arrays | blog :: Brent -> [String]

  2. Globules says:

    A random memory: The book “Algorithms: A Functional Programming Approach” [1] by Fethi Rabhi and Guy Lapalme uses Haskell as its implementation language, and has a chapter on dynamic programming. It’s from 1999 and it uses the table method.

    [1] https://www.iro.umontreal.ca/~lapalme/Algorithms-functional.html

  3. Pingback: Dynamic programming in Haskell: automatic memoization | blog :: Brent -> [String]

  4. Pingback: Competitive Programming in Haskell: two more DP challenges | blog :: Brent -> [String]
