Lightweight invertible enumerations in Haskell

In a previous post I introduced a new Haskell library for enumerations (now on Hackage as simple-enumeration). The Data.Enumeration module defines a type Enumeration a, represented simply by a function Integer -> a which picks out the value of type a at a given index. This representation has a number of advantages, including the ability to quickly index into very large enumerations, and the convenience that comes from having Functor, Applicative, and Alternative instances for Enumeration.

I’ve just uploaded version 0.2 of the package, which adds a new Data.Enumeration.Invertible module with a new type, IEnumeration a, representing invertible enumerations. Whereas a normal enumeration is just a function from index to value, an invertible enumeration is a bijection between indices and values. In particular, alongside the Integer -> a function for picking out the value at an index, an invertible enumeration also stores an inverse function a -> Integer (called locate) for finding the index of a given value.
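
Conceptually, you can picture an invertible enumeration as a record bundling the cardinality, the indexing function, and its inverse. (This is just an expository sketch of mine, not necessarily the library’s actual internal representation; Cardinality is the library’s type of finite-or-infinite sizes, whose Finite constructor shows up in the examples below.)

data IEnumeration a = IEnumeration
  { card   :: Cardinality   -- Finite n, or Infinite
  , select :: Integer -> a  -- the value at a given index
  , locate :: a -> Integer  -- the index of a given value
  }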

On the one hand, this comes at a cost: because the type parameter a now occurs both co- and contravariantly, IEnumeration is no longer an instance of Functor, Applicative, or Alternative. There is a mapE combinator provided for mapping IEnumeration a to IEnumeration b, but to work it needs both a function a -> b and an inverse b -> a.
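
In terms of the sketch above, mapE might be implemented something like this (again, a sketch rather than the library’s actual code):

-- Selection post-composes the forward function; location must first
-- translate a b back into an a before looking up its index.
mapE :: (a -> b) -> (b -> a) -> IEnumeration a -> IEnumeration b
mapE f g (IEnumeration c sel loc) = IEnumeration c (f . sel) (loc . g)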

On the other hand, we also gain something: the ability to look up the index of a value is nifty in its own right, and beyond that we get a combinator

functionOf :: IEnumeration a -> IEnumeration b -> IEnumeration (a -> b)

which works as long as the IEnumeration a is finite. This is not possible to implement with normal, non-invertible enumerations: we have to take an index and turn it into a function a -> b, but that function has to take an a as input and decide what to do with it. There’s nothing we can possibly do with a value of type a unless we have a way to connect it back to the IEnumeration a it came from.
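
To make this concrete, here is a sketch of one way such a combinator can work, phrased in terms of the record sketch above. (This is my reconstruction, not necessarily the library’s code; for simplicity it also assumes the codomain is finite.) A function out of a finite domain of size k is determined by the k-tuple of its outputs, which we can encode as a k-digit number in base m, where m is the size of the codomain; here earlier domain values get more significant digits, which agrees with the Bool example below.

functionOf :: IEnumeration a -> IEnumeration b -> IEnumeration (a -> b)
functionOf as bs = IEnumeration (Finite (m ^ k)) sel loc
  where
    Finite k = card as   -- the domain must be finite
    Finite m = card bs   -- assumed finite here, to keep the sketch simple
    -- To apply the function at index j to a value a, extract the digit of j
    -- corresponding to a.  Note the call to (locate as): this is precisely
    -- the step that is impossible without an invertible domain enumeration.
    sel j a = select bs ((j `div` m ^ (k - 1 - locate as a)) `mod` m)
    -- To find a function's index, locate each of its outputs and assemble
    -- the digits.
    loc f = sum [ locate bs (f (select as i)) * m ^ (k - 1 - i) | i <- [0 .. k-1] ]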

Here’s a simple example of using the functionOf combinator to enumerate all Bool -> Bool functions, and then locating the index of not:

>>> bbs = functionOf (boundedEnum @Bool) (boundedEnum @Bool)
>>> card bbs
Finite 4
>>> locate bbs not
2
>>> map (select bbs 2) [False, True]
[True,False]

And here’s an example of enumerating recursive trees, which is parallel to an example given in my previous post. Note, however, how we can no longer use combinators like <$>, <*>, and <|>, but must explicitly use <+> (disjoint sum of enumerations) and >< (enumeration product) in combination with mapE. In return, though, we can find the index of any given tree in addition to selecting trees by index.

data Tree = L | B Tree Tree
  deriving Show

toTree :: Either () (Tree, Tree) -> Tree
toTree = either (const L) (uncurry B)

fromTree :: Tree -> Either () (Tree, Tree)
fromTree L       = Left ()
fromTree (B l r) = Right (l,r)

trees :: IEnumeration Tree
trees = infinite $ mapE toTree fromTree (unit <+> (trees >< trees))

>>> locate trees (B (B L (B L L)) (B (B L (B L L)) (B L (B L L))))
123
>>> select trees 123
B (B L (B L L)) (B (B L (B L L)) (B L (B L L)))

Of course, the original Data.Enumeration module remains available; there is clearly an inherent tradeoff to invertibility, and you are free to choose either style depending on your needs. Other than the tradeoffs outlined above and a couple other minor exceptions, the two modules export largely identical APIs.


Competitive Programming in Haskell: Scanner

In my previous post I explored solving a simple competitive programming problem in Haskell. The input of the problem just consisted of a bunch of lines containing specific data, so that we could parse it using lines and words. There is another common class of problems, however, which follow this pattern:

The first line of the input consists of an integer T. Each of the next T lines consists of…

That is, the input contains integers which are not input data per se but just tell you how many things are to follow. This is really easy to process in an imperative language like Java or C++. For example, in Java we might write code like this:

Scanner in = new Scanner(System.in);
int T = in.nextInt();
for (int i = 0; i < T; i++) {
   // process each line
}

Occasionally, we can get away with completely ignoring the extra information in Haskell. For example, if the input consists of a number T followed by T lines, each of which contains a number n followed by a list of n numbers, we can just write

main = interact $
  lines >>> drop 1 >>> map (words >>> drop 1 >>> map read) >>> ...

That is, we can ignore the first line containing T since the end-of-file will tell us how many lines there are; and we can ignore the n at the beginning of each line, since the newline character tells us when the list on that line is done.

Sometimes, however, this isn’t possible, especially when there are multiple test cases, or when a single test case has multiple parts, each of which can have a variable length. For example, consider Popular Vote, which describes its input as follows:

The first line of input contains a single positive integer T \leq 500 indicating the number of test cases. The first line of each test case also contains a single positive integer n indicating the number of candidates in the election. This is followed by n lines, with the ith line containing a single nonnegative integer indicating the number of votes candidate i received.

How would we parse this? We could still ignore T—just keep reading until the end of the file—but there’s no way we can ignore the n values. Since the values for each test case are all on separate lines instead of on one line, there’s otherwise no way to know when one test case ends and the next begins.

Once upon a time, I would have done this using splitAt and explicit recursion, like so:

type Election = [Int]

readInput :: String -> [Election]
readInput = lines >>> drop 1 {- ignore T -} >>> map read >>> go
  where
    go :: [Int] -> [Election]
    go []     = []
    go (n:xs) = votes : go rest
      where (votes,rest) = splitAt n xs

However, this is really annoying to write and easy to get wrong. There are way too many variable names to keep track of (n, xs, votes, rest, go) and for more complex inputs it becomes simply unmanageable. You might think we should switch to using a real parser combinator library—parsec is indeed installed in the environment Kattis uses to run Haskell solutions—and although sometimes a full-blown parser combinator library is needed, in this case it’s quite a bit more heavyweight than we would like. I can never remember which modules I have to import to get parsec set up; there’s a bunch of boilerplate needed to set up a lexer; and so on. Using parsec is only worth it if we’re parsing something really complex.

Scanner

The heart of the issue is that we want to be able to specify a high-level description of the sequence of things we expect to see in the input, without worrying about managing the stream of tokens explicitly. Another key insight is that 99% of the time, we don’t need the ability to deal with parse failure or to parse multiple alternatives. With these insights in mind, we can create a very simple Scanner abstraction, which is just a stateful computation over a list of tokens:

import Control.Monad.State  -- from the mtl package

type Scanner = State [String]

runScanner :: Scanner a -> String -> a
runScanner s = evalState s . words

To run a scanner, we just feed it the entire input as a String, which gets chopped into tokens using words. (Of course in some scenarios we might want to use lines instead of words, or even do more complex tokenization.)
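
For example, if tokens should be whole lines instead of whitespace-separated words, a hypothetical variant might look like this:

-- A lines-based runner (my own suggestion, not part of the code below).
runLineScanner :: Scanner a -> String -> a
runLineScanner s = evalState s . lines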

Note that since Scanner is just a type synonym for State [String], it is automatically an instance of Functor, Applicative, and Monad (but not Alternative).

So let’s develop a little Scanner DSL. The most fundamental thing we can do is read the next token.

{-# LANGUAGE LambdaCase #-}

str :: Scanner String
str = get >>= \case { s:ss -> put ss >> return s }

(This uses the LambdaCase extension, though we could easily rewrite it without.) str gets the current list of tokens, puts it back without the first token, and returns the first token. Note that I purposely didn’t include a case for the empty list. You might think we want to include a case for the empty token list and have it return the empty string or something like that. But since the input will always be properly formatted, if this scenario ever happens it means my program has a bug—e.g. perhaps I misunderstood the description of the input format. In this scenario I want it to crash loudly, as soon as possible, rather than continuing on with some bogus data.

We can now add some scanners for reading specific token types other than String, simply by mapping the read function over the output of str:

int :: Scanner Int
int = read <$> str

integer :: Scanner Integer
integer = read <$> str

double :: Scanner Double
double = read <$> str

Again, these will crash if they see a token in an unexpected format, and that is a very deliberate choice.

Now, as I explained earlier, a very common pattern is to have an integer n followed by n copies of something. So let’s make a combinator to encapsulate that pattern:

numberOf :: Scanner a -> Scanner [a]
numberOf s = int >>= flip replicateM s

numberOf s expects to first see an Int value n, and then it runs the provided scanner n times, returning a list of the results.
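
For example, on a made-up input:

>>> runScanner (numberOf int) "3 10 20 30"
[10,20,30]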

It’s also sometimes useful to have a way to repeat a Scanner some unknown number of times until encountering EOF (for example, the input for some problems doesn’t specify the number of test cases up front the way that Popular Vote does). This is similar to the many combinator from Alternative.

many :: Scanner a -> Scanner [a]
many s = get >>= \case { [] -> return []; _ -> (:) <$> s <*> many s }

many s repeats the scanner s as many times as it can, returning a list of the results. In particular it first peeks at the current token list to see if it is empty. If so, it returns the empty list of results; if there are more tokens, it runs s once and then recursively calls many s, consing the results together.
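
Again on a made-up input:

>>> runScanner (many int) "3 1 4 1 5"
[3,1,4,1,5]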

Finally, it’s quite common to want to parse a specific small number of something, e.g. two double values representing a 2D coordinate pair. We could just write replicateM 2 double, but this is common enough that I find it helpful to define dedicated combinators with short names:

two, three, four :: Scanner a -> Scanner [a]
[two, three, four] = map replicateM [2..4]

The complete file can be found on GitHub. As I continue this series I’ll be putting more code into that repository. Note I do not intend to make this into a Hackage package, since that wouldn’t be useful: you can’t tell Kattis to go download a package from Hackage before running your submission. However, it is possible to submit multiple files at once, so you can include Scanner.hs in your submission and just import Scanner at the top of your main module.

Examples

So what have we gained? Writing the parser for Popular Vote is now almost trivial:

type Election = [Int]

main = interact $ runScanner elections >>> ...

elections :: Scanner [Election]
elections = numberOf (numberOf int)

In practice I would probably just inline the definition of elections directly: interact $ runScanner (numberOf (numberOf int)) >>> ...

As a slightly more involved example, chosen almost at random, consider Board Wrapping:

On the first line of input there is one integer, N \leq 50, giving the number of test cases (moulds) in the input. After this line, N test cases follow. Each test case starts with a line containing one integer n, 1 \leq n \leq 600, which is the number of boards in the mould. Then n lines follow, each with five floating point numbers x,y,w,h,v where 0 \leq x,y,w,h \leq 10000 and -90^{\circ} < v \leq 90^{\circ}. The x and y are the coordinates of the center of the board and w and h are the width and height of the board, respectively. v is the angle between the height axis of the board to the y-axis in degrees, positive clockwise.

Here’s how I would set up the input, using Scanner and a custom data type to represent boards.

import Scanner

type V = [Double]     -- 2D vectors/points
newtype A = A Double  -- angle (radians)
                      -- newtype helps avoid conversion errors

fromDeg :: Double -> A
fromDeg d = A (d * pi / 180)

data Board = Board { boardLoc :: V, boardDims :: V, boardAngle :: A }

board :: Scanner Board
board = Board
  <$> two double
  <*> two double
  <*> ((fromDeg . negate) <$> double)

main = interact $
  runScanner (numberOf (numberOf board)) >>> ...

Lightweight, efficiently sampleable enumerations in Haskell

For another project I’m working on, I needed a way to enumerate and randomly sample values from various potentially infinite collections. There are plenty of packages in this space, but none of them quite fit my needs:

  • universe (and related packages) is very nice, but it’s focused on enumerating values of Haskell data types, not arbitrary sets: since it uses type classes, you have to make a new Haskell type for each thing you want to enumerate. It also uses actual Haskell lists of values, which doesn’t play nicely with sampling.
  • enumerable has not been updated in a long time and seems to be superseded by universe.
  • enumerate is likewise focused on generating values of Haskell data types, with accompanying generic deriving machinery.
  • size-based is used as the basis for the venerable testing-feat library, but these are again focused on generating values of Haskell data types. I’m also not sure I need the added complexity of size-indexed enumerations.
  • enumeration looks super interesting, and I might be able to use it for what I want, but (a) I’m not sure whether it’s maintained anymore, and (b) it seems rather more complex than I need.

I really want something like Racket’s nice data/enumerate package, but nothing like that seems to exist in Haskell. So, of course, I made my own! For now you can find it on GitHub.1 Here’s the package in a nutshell:

  • Enumerations are represented by the parameterized type Enumeration, which is an instance of Functor, Applicative, and Alternative (but not Monad).
  • Enumerations keep track of their cardinality, which could be either countably infinite or a specific natural number.
  • Enumerations are represented as functions from index to value, so they can be efficiently indexed (which also enables efficient random sampling; see the sketch after this list).
  • The provided combinators will always do something sensible so that every value in the resulting enumeration occurs at a finite index. For example, if you take the disjoint union of two infinite enumerations, the resulting enumeration will alternate between values from the two inputs.
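
For instance, here is a minimal sketch of how efficient indexing yields uniform random sampling. The sample helper and the use of uniformR (from random >= 1.2) are my own, and I’m assuming the library’s Cardinality type has Finite and Infinite constructors (Finite appears in the examples below); only card and select come straight from the library.

import System.Random (getStdRandom, uniformR)

-- Sample uniformly from a finite enumeration: choose a uniformly random
-- index and select the value there.
sample :: Enumeration a -> IO a
sample e = case card e of
  Finite n -> select e <$> getStdRandom (uniformR (0, n - 1))
  Infinite -> error "sample: infinite enumeration (sample from a finite piece instead)"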

I wrote about something similar a few years ago. The main difference is that in that post I limited myself to only finite enumerations. There’s a lot more I could say but for now I think I will just show some examples:

>>> enumerate empty
[]
>>> enumerate unit
[()]
>>> enumerate $ empty <|> unit <|> unit
[(),()]

>>> enumerate $ finite 4 >< finiteList [27,84,17]
[(0,27),(0,84),(0,17),(1,27),(1,84),(1,17),(2,27),(2,84),(2,17),(3,27),(3,84),(3,17)]

>>> select (finite 4000000000000 >< finite 123456789) 0
(0,0)
>>> select (finite 4000000000000 >< finite 123456789) 196598723084073
(1592449,82897812)
>>> card (finite 4000000000000 >< finite 123456789)
Finite 493827156000000000000

>>> :set -XTypeApplications
>>> enumerate $ takeE 26 . dropE 65 $ boundedEnum @Char
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"

>>> take 10 . enumerate $ nat >< nat
[(0,0),(0,1),(1,0),(0,2),(1,1),(2,0),(0,3),(1,2),(2,1),(3,0)]
>>> take 10 . enumerate $ cw
[1 % 1,1 % 2,2 % 1,1 % 3,3 % 2,2 % 3,3 % 1,1 % 4,4 % 3,3 % 5]

>>> take 15 . enumerate $ listOf nat
[[],[0],[0,0],[1],[0,0,0],[1,0],[2],[0,1],[1,0,0],[2,0],[3],[0,0,0,0],[1,1],[2,0,0],[3,0]]

data Tree = L | B Tree Tree
  deriving (Eq, Show)

trees :: Enumeration Tree
trees = infinite $ singleton L <|> B <$> trees <*> trees

>>> take 3 . enumerate $ trees
[L,B L L,B L (B L L)]
>>> select trees 87239862967296
B (B (B (B (B L L) (B (B (B L L) L) L)) (B L (B L (B L L)))) (B (B (B L (B L (B L L))) (B (B L L) (B L L))) (B (B L (B L (B L L))) L))) (B (B L (B (B (B L (B L L)) (B L L)) L)) (B (B (B L (B L L)) L) L))

treesOfDepthUpTo :: Int -> Enumeration Tree
treesOfDepthUpTo 0 = singleton L
treesOfDepthUpTo n = singleton L <|> B <$> t' <*> t'
  where t' = treesOfDepthUpTo (n-1)

>>> card (treesOfDepthUpTo 0)
Finite 1
>>> card (treesOfDepthUpTo 1)
Finite 2
>>> card (treesOfDepthUpTo 3)
Finite 26
>>> card (treesOfDepthUpTo 10)
Finite 14378219780015246281818710879551167697596193767663736497089725524386087657390556152293078723153293423353330879856663164406809615688082297859526620035327291442156498380795040822304677
>>> select (treesOfDepthUpTo 10) (2^50)
B L (B L (B L (B (B L (B (B L (B (B L L) L)) (B (B (B (B L L) (B L L)) (B L (B L L))) (B (B (B L L) L) (B (B L L) L))))) (B (B (B (B (B (B L L) L) (B (B L L) L)) (B L L)) (B (B (B (B L L) L) (B L (B L L))) (B (B (B L L) (B L L)) L))) (B (B (B (B L L) (B L L)) (B (B (B L L) L) L)) (B (B L L) (B (B (B L L) L) (B (B L L) L))))))))

Comments, questions, suggestions for additional features, etc. are all very welcome!


  1. I chose the name enumeration before I realized there was already a package of that name on Hackage! So now I have to come up with another name that’s not already taken. Suggestions welcome…


Code style and moral absolutes

In my previous post about my basic setup for solving competitive programming problems with Haskell, I (somewhat provocatively) used lists to represent pairs, and wrote a partial function to process them. Commenter Yom responded with a proposed alternative that was (less) partial. I was glad for the comment, because it gave me a good opportunity to think more about why I wrote the code in the way I did, and how it fits into larger issues of good coding practices and the reasons behind them.

Good code style as moral behavior

What is good code style? You probably have some opinions about this. In fact, I’m willing to bet you might even have some very strong opinions about this; I know I do. Whether consciously or not, we tend to frame good coding practices as a moral issue. Following good coding practices makes us feel virtuous; ignoring them makes us feel guilty. I can guess that this is why Yom said “I don’t think I could bring myself to be satisfied with partial functions” [emphasis added]. And this is why we say “good code style”, not “optimal” or “rational” or “best practice” code style.

Why is this? Partly, it is just human: we like to have right and wrong ways to do everything (load the dishwasher, enforce grammar “rules”, use a text editor, etc.), and we naturally create and enforce community standards via subtle and not-so-subtle social cues. In the case of coding practices, I think we also sometimes do it consciously and explicitly, because the benefits can be unintuitive or only manifest in the long term. So the only way to get our students—or ourselves—to follow practices that are in our rational self-interest is by framing them in moral terms; rational arguments do not work in and of themselves. For example, I cannot get my students to write good comments by explaining to them how it will be beneficial to them in the future. It seems obvious to them that they will remember perfectly how their code works in the future, so any argument claiming the opposite falls on deaf ears. The only way to get them to write comments is to make it a moral issue: they should feel bad (i.e. lose points, lose respect, feel like they are “taking shortcuts”) if they don’t. Of course I do this “for their own good”: I trust that in the future they will come to appreciate this ingrained behavior on its own merits.

The problem is that things framed in moral terms become absolutes, and it is then difficult for us to assess them rationally. My students will never be able to write a [function without comments, partial function, goto statement, …] without feeling bad about it, and they probably won’t stop to think about why.

Good code style as rational behavior

I ask again: what is good code style—and why? I have identified a few reasons for various “good” coding practices. Ultimately, we want our code to have properties such as:

  • Robustness: it should handle unexpected or invalid inputs gracefully.
  • Readability: it should be easy for others (or us in the future) to read and understand the program.
  • Maintainability: it should be easy to modify the program as requirements change.
  • Efficiency: in general, programs should not do anything obviously redundant, or use data structures with a lot of overhead when faster ones are available (e.g. String vs Text or ByteString).

Even in scenarios where one might initially think these properties are not needed (e.g. writing a one-off script for some sysadmin or data processing task), they often end up being important anyway (e.g. that one-off script gets copied and mutated until it becomes a key piece of some production system). And this is exactly one of the reasons for framing good coding style in moral terms! I won’t write comments or use good function decomposition in my one-off script just because I know, rationally, that it might end up in a production system someday. (I “know” that this particular script really is just a one-off script!) But I just might follow good coding practices anyway if I feel bad about not doing it (e.g. I would feel ashamed if other people saw it).

It seems to me that most things we would typically think of as good code style are geared towards producing code with some or all of the above properties (and perhaps some other properties as well), and most scenarios in which code is being written really do benefit from these properties.

“Good” code style is context-dependent

But what if there was a scenario where these properties are actually, concretely of no benefit? As you can probably guess, I would argue that competitive programming is one such scenario:

  • Robustness: we do not care what our program does when given unexpected or invalid inputs, since we are absolutely, 100% guaranteed that our program will only ever be run on inputs that exactly follow the given specification.
  • Maintainability: the requirements for our program will never change.
  • Efficiency: if you haven’t done much competitive programming you might be surprised to learn that we often don’t care about efficiency either. That is, although we certainly do care about asymptotic efficiency, i.e. choosing a good algorithm, problem time limits are typically set in such a way that constant factors don’t matter very much. A program that runs within 5x-10x of the optimal speed will often fit comfortably within the time limit.

So what do we care about?

  • Readability: the one thing from my previous list that we do care about is readability. Debugging becomes quite difficult if you can’t read and understand the code you wrote (this becomes even more important if you’re working on a team). And insofar as a solution represents particular insights or techniques, you may want to be able to read it much later in order to remember or share what you learned.
  • Programmer time: programmer time is always valuable, of course, but with competitive programming this is taken to an extreme: it is almost always done under time pressure, so the whole point is to write a program to solve a given problem as fast as possible.

The combination of optimizing for speed and not caring about things like robustness, maintainability, and efficiency leads to a number of “best practices” for competitive programming that fly in the face of typical standards. For example:

  • Adding code to deal gracefully with inputs that don’t follow the specification would just be a waste of time (a cardinal sin in this context!). My Haskell solutions are full of calls to partial functions like read, head, tail, fromJust, and so on, even though I would almost never use these functions in other contexts. This is also why I used a partial function that was only defined on lists of length two in my previous post (though as I argue in a comment, perhaps it’s not so much that the function is partial as that its type is too big).
  • I often just use String for text processing, even though something like Text or ByteString (depending on the scenario) would be faster or more robust. (The exception is problems with a large amount of I/O, when the overhead of String really does become a problem; more on this in a future post.)
  • Other than the simplest uses of foldr, foldl', and scanl, I don’t bother with generic recursion schemes; I tend to just write lots of explicit recursion, which I find quicker to write and easier to debug.

There are similar things I do in Java as well. It has taken me quite a while to become comfortable with these things and stop feeling bad about them, and I think I finally understand why.

I’m not sure I really have a main point, other than to encourage you to consider your coding practices, and why you consider certain practices to be good or bad (and whether it depends on the context!).

Next time, back to your regularly scheduled competitive programming tips!


Competitive Programming in Haskell: Basic Setup

I am the coach of my school’s competitive programming team and enjoy solving problems on Open Kattis. Since Kattis accepts submissions in a wide variety of languages (including Haskell, OCaml, Rust, Common Lisp, and even Prolog), I often enjoy submitting solutions in Haskell. Of the 946 problems I have solved on Kattis1, I used Haskell for 607 of them (I used Java for the rest, except for one in C++).

After solving so many problems in Haskell, by now I’ve figured out some patterns that work well, identified some common pitfalls, developed some nice little libraries, and so forth. I thought it would be fun to write a series of blog posts sharing my experience for the benefit of others—and because I expect I will also learn things from the ensuing discussion!

The Basics: I/O

As a basic running example I’ll use the same example problem that Kattis uses in its help section, namely, A Different Problem. In this problem, we are told that the input will consist of a number of pairs of integers between 0 and 10^{15}, one pair per line, and we should output the absolute value of the difference between each pair. The given example is that if the input looks like this:

10 12
71293781758123 72784
1 12345677654321

then our program should produce output that looks like this:

2
71293781685339
12345677654320

Kattis problems are always set up this way, with input of a specified format provided on standard input, and output to be written to standard output. To do this in Haskell, one might think we will need to use things like getLine and putStrLn to read and write the input. But wait! There is a much better way. Haskell’s standard Prelude has a function

interact :: (String -> String) -> IO ()

It takes a pure String -> String function, and creates an IO action which reads from standard input, feeds the input to the function, and then writes the function’s output to standard output. It uses lazy IO, so the reading and writing can be interleaved with computation of the function—a bit controversial and dangerous in general, but absolutely perfect for our use case! Every single Kattis problem I have ever solved begins with

main = interact $ ...

(or the equivalent for ByteString, more on that in a future post) and that is the only bit of IO in the entire program. Yay!

From Input to Output

So now we need to write a pure function which transforms the input into the output. Of course, in true Haskell fashion, we will do this by constructing a chained pipeline of functions to do the job incrementally. The general plan of attack (for any Kattis problem) is as follows:

  1. First, parse the input, that is, transform the raw String input into some more semantically meaningful representation—typically using a combination of functions like lines, words, read, map, and so on (or more sophisticated tools—see a later post).
  2. Next, solve the problem, turning a semantically meaningful representation of the input into a semantically meaningful representation of the output.
  3. Finally, format the output using things like show, unwords, unlines, and so on.

Idiomatic Haskell uses the composition operator (.) to combine functions. However, when solving competitive programming problems, I much prefer to use the reverse composition operator, (>>>) from Control.Arrow (that is, (>>>) = flip (.)). The reason is that since I often end up constructing long function pipelines, I want to be able to think about the process of transforming input to output and type from left to right at the same time; having to add functions from right to left would be tedious.
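
As a tiny illustration with a made-up pipeline, these two definitions compute the same thing; the first reads in the same order the data flows:

import Control.Arrow ((>>>))

-- Left to right: split into lines, reverse each one, reassemble.
process :: String -> String
process = lines >>> map reverse >>> unlines

-- The same pipeline right to left, with ordinary composition.
process' :: String -> String
process' = unlines . map reverse . lines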

A Full Solution

So here’s my solution to A Different Problem:

main = interact $
  lines >>> map (words >>> map read >>> solve >>> show) >>> unlines

solve :: [Integer] -> Integer
solve [a,b] = abs (a - b)

A few notes:

  • Since each line is to be processed independently, notice how I put the processing of each line inside a single call to map.
  • We could probably inline the solve function in this case, but I prefer to split it out explicitly in order to specify its type, which both prevents problems with read/show ambiguity and also serves as a sanity check on the parsing and formatting code.
  • The machines on which our solution will run definitely have 64-bit architectures, so we could technically get away with using Int instead of Integer (maxBound :: Int64 is a bit more than 9 \times 10^{18}, plenty big enough for inputs up to 10^{15}), but there would be no benefit to doing so. If we use Integer we don’t even have to consider potential problems with overflow.

And one last thing: I said we were going to parse the input into a “semantically meaningful representation”, but I lied a teensy bit: the problem says we are going to get a pair of integers but I wrote my solve function as though it takes a list of integers. And even worse, my solve function is partial! Why did I do that?

The fact is that I almost never use actual Haskell tuples in my solutions, because they are too awkward and inconvenient. Representing homogeneous tuples as Haskell lists of a certain known length allows us to read and process “tuples” using standard functions like words and map, to combine them using zipWith, and so on. And since we get to assume that the input always precisely follows the specification—which will never change—this is one of the few situations where, in my opinion, we are fully justified in writing partial functions like this if it makes the code easier to write. So I always represent homogeneous tuples as lists and just pattern match on lists of the appropriate (known) length. (If I need heterogeneous tuples, on the other hand, I create an appropriate data type.)
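
As a quick illustration on made-up input, the list representation lets standard functions do all the work:

>>> zipWith (+) (map read (words "10 12")) (map read (words "3 4")) :: [Integer]
[13,16]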

Of course I’ve only scratched the surface here—I’ll have a lot more to say in future posts—but this should be enough to get you started! I’ll leave you with a few very easy problems, which can each be done with just a few lines of Haskell:

Of course you can also try solving any of the other problems (as of this writing, over 2400 of them!) on Kattis as well.


  1. Did I mention that I really enjoy solving competitive programming problems?


Idea for a physics-based rolling ball puzzle game

For quite a while I’ve had this idea for a cool game, and had vague intentions to learn some game/physics framework well enough to make it, but I’ve finally admitted that this is never going to happen. Instead I’ll just describe my idea here in the hope that someone else will either make it, or tell me that it already exists!

This is a 2D physics-based puzzle/obstacle game where you control a ball (aka circle). The twist that distinguishes it from similar games I’ve seen is that you have only two ways to control the ball:

  • Pushing the left or right arrow keys changes the ball’s angular velocity, that is, its rate of spin. If the ball is sitting on a surface, this will cause it to roll due to friction, but if the ball is in the air, it will just change its spin rate without changing its trajectory at all.

  • The down arrow key increases the ball’s velocity in the downwards direction. If the ball is sitting on a surface this will cause it to bounce upwards a bit. If the ball is in the air you can cause it to either bounce higher, by adding to its downward velocity while it is already falling, or you can dampen a bounce by pushing the down arrow while the ball is travelling upwards.

Those are the key mechanics. My intuition is that controlling the ball would be challenging but doable, and there would be lots of opportunities for interesting obstacles to navigate. For example, to get out of a deep pit you have to keep bouncing higher and then once you’re high enough, you impart a bit of spin so the next time you bounce you travel sideways over the lip of the pit. Or there could be a ledge so that you have to bounce once or twice while travelling towards it to get high enough to clear it, but then immediately control your subsequent bounce so you don’t bounce too high and hit some sort of hazard on the ceiling. And so on.

Finally, of course there could be various power-ups (e.g. to make the ball faster, or sticky, or to alter gravity in various ways). Various puzzles might be based on figuring out which power-ups to use or how to use them to overcome various obstacles.

So, does this game already exist? Or does someone want to make it? (Preferably in Haskell? =)


What’s the right way to QuickCheck floating-point routines?

I got a lot of great comments on my previous post about finding roots of polynomials in Haskell. One particularly promising idea I got from commenter Jake was to give up on the idea of having no external dependencies (which, to be fair, in these days of stack and nix and cabal-v2, seems like much less of a big deal than it used to be), and use the hmatrix package to find the eigenvalues of the companion matrix, which are exactly the roots.

So I tried that, and it seems to work great! The only problem is that I still don’t know how to write a reasonable test suite. I started by making a QuickCheck property expressing the fact that if we evaluate a polynomial at the returned roots, we should get something close to zero. I evaluate the polynomial using Horner’s method, which as far as I understand has good numerical stability in addition to being efficient.

import Data.List       (foldl')
import Test.QuickCheck

polyRoots :: [Double] -> [Double]
polyRoots = ... stuff using hmatrix, see code at end of post ...

horner :: [Double] -> Double -> Double
horner as x = foldl' (\r a -> r*x + a) 0 as

_polyRoots_prop :: [Double] -> Property
_polyRoots_prop as = (length as > 1) ==>
  all ((< 1e-10) . abs . horner as) (polyRoots as)

This property passes 100 tests for quadratic polynomials, but for cubic I get failures; here’s an example. Consider the polynomial

0.1 x^3 - 15.005674483568866 x^2 - 8.597718287916894 x + 8.29

Finding its roots via hmatrix yields three:

[-1.077801388041068, 0.5106483227001805, 150.6238979010295]

Notice that the third root is much bigger in magnitude than the other two, and indeed, that third root is the problematic one. Evaluating the polynomial at these roots via Horner’s method yields

[1.2434497875801753e-14, 1.7763568394002505e-15, -1.1008971512183052e-10]

the third of which is bigger than 1e-10 which I had (very arbitrarily!) chosen as the cutoff for “close enough to zero”. But here’s the thing: after playing around with it a bit, it seems like this is the most accurate possible value for the root that can be represented using Double. That is, if I evaluate the polynomial at any value other than the root that was returned—even if I just change the very last digit by 1 in either direction—I get a result which is farther from zero.

If I make the magic cutoff value bigger—say, 1e-8 instead of 1e-10—then I still get similar counterexamples, but for larger-degree polynomials. I never liked the arbitrary choice of a tolerance anyway, and now it seems to me that saying “evaluating the polynomial at the computed roots should be within this absolute distance from zero” is fundamentally the wrong thing to say; depending on the polynomial, we might have to take what we can get. Some other things I could imagine saying instead include:

  • Evaluating the polynomial at the computed roots should be within some relative epsilon of zero, depending on the size/relative size of the coefficients (see my rough attempt at a sketch below)
  • The computed roots are as accurate as possible (or close to it) in the sense that evaluating the polynomial at other numbers near the computed roots yields values which are farther from zero

…but, first of all, I don’t know if these are reasonable properties to expect; and even if they were, I’m not sure I know how to express them in Haskell! Any advice is most welcome. Are there any best practices for expressing desirable test properties for floating-point computations?
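
For what it’s worth, here is a rough sketch of how the first idea might be expressed as a property. The scaling factor sums the magnitudes |a_i| |r|^i of the terms added up during evaluation, so the property demands a small relative (rather than absolute) residual; but the choice of eps is still arbitrary, so treat this as a starting point, not a vetted criterion.

_polyRoots_relProp :: [Double] -> Property
_polyRoots_relProp as = (length as > 1) ==>
    all closeEnough (polyRoots as)
  where
    closeEnough r = abs (horner as r) <= eps * termSum r
    -- as lists coefficients from highest degree down, so reverse it to
    -- pair each coefficient with its power of r
    termSum r = sum [ abs a * abs r ^ i | (i, a) <- zip [0 :: Int ..] (reverse as) ]
    eps = 1e-12   -- still an arbitrary constant, alas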

For completeness, here is the actual code I came up with for finding roots via hmatrix. Notice there is another annoying arbitrary value in there, for deciding when a complex root is close enough to being real that we call it a real root. I’m not really sure what to do about this either.

import Data.Complex          (Complex (..))
import Data.Maybe            (mapMaybe)
import Numeric.LinearAlgebra -- from the hmatrix package

-- Compute the roots of a polynomial as the eigenvalues of its companion matrix.
polyRoots :: [Double] -> [Double]
polyRoots []     = []
polyRoots (0:as) = polyRoots as
polyRoots (a:as) = mapMaybe toReal eigVals
  where
    n   = length as'
    as' = map (/a) as
    companion = (konst 0 (1,n-1) === ident (n-1)) ||| col (map negate . reverse $ as')

    eigVals = toList . fst . eig $ companion
    toReal (a :+ b)
       | abs b < 1e-10 = Just a   -- This arbitrary value is annoying too!
       | otherwise     = Nothing