Idea for a physics-based rolling ball puzzle game

For quite a while I’ve had this idea for a cool game, and had vague intentions to learn some game/physics framework well enough to make it, but I’ve finally admitted that this is never going to happen. Instead I’ll just describe my idea here in the hope that someone else will either make it, or tell me that it already exists!

This is a 2D physics-based puzzle/obstacle game where you control a ball (aka circle). The twist that distinguishes it from similar games I’ve seen is that you have only two ways to control the ball:

  • Pushing the left or right arrow keys changes the ball’s angular velocity, that is, its rate of spin. If the ball is sitting on a surface, this will cause it to roll due to friction, but if the ball is in the air, it will just change its spin rate without changing its trajectory at all.

  • The down arrow key increases the ball’s velocity in the downwards direction. If the ball is sitting on a surface, this will cause it to bounce upwards a bit. If the ball is in the air, you can either make the next bounce higher, by adding to its downward velocity while it is already falling, or dampen a bounce, by pushing the down arrow while the ball is travelling upwards.

Those are the key mechanics. My intuition is that controlling the ball would be challenging but doable, and there would be lots of opportunities for interesting obstacles to navigate. For example, to get out of a deep pit you have to keep bouncing higher and then once you’re high enough, you impart a bit of spin so the next time you bounce you travel sideways over the lip of the pit. Or there could be a ledge so that you have to bounce once or twice while travelling towards it to get high enough to clear it, but then immediately control your subsequent bounce so you don’t bounce too high and hit some sort of hazard on the ceiling. And so on.
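
Just to make those two controls concrete, here is a very rough sketch of what they might look like in code. The Ball type, its fields, and the two step constants are all made up for illustration; the actual rolling, bouncing, friction, and gravity would be the physics engine’s job.

-- A hypothetical sketch of the two controls described above.
data Ball = Ball
  { vel  :: (Double, Double)  -- linear velocity (x, y), with +y pointing up
  , spin :: Double            -- angular velocity (rate of spin)
  }

spinStep, downStep :: Double
spinStep = 1.0   -- how much one key press changes the spin rate
downStep = 2.0   -- how much one key press adds to downward velocity

-- Left/right arrows only ever change the spin rate; whether that turns into
-- rolling depends on whether the ball is touching a surface.
pressLeft, pressRight :: Ball -> Ball
pressLeft  b = b { spin = spin b - spinStep }
pressRight b = b { spin = spin b + spinStep }

-- The down arrow always adds downward velocity, whether the ball is resting
-- (producing a small bounce) or in the air (amplifying or damping a bounce).
pressDown :: Ball -> Ball
pressDown b = let (vx, vy) = vel b in b { vel = (vx, vy - downStep) }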

Finally, of course there could be various power-ups (e.g. to make the ball faster, or sticky, or to alter gravity in various ways). Various puzzles might be based on figuring out which power-ups to use or how to use them to overcome various obstacles.

So, does this game already exist? Or does someone want to make it? (Preferably in Haskell? =)


What’s the right way to QuickCheck floating-point routines?

I got a lot of great comments on my previous post about finding roots of polynomials in Haskell. One particularly promising idea I got from commenter Jake was to give up on the idea of having no external dependencies (which, to be fair, in these days of stack and nix and cabal-v2, seems like much less of a big deal than it used to be), and use the hmatrix package to find the eigenvalues of the companion matrix, which are exactly the roots.

So I tried that, and it seems to work great! The only problem is that I still don’t know how to write a reasonable test suite. I started by making a QuickCheck property expressing the fact that if we evaluate a polynomial at the returned roots, we should get something close to zero. I evaluate the polynomial using Horner’s method, which as far as I understand has good numerical stability in addition to being efficient.

import Data.List       (foldl')
import Test.QuickCheck

polyRoots :: [Double] -> [Double]
polyRoots = ... stuff using hmatrix, see code at end of post ...

horner :: [Double] -> Double -> Double
horner as x = foldl' (\r a -> r*x + a) 0 as

_polyRoots_prop :: [Double] -> Property
_polyRoots_prop as = (length as > 1) ==>
  all ((< 1e-10) . abs . horner as) (polyRoots as)

This property passes 100 tests for quadratic polynomials, but for cubics I get failures; here’s an example. Consider the polynomial

0.1 x^3 - 15.005674483568866 x^2 - 8.597718287916894 x + 8.29

Finding its roots via hmatrix yields three:

[-1.077801388041068, 0.5106483227001805, 150.6238979010295]

Notice that the third root is much bigger in magnitude than the other two, and indeed, that third root is the problematic one. Evaluating the polynomial at these roots via Horner’s method yields

[1.2434497875801753e-14, 1.7763568394002505e-15, -1.1008971512183052e-10]

the third of which is bigger in magnitude than the 1e-10 which I had (very arbitrarily!) chosen as the cutoff for “close enough to zero”. But here’s the thing: after playing around with it a bit, it seems like this is the most accurate possible value for the root that can be represented using Double. That is, if I evaluate the polynomial at any value other than the root that was returned—even if I just change the very last digit by 1 in either direction—I get a result which is farther from zero.

If I make the magic cutoff value bigger—say, 1e-8 instead of 1e-10—then I still get similar counterexamples, but for larger-degree polynomials. I never liked the arbitrary choice of a tolerance anyway, and now it seems to me that saying “evaluating the polynomial at the computed roots should be within this absolute distance from zero” is fundamentally the wrong thing to say; depending on the polynomial, we might have to take what we can get. Some other things I could imagine saying instead include:

  • Evaluating the polynomial at the computed roots should be within some relative epsilon of zero, depending on the size/relative size of the coefficients
  • The computed roots are as accurate as possible (or close to it) in the sense that evaluating the polynomial at other numbers near the computed roots yields values which are farther from zero

…but, first of all, I don’t know if these are reasonable properties to expect; and even if they were, I’m not sure I know how to express them in Haskell! Any advice is most welcome. Are there any best practices for expressing desirable test properties for floating-point computations?
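
I’m still not sure either of these is the right thing to test, but just to show what they might look like in code, here are rough sketches of both ideas. The property names, the choice of scale factor, and the 1e-10 constants are all placeholders; succIEEE and predIEEE come from the Numeric.IEEE module of the ieee754 package.

import Numeric.IEEE (succIEEE, predIEEE)   -- from the ieee754 package

-- (1) Residual small relative to the size of the coefficients.
--     The particular scale factor here is only a guess at something sensible.
_polyRootsRel_prop :: [Double] -> Property
_polyRootsRel_prop as = (length as > 1) ==>
  all ok (polyRoots as)
  where
    scale = maximum (map abs as)
    ok r  = abs (horner as r) < 1e-10 * scale

-- (2) Each computed root is locally optimal: nudging it to the next
--     representable Double in either direction does not shrink the residual.
_polyRootsLocal_prop :: [Double] -> Property
_polyRootsLocal_prop as = (length as > 1) ==>
  all locallyBest (polyRoots as)
  where
    resid r       = abs (horner as r)
    locallyBest r = resid r <= resid (succIEEE r) && resid r <= resid (predIEEE r)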

For completeness, here is the actual code I came up with for finding roots via hmatrix. Notice there is another annoying arbitrary value in there, for deciding when a complex root is close enough to being real that we call it a real root. I’m not really sure what to do about this either.

import Numeric.LinearAlgebra    -- from hmatrix: konst, ident, (===), (|||), col, eig, toList
import Data.Complex (Complex(..))
import Data.Maybe   (mapMaybe)

-- Compute the roots of a polynomial as the eigenvalues of its companion matrix.
polyRoots :: [Double] -> [Double]
polyRoots []     = []
polyRoots (0:as) = polyRoots as
polyRoots (a:as) = mapMaybe toReal eigVals
  where
    n   = length as'
    as' = map (/a) as
    companion = (konst 0 (1,n-1) === ident (n-1)) ||| col (map negate . reverse $ as')

    eigVals = toList . fst . eig $ companion
    toReal (x :+ y)
       | abs y < 1e-10 = Just x   -- This arbitrary value is annoying too!
       | otherwise     = Nothing
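
For instance, the problematic cubic from earlier is passed in with its coefficients ordered from highest degree to lowest (the same convention horner uses); the output is the list of roots quoted earlier in the post (modulo formatting and ordering):

ghci> polyRoots [0.1, -15.005674483568866, -8.597718287916894, 8.29]
[-1.077801388041068,0.5106483227001805,150.6238979010295]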

Worstsort

Thanks for the responses to my previous post about finding roots of polynomials; I now have some new avenues to explore. But today I want to write about something completely different. I recently stumbled across this fun paper by Miguel Lerma. I realized a Haskell implementation would be very elegant, and I couldn’t pass up the opportunity to share.

Badsort

This is badsort.

> import Data.List (permutations, insert)
> 
> badsort :: Ord a => Integer -> [a] -> [a]
> badsort 0 = foldr insert []
> badsort k = head . badsort (k-1) . permutations

Claim: badsort k is a correct sorting algorithm for any natural number k. Before reading further I recommend staring at that until you understand what it’s doing and why it works.

How badsort works

Badsort is very bad. Here’s how it works:

  • badsort 0 is just plain old insertion sort.

  • badsort k xs (for k ≥ 1) creates the list of all permutations of xs, sorts them into lexicographic order, and selects the first. This works because the lexicographically smallest permutation of xs is, in fact, the one which is sorted.

    Oh, and of course, sorting the permutations lexicographically is done by a recursive call to badsort (k-1). (As an aside, I like how seamless this is in Haskell with polymorphic recursion—each recursive call is at a different type.)

Here are a few examples to show that it works:

ghci> badsort 0 [3,1,2]
  [1,2,3]

ghci> badsort 1 [3,1,2]  -- generates 6 permutations
  [1,2,3]

ghci> badsort 2 [3,1,2]  -- generates 720 permutations of 6 permutations
  [1,2,3]

badsort 3 [3,1,2], if we tried it (not recommended!!), would generate all possible permutations of the list of 720 permutations of the list of 6 permutations of [3,1,2]. The number of such permutations is, of course, 720!, which has 1747 decimal digits; there is literally not enough space in the universe to store all those permutations.

In general, badsort k is a correct sorting algorithm¹ which takes time \Omega((n!^{(k)})^2), where n!^{(k)} denotes the k-fold iterated factorial of n, that is, n!^{(0)} = n and n!^{(k+1)} = (n!^{(k)})!. (This doesn’t even take into account the time for accessing memory; at this scale we certainly can’t assume memory access takes constant time. Fetching memory from a data center in another galaxy takes a while, you know? =)

It gets worse

> worstsort :: Ord a => (Integer -> Integer) -> [a] -> [a]
> worstsort f xs = badsort (f n) xs
>   where
>     n = fromIntegral $ length xs

Worstsort is parameterized by a function on natural numbers, and calls badsort with a recursion depth given by the function f applied to the length of the list. Oh my.

Just for fun, let’s try, oh, say, the Ackermann function.

> ack :: Integer -> Integer -> Integer
> ack 0 n = n+1
> ack m 0 = ack (m-1) 1
> ack m n = ack (m-1) (ack m (n-1))
> 
> diabolicalsort :: Ord a => [a] -> [a]
> diabolicalsort = worstsort (\n -> ack n n)

Here are some fun properties of diabolicalsort (and any other instantiation of worstsort):

  • It will provably terminate in a finite amount of time for any input! Although probably the words “terminate” and “finite” should be in scare quotes.

  • In some sense I can’t quite define formally but still believe in my heart, it “doesn’t cheat” in the sense that it is always “making real progress” towards sorting the input list. If you are trying to design a slow sorting algorithm, it would be cheating, for example, to make an algorithm that spins in a useless loop for a thousand years and then does insertion sort.

  • It works in practice on lists of length 1 or 2 (see the short ghci session after this list), but length 3 is completely hopeless. ack 3 3 = 61, so we are looking at the 61-fold iterated factorial of 3, which is a… rather large number.

  • ack 4 4 is 2^{2^{2^{65536}}} - 3; there are not enough atoms in the universe to even write down this number in base 10. And then of course we take that number and iterate factorial that many times on 4. Sheesh.

  • Let us not even speak of lists of length 5.
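
For what it’s worth, here is what the two feasible sizes look like in ghci; these finish essentially instantly, since every list at every level of the recursion has at most two elements:

ghci> diabolicalsort [1]
  [1]

ghci> diabolicalsort [2,1]
  [1,2]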

The upshot of this, in the end, is that it is possible to make a “non-cheating” sorting algorithm whose running time grows faster than any computable function you care to choose (proof: take your chosen computable function and substitute it for f).


  1. It might be a fun exercise to prove this formally using a proof assistant.


Finding roots of polynomials in Haskell?

tl;dr: There are several Haskell packages one can use to find an individual root of a function on a certain interval. But I’ve had less luck finding a suitable package for finding all the roots of a given polynomial. This blog post consists of my notes and a plea for help.

The diagrams-solve package contains miscellaneous numeric solving routines used in diagrams, including tridiagonal and cyclic tridiagonal linear system solvers (used for generating cubic splines and emulating Metafont paths) as well as functions for finding roots of low-degree polynomials (quadratic, cubic, and quartic). Solving quadratics is used in a bunch of places; solving cubics is needed specifically for doing interior/exterior testing on closed loops built from cubic Bezier curves; thankfully we have never needed to solve a quartic or higher.

Unfortunately, the polynomial root solvers are pretty naive: I simply transcribed some formulas (which is why it only goes up to quartics). This works OK a lot of the time, but the formulas are very numerically unstable, and it’s not hard to come up with example inputs where the returned roots are off by quite a bit. In fact, people regularly run into this when running the test suite. I am not specifically aware of any diagrams bugs that have arisen in actual practice due to the cubic solutions being off, but it’s probably just a matter of time.

So I decided it was finally time to look into better root-finding methods. This blog post is both a plea for help and a place to write down some of the things I’ve learned so far.

There’s root finding, and then there’s root finding

The first thing to get out of the way is that when you talk about “root finding” there are (at least!) two pretty different things you could mean:

  1. Given a function f and some particular interval, or an initial guess, find a value x in the interval/close to the initial guess for which f(x) = 0.
  2. Given a polynomial with {real, complex} coefficients, find all its {real, complex} roots.

If you want to do (1), there are several nice Haskell packages you could use. The Numeric.RootFinding module from the math-functions package is probably your best bet; it implements both Ridders’ method and the Newton-Raphson method, which both attempt to find a single root of a function on a given interval. They both work on any continuous Double -> Double function, not just polynomials (Newton-Raphson also needs to know the first derivative). But they don’t work well if you don’t already have an interval in mind to search; and if you want to find all the roots you have to call these multiple times (somehow coming up with an appropriate interval each time).

As for (2), I haven’t been able to find anything that would work well for diagrams-solve. Here are my notes:

  • The dsp package has a module Polynomial.Roots containing an implementation of Laguerre’s method, which finds all the (complex) roots of a polynomial. However, the dsp package is a rather heavyweight dependency to pull in just for the root-finding code; it’s also licensed under the GPL and I’d like to avoid having to “infect” the entire diagrams ecosystem with the GPL.

  • Laguerre’s method seems like it should be fairly easy to implement myself—but writing my own solver from scratch is how I got here in the first place; I’d really like to avoid it if possible. I am far from being an expert on numerical analysis, floating-point computation, etc.

  • The hmatrix-gsl package has the Numeric.GSL.Polynomials module, which has an interface to a root-finding algorithm from GSL (apparently using something called “balanced-QR reduction”), but I’d like to avoid pulling in a C library as a dependency, and also, again, GPL.

  • From the Wikipedia page for Laguerre’s method I learned that the Jenkins-Traub algorithm is another widely used method for polynomial root-finding, and often preferred over Laguerre’s method. However, it seems rather complex, and the only Haskell implementation of Jenkins-Traub I have been able to find is this one, which seems to be just a toy implementation; I don’t even know if it works correctly.

If you know of a good place where I can find polynomial solving code in Haskell, can you point me to it? Or if you know more about numerics than me, could you maybe whip up a quick implementation of Laguerre’s method and put it up on Hackage?


Hendrix teams at ACM ICPC

Today I took five students from Hendrix to Fort Smith to compete in the Mid-Central USA Regional of the ACM International Collegiate Programming Contest (ICPC). Both teams solved five out of ten problems, and finished second and fourth out of 18 teams at our site (23rd and 25th out of 121 teams in the whole region). I’m really proud of them!

A lot of fun was had by all. Thanks to Andrew Mackey and the rest of the folks at UAFS for hosting a great contest!


What’s the Difference? video and slides

At ICFP in St. Louis I presented my paper with Kenny Foner, What’s the Difference? A Functional Pearl on Subtracting Bijections. Many people seemed to enjoy the talk and we got lots of great feedback afterwards. We will probably write up an extended version of the paper for JFP, to include extra stuff we could not fit in the paper and to incorporate things we learned at ICFP.

(Fun fact: this was actually my first ICFP talk—though not my first ICFP paper. I am glad I have a bunch of experience giving talks at this point; I can remember a time in my life when giving a talk on that stage would have been terrifying.)

Here is the video of the talk:

You can download my slides here. This version of the slides includes speaking notes—it’s not exactly what I ended up saying, but it should give you a decent idea if you just want to read the slides without watching the video.


Counting inversions with monoidal sparks

Time for me to reveal the example I had in mind that led to the generalization in my previous post. Thanks for all the interesting comments: it seems like there are some interesting connections to be explored (e.g. to the algebra of graphs, formal group laws, …?)!

This is a literate Haskell post; download it and play along in ghci! (Note it requires GHC 8.6 since I couldn’t resist making use of DerivingVia…)

> {-# LANGUAGE DefaultSignatures          #-}
> {-# LANGUAGE FlexibleInstances          #-}
> {-# LANGUAGE MultiParamTypeClasses      #-}
> {-# LANGUAGE DerivingStrategies         #-}
> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> {-# LANGUAGE DerivingVia                #-}
> {-# LANGUAGE TypeApplications           #-}
> {-# LANGUAGE ScopedTypeVariables        #-}
> 
> import Data.Semigroup
> import Data.Coerce

Consider a sequence of integers \sigma = a_1, a_2, \dots, a_n. (Actually, integers is too specific; any linearly ordered domain will do.) An inversion is a pair of positions in \sigma which are “out of order”: that is, (i,j) such that i < j but a_i > a_j. So, for example, \sigma = [3,5,1,4,2] has six inversions, namely (3,1), (3,2), (5,1), (5,4), (5,2), (4,2). (Here I’ve written the elements that are out of order rather than their positions, which doesn’t matter much when the elements are all distinct; but it’s important to keep in mind that e.g. [2,2,1] has two inversions, not one, because each copy of 2 makes an inversion with the 1.) The total number of inversions of \sigma is denoted \mathrm{inv}(\sigma).

One way to think about the inversion count is as a measure of how far away the sequence is from being sorted. In particular, bubble sort will make precisely \mathrm{inv}(\sigma) adjacent swaps while sorting \sigma. The highest possible value of \mathrm{inv}(\sigma) is n(n-1)/2, when \sigma is sorted in reverse order.

The obvious brute-force algorithm to count inversions is to use two nested loops to enumerate all possible pairs of elements, and increment a counter each time we discover a pair which is out of order. This clearly takes O(n^2) time. Can it be done any faster?
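
(Just as a baseline for comparison, here is one quick sketch of the brute-force count; it plays no further role in what follows.)

> -- Brute-force O(n^2) inversion count, for comparison only:
> -- for each element, count the later elements it is greater than.
> inversionsNaive :: Ord a => [a] -> Int
> inversionsNaive []     = 0
> inversionsNaive (a:as) = length (filter (a >) as) + inversionsNaive as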

It turns out the (generally well-known) answer is yes, using a variant of mergesort. The trick is to generalize to counting inversions and sorting the sequence at the same time. First split the sequence in half, and recursively sort and count the inversions in each half. Any inversion in the original sequence must either be entirely contained in one of the two halves (these will be counted by the recursive calls), or have one endpoint in the left half and one in the right. One key observation at this point is that any inversion with one endpoint in each half will still be an inversion even after independently sorting the two halves. The other key observation is that we can merge the two sorted subsequences and count inversions between them in linear time. Use the usual two-finger algorithm for merging two sorted sequences; each time we take an element from the right subsequence, it’s because it is less than all the remaining elements in the left subsequence, but it was to the right of all of them, so we can add the length of the remaining left subsequence to the inversion count. Intuitively, it’s this ability to count a bunch of inversions in one step which allows this algorithm to be more efficient, since any algorithm which only ever increments an inversion counter is doomed to be O(n^2) no matter how cleverly it splits up the counting. In the end, the number of total inversions is the sum of the inversions counted recursively in the two sublists, plus any inversions between the two sublists.

Here’s some Haskell code implementing this sorted-merge-and-inversion-count. We have to be a bit careful because we don’t want to call length on the remaining sublist at every step (that would ruin the asymptotic performance!), so we precompute the length and pass along the length of the left subsequence as an extra parameter which we keep up-to-date as we recurse.

> -- Precondition: the input lists are sorted.
> -- Output the sorted merge of the two lists, and the number of pairs
> -- (a,b) such that a \in xs, b \in ys with a > b.
> mergeAndCount :: Ord a => [a] -> [a] -> ([a], Int)
> mergeAndCount xs ys = go xs (length xs) ys
>   -- precondition/invariant for go xs n ys:   n == length xs
>   where
>     go [] _ ys = (ys, 0)
>     go xs _ [] = (xs, 0)
>     go (x:xs) n (y:ys)
>       | x <= y    = let (m, i) = go xs (n-1) (y:ys) in (x:m, i)
>       | otherwise = let (m, i) = go (x:xs) n ys     in (y:m, i + n)
> 
> merge :: Ord a => [a] -> [a] -> [a]
> merge xs ys = fst (mergeAndCount xs ys)
> 
> inversionsBetween :: Ord a => [a] -> [a] -> Int
> inversionsBetween xs ys = snd (mergeAndCount xs ys)

Do you see how this is an instance of the sparky monoid construction in my previous post? A is the set of sorted lists with merge as the monoid operation; B is the natural numbers under addition. The spark operation takes two sorted lists and counts the number of inversions between them. So the monoid on pairs A \times B merges the lists, and adds the inversion counts together with the number of inversions between the two lists.

We have to verify that this satisfies the laws: let a be any sorted list, then we need

  • a \cdot \varepsilon_A = \varepsilon_B, that is, a `inversionsBetween` [] = 0. This is true since there are never any inversions between a and an empty list. Likewise for \varepsilon_A \cdot a = \varepsilon_B.

  • a `inversionsBetween` (a1 `merge` a2) == (a `inversionsBetween` a1) + (a `inversionsBetween` a2). This is also true since a1 `merge` a2 contains the same elements as a1 and a2: any inversion between a and a1 `merge` a2 will be an inversion between a and a1, or between a and a2, and vice versa. The same reasoning shows that (a1 `merge` a2) `inversionsBetween` a == (a1 `inversionsBetween` a) + (a2 `inversionsBetween` a).

Note that A and B are commutative monoids, but the spark operation isn’t commutative; in fact, a pair of distinct elements is an inversion between a1 and a2 precisely when it is not an inversion between a2 and a1. Note also that A and B aren’t idempotent; for example, merging a sorted list with itself produces not the same list, but a new list with two copies of each element.
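
A quick ghci check of that asymmetry, using inversionsBetween from above:

λ> inversionsBetween [1,3] [2,4]
1
λ> inversionsBetween [2,4] [1,3]
3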

So let’s see some more Haskell code to implement the entire algorithm in a nicely modular way. First, let’s encode sparky monoids in general. The Sparky class is for pairs of types with a spark operation. As we saw in the example above, sometimes it may be more efficient to compute a_1 \diamond a_2 and the spark a_1 \cdot a_2 at the same time, so we bake that possibility into the class.

> class Sparky a b where

The basic spark operation, with a default implementation that projects the result out of the prodSpark method.

>   (<.>) :: a -> a -> b
>   a1 <.> a2 = snd (prodSpark a1 a2)

prodSpark does the monoidal product and spark at the same time, with a default implementation that just does them separately.

>   prodSpark :: a -> a -> (a,b)
>   default prodSpark :: Semigroup a => a -> a -> (a,b)
>   prodSpark a1 a2 = (a1 <> a2, a1 <.> a2)

Finally we can specify that we have to implement one or the other of these methods.

>   {-# MINIMAL (<.>) | prodSpark #-}

Sparked a b is just a pair type, but with Semigroup and Monoid instances that implement the sparky product.

> data Sparked a b = S { getA :: a, getSpark :: b }
>   deriving Show
> 
> class Semigroup a => CommutativeSemigroup a
> class (Monoid a, CommutativeSemigroup a) => CommutativeMonoid a
> 
> instance (Semigroup a, CommutativeSemigroup b, Sparky a b) => Semigroup (Sparked a b) where
>   S a1 b1 <> S a2 b2 = S a' (b1 <> b2 <> b')
>     where (a', b') = prodSpark a1 a2
> 
> instance (Monoid a, CommutativeMonoid b, Sparky a b) => Monoid (Sparked a b) where
>   mempty = S mempty mempty

Now we can make instances for sorted lists under merge…

> newtype Sorted a = Sorted [a]
>   deriving Show
> 
> instance Ord a => Semigroup (Sorted a) where
>   Sorted xs <> Sorted ys = Sorted (merge xs ys)
> instance Ord a => Monoid (Sorted a) where
>   mempty = Sorted []
> 
> instance Ord a => CommutativeSemigroup (Sorted a)
> instance Ord a => CommutativeMonoid (Sorted a)

…and for inversion counts.

> newtype InvCount = InvCount Int
>   deriving newtype Num
>   -- Show is derived via Sum Int as well, so inversion counts display as
>   -- Sum values in the ghci session below.
>   deriving (Semigroup, Monoid, Show) via Sum Int
> 
> instance CommutativeSemigroup InvCount
> instance CommutativeMonoid InvCount

Finally we make the Sparky (Sorted a) InvCount instance, which is just mergeAndCount (some conversion between newtypes is required, but we can get the compiler to do it automagically via coerce and a bit of explicit type application).

> instance Ord a => Sparky (Sorted a) InvCount where
>   prodSpark = coerce (mergeAndCount @a)

And here’s a function to turn a single a value into a sorted singleton list paired with an inversion count of zero, which will come in handy later.

> single :: a -> Sparked (Sorted a) InvCount
> single a = S (Sorted [a]) 0

Finally, we can make some generic infrastructure for doing monoidal folds. First, Parens a encodes lists of a which have been explicitly associated, i.e. fully parenthesized:

> data Parens a = Leaf a | Node (Parens a) (Parens a)
>   deriving Show

We can make a generic fold for Parens a values, which maps each Leaf into the result type b, and replaces each Node with a binary operation:

> foldParens :: (a -> b) -> (b -> b -> b) -> Parens a -> b
> foldParens lf _  (Leaf a)   = lf a
> foldParens lf nd (Node l r) = nd (foldParens lf nd l) (foldParens lf nd r)

Now for a function which splits a list in half recursively to produce a balanced parenthesization.

> balanced :: [a] -> Parens a
> balanced []  = error "List must be nonempty"
> balanced [a] = Leaf a
> balanced as  = Node (balanced as1) (balanced as2)
>   where (as1, as2) = splitAt (length as `div` 2) as

Finally, we can make a balanced variant of foldMap: instead of just mapping a function over a list and then reducing with mconcat, as foldMap does, it first creates a balanced parenthesization for the list and then reduces via the given monoid. This will always give the same result as foldMap due to associativity, but in some cases it may be more efficient.

> foldMapB :: Monoid m => (e -> m) -> [e] -> m
> foldMapB leaf = foldParens leaf (<>) . balanced

Let’s try it out!

λ> :set +s
λ> getSpark $ foldMap single [3000, 2999 .. 1 :: Int]
Sum {getSum = 4498500}
(34.94 secs, 3,469,354,896 bytes)
λ> getSpark $ foldMapB single [3000, 2999 .. 1 :: Int]
Sum {getSum = 4498500}
(0.09 secs, 20,016,936 bytes)

Empirically, it does seem that we are getting quadratic performance with normal foldMap, but O(n \log n) with foldMapB. We can verify that we are getting the correct inversion count in either case, since we know there should be n(n-1)/2 when the list is reversed, and sure enough, 3000 \cdot 2999 / 2 = 4498500.
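
Putting the pieces together, the whole pipeline can be wrapped up in one little helper (the name is just for illustration, and, like foldMapB, it assumes a non-empty input list). It recovers, for example, the six inversions of the sequence from the top of the post:

> countInversions :: Ord a => [a] -> Int
> countInversions = coerce . getSpark . foldMapB single

λ> countInversions [3,5,1,4,2]
6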
