Thanks for the responses to my previous post about finding roots of polynomials; I now have some new avenues to explore. But today I want to write about something completely different. I recently stumbled across this fun paper by Miguel Lerma. I realized a Haskell implementation would be very elegant, and I couldn’t pass up the opportunity to share.
This is badsort.
> import Data.List (permutations, insert)
>
> badsort :: Ord a => Integer -> [a] -> [a]
> badsort 0 = foldr insert []
> badsort k = head . badsort (k-1) . permutations
badsort k is a correct sorting algorithm for any natural number k. Before reading further I recommend staring at that until you understand what it’s doing and why it works.
How badsort works
Badsort is very bad. Here’s how it works:
- badsort 0 is just plain old insertion sort.
- badsort k xs creates the list of all permutations of xs, sorts them into lexicographic order, and selects the first. This works because the lexicographically smallest permutation of xs is, in fact, the one which is sorted.
- Oh, and of course, sorting the permutations lexicographically is done by a recursive call to badsort (k-1). (As an aside, I like how seamless this is in Haskell with polymorphic recursion: each recursive call is at a different type.)
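The key fact, that the lexicographically smallest permutation of a list is the sorted one, can be checked in isolation. Here is a minimal sketch, with the library sort standing in for the recursive badsort call (smallestPerm is a name invented for this sketch):

```haskell
import Data.List (permutations, sort)

-- Among all permutations of a list, the lexicographically smallest
-- (under the standard Ord instance for lists) is the sorted one, so
-- sorting the permutations and taking the head recovers `sort`.
smallestPerm :: Ord a => [a] -> [a]
smallestPerm = head . sort . permutations
```

For example, smallestPerm [3,1,2] gives [1,2,3], and smallestPerm agrees with sort on any (finite) list.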
Here are a few examples to show that it works:
ghci> badsort 0 [3,1,2]
[1,2,3]

ghci> badsort 1 [3,1,2]   -- generates 6 permutations
[1,2,3]

ghci> badsort 2 [3,1,2]   -- generates 720 permutations of 6 permutations
[1,2,3]
badsort 3 [3,1,2], if we tried it (not recommended!!), would generate all possible permutations of the list of 720 permutations of the list of 6 permutations of [3,1,2]. The number of such permutations is, of course, 720!, which has 1747 decimal digits; there is literally not enough space in the universe to store all those permutations.
badsort k is a correct sorting algorithm¹ which takes time Ω(n!^(k)), where n!^(k) denotes the k-fold iterated factorial of n, that is, n!^(0) = n and n!^(k+1) = (n!^(k))!. (This doesn’t even take into account the time for accessing memory; at this scale we certainly can’t assume memory access takes constant time. Fetching memory from a data center in another galaxy takes a while, you know? =)
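For concreteness, the iterated factorial is easy to compute for very small inputs. This is just a sketch; itfac and factorial are names invented here, not part of the sorting code:

```haskell
-- k-fold iterated factorial: itfac 0 n = n, and
-- itfac (k+1) n = (itfac k n)!
-- (itfac/factorial are illustrative names, not from the post's code.)
factorial :: Integer -> Integer
factorial n = product [1..n]

itfac :: Integer -> Integer -> Integer
itfac 0 n = n
itfac k n = factorial (itfac (k-1) n)
```

For instance, itfac 1 3 = 6 and itfac 2 3 = 720; one more iteration already produces a number with well over a thousand digits.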
It gets worse
> worstsort :: Ord a => (Integer -> Integer) -> [a] -> [a]
> worstsort f xs = badsort (f n) xs
>   where
>     n = fromIntegral $ length xs
Worstsort is parameterized by a function on natural numbers, and calls badsort with a recursion depth given by the function applied to the length of the list. Oh my.
Just for fun, let’s try, oh, say, the Ackermann function.
> ack :: Integer -> Integer -> Integer > ack 0 n = n+1 > ack m 0 = ack (m-1) 1 > ack m n = ack (m-1) (ack m (n-1)) > > diabolicalsort :: Ord a => [a] -> [a] > diabolicalsort = worstsort (\n -> ack n n)
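As a sanity check that diabolicalsort really does sort tiny inputs, here are the definitions restated so the snippet compiles on its own:

```haskell
import Data.List (permutations, insert)

-- Self-contained restatement of badsort/worstsort/diabolicalsort.
badsort :: Ord a => Integer -> [a] -> [a]
badsort 0 = foldr insert []
badsort k = head . badsort (k-1) . permutations

worstsort :: Ord a => (Integer -> Integer) -> [a] -> [a]
worstsort f xs = badsort (f n) xs
  where n = fromIntegral (length xs)

ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))

diabolicalsort :: Ord a => [a] -> [a]
diabolicalsort = worstsort (\n -> ack n n)
```

A two-element list is safe: its length is 2, ack 2 2 = 7, and badsort 7 on a two-element list only ever permutes two-element lists at each level, so it finishes quickly.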
Here are some fun properties of diabolicalsort (and any other instantiation of worstsort):

- It will provably terminate in a finite amount of time for any input! Although probably the words “terminate” and “finite” should be in scare quotes.
- In some sense I can’t quite define formally but still believe in my heart, it “doesn’t cheat”, in the sense that it is always “making real progress” towards sorting the input list. If you are trying to design a slow sorting algorithm, it would be cheating, for example, to make an algorithm that spins in a useless loop for a thousand years and then does insertion sort.
- It works in practice on lists of length 1 or 2, but length 3 is completely hopeless. ack 3 3 = 61, so we are looking at the 61-fold iterated factorial of 3, which is a… rather large number.
- On lists of length 4, ack 4 4 is 2^(2^(2^65536)) − 3; there are not enough atoms in the universe to even write down this number in base 10. And then of course we take that number and iterate factorial that many times on 4. Sheesh.

Let us not even speak of lists of length 5.
The upshot of this, in the end, is that it is possible to make a “non-cheating” sorting algorithm whose running time grows faster than any computable function you care to choose (proof: take your chosen computable function and substitute it for the function passed to worstsort).
¹ It might be a fun exercise to prove this formally using a proof assistant.