Identifying outdated packages in cabal install plans

Every time I build a Haskell package—whether using cabal or cabal-dev, whether something from Hackage or a development version of my own package—I always do a --dry-run first, and inspect the install plan to make sure it looks reasonable. I’m sure I’m not the only person who does this (in fact, if you don’t do this, perhaps you should).

But what is meant by "reasonable"? Really, what I look for are versions of packages being installed which are not the latest versions available on Hackage. Sometimes this is fine, if the package I am installing, or one of its dependencies, legitimately can’t use the most cutting-edge version of some package. But sometimes it indicates a problem—the upper bound on some dependency needs to be updated. (Note that I’m not trying to get into the upper bounds vs. no upper bounds debate here; just stating facts.)

To help automate this process, I threw together a little tool that I’ve just uploaded to Hackage: highlight-versions. If you take an install plan generated by --dry-run (or any output containing package identifiers like foo-0.3.2) and pipe it through highlight-versions, it will highlight any packages that don’t correspond to the latest version on Hackage.

For example, suppose running cabal-dev install --dry-run generates the following output:

$ cabal-dev install --dry-run
Resolving dependencies...
In order, the following would be installed (use -v for more details):
Boolean-0.0.1
NumInstances-1.0
colour-2.3.3
dlist-0.5
data-default-0.5.0
glib-0.12.3.1
newtype-0.2
semigroups-0.8.4
split-0.1.4.3
transformers-0.2.2.0
cmdargs-0.9.7
comonad-3.0.0.2
contravariant-0.2.0.2
mtl-2.0.1.0
cairo-0.12.3.1
gio-0.12.3
pango-0.12.3
gtk-0.12.3.1
semigroupoids-3.0
void-0.5.7
MemoTrie-0.5
vector-space-0.8.2
active-0.1.0.2
vector-space-points-0.1.1.1
diagrams-core-0.5.0.1
diagrams-lib-0.5.0.1
diagrams-cairo-0.5.1

This is a big wall of text, and nothing is obvious just from staring at it. But piping the output through highlight-versions gives us some helpful information:

$ cabal-dev install --dry-run | highlight-versions
Resolving dependencies...
In order, the following would be installed (use -v for more details):
Boolean-0.0.1
NumInstances-1.0
colour-2.3.3
dlist-0.5
data-default-0.5.0
glib-0.12.3.1
newtype-0.2
semigroups-0.8.4
split-0.1.4.3 (0.2.0.0)
transformers-0.2.2.0 (0.3.0.0)
cmdargs-0.9.7 (0.10)
comonad-3.0.0.2
contravariant-0.2.0.2
mtl-2.0.1.0 (2.1.2)
cairo-0.12.3.1
gio-0.12.3
pango-0.12.3
gtk-0.12.3.1
semigroupoids-3.0
void-0.5.7
MemoTrie-0.5
vector-space-0.8.2
active-0.1.0.2
vector-space-points-0.1.1.1
diagrams-core-0.5.0.1
diagrams-lib-0.5.0.1
diagrams-cairo-0.5.1 (0.5.0.2)

We can immediately see that there are newer versions of the split, transformers, cmdargs, and mtl packages (and precisely what those newer versions are). We can also see that the version of diagrams-cairo to be installed is newer than the version on Hackage (since this is a development version). These aren’t necessarily problems in and of themselves, but in my experience, if you don’t know why cabal or cabal-dev has chosen outdated versions of some packages, it’s probably worth investigating. (--dry-run -v3 can help here.) This is also useful when uploading new versions of packages, to make sure they work with the latest and greatest stuff on Hackage. In this case the problems are just because of some changes I made to the .cabal file for the purposes of this blog post, making some upper bounds too restrictive, but in general it could be due to other dependencies as well.


Unordered tuples and type algebra

At Hac Phi a few weekends ago (which, by the way, was awesome), Dan Doel told me about a certain curiosity in type algebra, and we ended up working out a bunch more details together with Gershom Bazerman, Scott Walck, and probably a couple others I’m forgetting. I decided to write up what we discovered. I have no idea what (if any) of this is really novel, but it was new to me at least.

The Setup

I’ll assume you’re already familiar with the basic ideas of the algebra of types: 0 represents the void type, 1 represents the unit type, sum is tagged union, product is (ordered) pairing, and so on.

Given a type T, since product represents pairing, we can write T^n to represent ordered n-tuples of T values. Well, how about unordered n-tuples of T values? Since there are n! possible ways to order n values, it seems that perhaps we could somehow divide by n! to "quotient out" by the symmetries we want to disregard: T^n/n!.

If you’ve never seen this sort of thing before, it is certainly not at all obvious that this makes any sense! But bear with me for a minute. At the very least, we can say that if this is to make sense we ought to be able to use these sorts of type expressions to calculate the number of inhabitants of a finite type. So, let’s try it. For now let’s take T = Bool = 2. I’ll write the elements of Bool as T and F.

There are clearly four different ordered pairs of Bool: TT, TF, FT, and FF.

T^n is supposed to represent ordered n-tuples of T, and indeed, 2^2 = 4. How about unordered pairs? Since we don’t care about the order I’ll just choose to canonically sort all the Ts to the front, followed by all the Fs. It’s not hard to see that there are three unordered pairs of Bool: TT, TF, and FF.

(To distinguish them from ordered tuples, I’ll always write unordered tuples in this canonically sorted form.)

However, when we substitute T = n = 2 into T^n/n!, we get not 3, but (2^2)/2 = 2. What gives?
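The mismatch is easy to see as a two-line brute-force computation (a quick illustrative sketch of my own; sorting each pair puts it in canonical form, so nub counts distinct unordered pairs):

```haskell
import Data.List (nub, sort)

-- All ordered pairs of Bool, and the unordered pairs obtained by
-- canonically sorting each pair and removing duplicates.
orderedPairs :: [[Bool]]
orderedPairs = [[x, y] | x <- [False, True], y <- [False, True]]

unorderedPairs :: [[Bool]]
unorderedPairs = nub (map sort orderedPairs)

-- length orderedPairs   == 4, matching 2^2
-- length unorderedPairs == 3, not (2^2) `div` 2 == 2
```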

The problem

The problem is that T^n/n! is only correct if all the values in the tuples are distinct. Then we overcount each unordered tuple by exactly a factor of n!—namely, all n! permutations of the tuple, each of which is distinct as an ordered tuple. However, when some of the tuples have repeated elements, there can be fewer than n! distinct ordered variants of a given unordered tuple. For example, the unordered 3-tuple TTF has only 3 (rather than 3! = 6) ordered variants, namely TTF, TFT, and FTT, because the two Ts are identical.

(As a small aside, when working in the theory of combinatorial species one is concerned not with actual data structures but with data structure shapes full of distinct labels—and the fact that the labels are distinct means that T^n/n! is (in some sense) actually the correct way to talk about unordered tuples within the theory of species. More on this in another post, perhaps.)

If T^n/n! is not the correct expression for unordered tuples, what is? In fact, Dan started off this whole thing by telling me the answer to this question—but he didn’t understand why it is the answer; we then proceeded to figure it out. For the purposes of pedagogy I’ll reverse the process, working up from first principles to arrive at the answer.

Counting unordered tuples

The first order of business is to count unordered tuples. Given a set T, how many unordered n-tuples are there with elements drawn from T (where repetition is allowed)? Again, since the order doesn’t matter, we can canonically sort the elements of T with all copies of the first element first, then all copies of the second element, and so on. For example, writing T = 4 = \{a, b, c, d\}, here is an unordered 8-tuple with elements drawn from T:

a a a b b b d d

Now imagine placing "dividers" to indicate the places where a changes to b, b changes to c, and so on:

a a a | b b b | | d d

(Note how there are two adjacent dividers between the last b and the first d, indicating that there are no occurrences of c.) In fact, given that the elements are canonically sorted, it is unnecessary to specify their actual identities:

● ● ● | ● ● ● | | ● ●

So, we can see that unordered 8-tuples with elements from T = 4 correspond bijectively to such arrangements of eight dots and three dividers. In general, unordered n-tuples are in bijection with arrangements of n dots and |T|-1 dividers, and there are as many such arrangements as ways to choose the positions of the |T|-1 dividers from among the n+|T|-1 total objects, that is,

\displaystyle \binom{n+|T|-1}{|T|-1}

(As an aside, this is the same as asking for the number of ways to place n indistinguishable balls in |T| distinguishable boxes—the balls in box i indicate the multiplicity of element i in the unordered n-tuple. This is #4 in Gian-Carlo Rota’s "twelvefold way", and is discussed on page 15 of Richard Stanley’s Enumerative Combinatorics, Volume I. See also this blog post I wrote explaining this and related ideas).

So what?

And now for a little algebra:

\displaystyle \begin{array}{cl} & \displaystyle \binom{n+|T|-1}{|T|-1} \\ & \\ = & \displaystyle \frac{(n+|T|-1)!}{n!(|T|-1)!} \\ & \\ = & \displaystyle \frac{(n+|T|-1)(n+|T|-2) \cdots (|T|)}{n!} \\ & \\ = & \displaystyle \frac{|T|(|T|+1)(|T|+2) \cdots (|T| + n-1)}{n!}\end{array}

The expression on top of the fraction is known as a rising factorial and can be abbreviated |T|^{\overline{n}}. (In general, x^{\overline{n}} = x(x+1)(x+2) \dots (x+n-1), so 1^{\overline{n}} = n!.) In the end, we have discovered that the number of unordered n-tuples of T is |T|^{\overline{n}}/n!, which looks surprisingly similar to the naïve but incorrect T^n / n!. In fact, the similarity is no coincidence, and there are good reasons for using a notation for rising factorial similar to the notation for normal powers, as we shall soon see.

And indeed, the correct type expression for unordered n-tuples of values from T is T^{\overline{n}} / n! = T(T+1)(T+2) \dots (T+(n-1))/n!. This means that if we consider the set of ordered n-tuples where the first element is drawn from T, the second element from T plus some extra distinguished element, the third from T plus two extra elements, and so on, there will be exactly n! of them for every unordered n-tuple with elements drawn from T. (In fact, we would even expect there to be some nice function from these "extended n-tuples" to unordered n-tuples such that the preimage of every unordered n-tuple is a set of size exactly n!—just because combinatorics usually works out like that. Finding such a correspondence is left as an exercise for the reader.)
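We can sanity-check all of this with a few lines of Haskell (illustrative code of my own, counting unordered n-tuples three ways: brute force, the binomial coefficient, and the rising factorial):

```haskell
import Data.List (nub, sort)

-- Brute force: generate all ordered n-tuples over [1..t], sort each
-- one to a canonical form, and count the distinct results.
bruteForce :: Int -> Int -> Int
bruteForce t n = length (nub (map sort (tuples n)))
  where
    tuples 0 = [[]]
    tuples k = [x : xs | x <- [1 .. t], xs <- tuples (k - 1)]

choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

rising :: Integer -> Integer -> Integer
rising x n = product [x .. x + n - 1]

-- The two closed forms: C(n+t-1, t-1) and t^(rising n) / n!.
binomialCount, risingCount :: Integer -> Integer -> Integer
binomialCount t n = choose (n + t - 1) (t - 1)
risingCount   t n = rising t n `div` product [1 .. n]
```

For instance, all three give 165 for unordered 8-tuples over a 4-element set.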

A detour

Before we get back to talking about T^{\overline{n}}/n!, a slight detour. Consider the variant type expression T^{\underline{n}}/n!, where x^{\underline{n}} = x(x-1)(x-2) \dots (x-n+1) is a falling factorial. What (if anything) does it represent?

Subtraction of types is problematic in general (without resorting to virtual species), but in this case we can interpret T(T-1)(T-2) \dots as an ordered n-tuple with no duplicate values. We can choose any element of T to go first, then any but the first element to go second, then any but the first two, and so on. This can in fact be made rigorous from the perspective of types, without involving virtual species—see Dan Piponi’s blog post on the subject.
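Again, a quick check (my own illustrative code): counting ordered tuples with no duplicate values directly agrees with the falling factorial, and dividing by n! recovers the binomial coefficient, as you would expect for subsets.

```haskell
-- Ordered n-tuples over [1..t] with all elements distinct: choose
-- any element first, then any remaining element, and so on.
distinctTuples :: Int -> Int -> Int
distinctTuples t n = length (tuples n [1 .. t])
  where
    tuples 0 _  = [[]]
    tuples k xs = [x : ys | x <- xs, ys <- tuples (k - 1) (filter (/= x) xs)]

-- The falling factorial t^(falling n) = t(t-1)...(t-n+1).
falling :: Integer -> Integer -> Integer
falling x n = product [x - n + 1 .. x]
```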

Infinite sums and discrete calculus

And now for some fun. If we sum T^{\underline{n}}/n! over all n, it ought to represent the type of unordered tuples with distinct values of any length—that is, the type of sets over T.

\displaystyle S(T) = 1 + T + \frac{T^{\underline{2}}}{2!} + \frac{T^{\underline{3}}}{3!} + \dots

Can we find a more compact representation for S(T)?

Consider the forward difference operator \Delta, defined by

\displaystyle \Delta f(x) = f(x+1) - f(x)

This is a discrete analogue of the familiar (continuous) differentiation operator from calculus. (For a good introduction to discrete calculus, see Graham et al.'s Concrete Mathematics, one of my favorite math/CS books ever. See also the Wikipedia page on finite differences.) For our purposes we simply note that

\displaystyle \Delta x^{\underline{n}} = n x^{\underline{n-1}}

(proving this is not hard, and is left as an exercise). This is what justifies the notation for falling factorial: it is in some sense a discrete analogue of exponentiation!

The reason to bring \Delta into the picture is that given the above identity for \Delta applied to falling factorials, it is not hard to see that S(T) is its own finite difference:

\displaystyle \Delta S(T) = S(T)

Expanding, we get S(T+1) - S(T) = S(T) and hence S(T+1) = 2 S(T). (Yes, I know, there’s that pesky subtraction of types again; in the time-honored tradition of combinatorics we’ll simply pretend it makes sense and trust there is a way to make it more formal!) Solving this recurrence together with the initial condition S(0) = 1 yields

\displaystyle S(T) = 2^T

which we can interpret as the space of functions from T to Bool—that is, the type of sets over T, just like it should be! (Note that replacing falling factorial with exponentiation yields something which is its own derivative, with a solution of e^T—which indeed represents the species of sets, though it’s harder to see what e has to do with anything.)
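For a finite type of size t we can even check this directly, since t^{\underline{n}} vanishes as soon as n > t, so the series becomes a finite sum (a quick sketch of my own; each term t^{\underline{n}}/n! is just the binomial coefficient C(t,n)):

```haskell
-- The falling factorial t^(falling n); it is 0 once n > t, so for
-- finite t the series for S(T) is really a finite sum.
falling :: Integer -> Integer -> Integer
falling x n = product [x - n + 1 .. x]

-- Summing t^(falling n) / n! over n: each term is the binomial
-- coefficient C(t, n), so the total should come out to 2^t.
setsOf :: Integer -> Integer
setsOf t = sum [falling t n `div` product [1 .. n] | n <- [0 .. t]]
```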

Enough with the detour. What if we sum over T^{\overline{n}}/n!?

\displaystyle M(T) = 1 + T + \frac{T^{\overline{2}}}{2!} + \frac{T^{\overline{3}}}{3!} + \dots

There’s a backward difference operator, \nabla f(x) = f(x) - f(x-1), with the property that

\displaystyle \nabla x^{\overline{n}} = n x^{\overline{n-1}}

Hence \nabla M(T) = M(T), i.e. M(T) - M(T-1) = M(T), but here I am a bit stuck. Trying to solve this in a similar manner as before yields M(T-1) = 0, which seems bogus. 0 is certainly not a solution, since M(0) = 1. I think in this case we are actually not justified in subtracting M(T) from both sides, though I’d be hard-pressed to explain exactly why.

Intuitively, M(T) ought to represent unordered tuples of T of any length—that is, the type of multisets over T. This is isomorphic to the space of functions T \to \mathbb{N}, specifying the multiplicity of each element. I claim that \mathbb{N}^T is in fact a solution to the above equation—though I don’t really know how to derive it (or even what it really means).

\displaystyle \begin{array}{cl} & \displaystyle \mathbb{N}^T - \mathbb{N}^{T-1} \\ & \\ \cong & \displaystyle \mathbb{N}^{T-1}(\mathbb{N} - 1) \\ & \\ \cong & \displaystyle \mathbb{N}^{T-1} \mathbb{N} \\ & \\ \cong & \displaystyle \mathbb{N}^T \end{array}

The middle step notes that if you take one element away from the natural numbers, you are left with something which is still isomorphic to the natural numbers. I believe the above can all be made perfectly rigorous, but this blog post is already much too long as it is.


Monads: Easy or Hard?

Executive summary: they are actually both (or neither). It is easy to learn their definition but hard to grasp the consequences. Or we might say they are easy to know and hard to understand (grok). It is vitally important for both teachers and learners to understand and make this distinction.

Background

The plethora of monad tutorials (well-covered elsewhere) has been intimidating Haskell beginners for years. Beginners understandably assume that anything with over thirty wildly different tutorials must be very difficult to learn. Sadly, this attitude is a self-fulfilling prophecy: it is difficult to learn something which intimidates you, independently of its intrinsic difficulty.

Well-meaning members of the Haskell community have sometimes reacted by going to the other extreme, assuring beginners that monads are actually quite easy, and there’s no reason for all the hype. Unfortunately, this can backfire: if someone is really convinced that monads are easy, they are going to be in for a demoralizing shock.

Both attitudes, however, are wrong! The key is to reject the dualistic easy/hard distinction and replace it with something more nuanced.

Profundity

Monads are not hard; they are not easy; so what are they? They are profound. By this I mean that they have very large implications relative to the size of their definition. Here are some examples of other things that are profound in the same sense:

  • Board games like checkers, chess, go, and hex. It takes only a few minutes to learn the rules of these games, yet knowing the rules and being a good player are completely different. The simple rules of these games have many nontrivial and nonobvious "emergent" properties—which, of course, is what makes the games so interesting.

  • The lambda calculus, and Turing machines. Explaining how these work takes all of 30 minutes. Studying the implications, of course, has spawned multiple fields of active research.

  • Algebraic theories like group theory or category theory. Anyone with only a basic mathematical background can be taught the axioms of a group or a category. But, of course, these particular sets of axioms give rise to incredibly rich structure.

Thankfully, monads (as they arise in Haskell) are much less profound than these examples! But I think the analogy is correct. Understanding the definition of monads is simple: a monad is any type m of kind * -> * which supports the operations

return :: a -> m a
(>>=)  :: m a -> (a -> m b) -> m b

subject to some laws. But to really grok all the consequences—common example instances and how they are implemented and used; the relationship to Functor, Applicative, and Monoid; the relationship to the alternative definition in terms of join; all the common combinators like sequence, mapM, (>=>), etc.; and, in the end, to get some sort of intuitive sense for what the monad abstraction is really like—all that takes a long time and a lot of hard work!
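To make the gap concrete: even the humble Maybe instance, whose definition fits in a couple of lines, immediately gives you a pile of derived machinery for free. A small sketch (safeDiv and the other names here are my own illustrative inventions):

```haskell
import Control.Monad ((>=>))

-- Safe division: fails (Nothing) on division by zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chaining with (>>=): the whole computation fails if any step does.
example1 :: Maybe Int
example1 = safeDiv 100 5 >>= safeDiv 60   -- Just 3

-- Kleisli composition with (>=>) strings such functions together.
halveTwice :: Int -> Maybe Int
halveTwice = flip safeDiv 2 >=> flip safeDiv 2

-- mapM succeeds only if every element succeeds.
example2, example3 :: Maybe [Int]
example2 = mapM (safeDiv 12) [1, 2, 3]    -- Just [12,6,4]
example3 = mapM (safeDiv 12) [1, 0, 3]    -- Nothing
```

Knowing the types of return and (>>=) is easy; seeing why (>=>), mapM, and friends behave the way they do across many instances is where the hard work lives.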

Implications for pedagogy

So what does this all mean, practically speaking?

If you are learning about monads: take heart! There is a simple, easily accessible foundation from which to get started, namely, the definition of the Monad type class. Study the Monad methods and their types—and come back to this basic foundation whenever you get confused. But also realize that to gain deep understanding will take hard work over a relatively long period of time.

If you are teaching others about monads: encourage those who are intimidated—but don’t go overboard. Keep directing students back to the fundamental definition, but also provide them with many opportunities to engage with examples. (There is something else important here—examples of implementing monads are on a very different level than examples of using them, and in my experience it’s vitally important that students are explicitly taught to think about the two levels and keep them separate—but that’s a subject for another post.)

Postscript

There’s probably nothing here that hasn’t been said by someone somewhere before, and it seems rather obvious now that I have written it, but I really do think this perspective is sometimes missing from the discussion of pedagogy and Haskell. This perspective really crystallized for me as I was teaching an introductory Haskell course last spring.

Of course, this is much in the same vein as my previous post Abstraction, intuition, and the “monad tutorial fallacy”. I still believe what I wrote there, except for one thing: that post makes it seem like it is possible to have a sudden "aha!" moment where one goes from not understanding monads to understanding monads. I now realize this is wrong, just like no one suddenly goes from "not understanding chess" to "understanding chess". The learning process is simply discontinuous; learners often experience big, sudden jumps in comprehension, and it’s easy to interpret these jumps as going from "not understanding" to "understanding". Immediately after making such a jump, it’s easy to believe that one had no prior understanding at all (which is probably not true), and that there’s nothing more to understand (which is certainly not true).

As a final note, I’ve specifically discussed monads here, but of course everything I’ve said applies to many other concepts as well, such as Applicative, continuations, GADTs, iteratees/pipes/conduits, and so on. I’d love for this to spark more explicit, thoughtful discussion of pedagogy in the community, and not just around monads.


Diagrams mentoring at Hac Phi

Hac Phi is coming up in less than a month, August 3-5 here in Philadelphia: three days of hanging out with awesome people, eating good food, and hacking on Haskell projects. Judging by past instances, I promise you it will be super fun, whether you just started programming in Haskell yesterday or have been at it for twenty years.

In case you need some extra encouragement and the diagrams project sounds interesting to you, let me explicitly state that if you attend Hac Phi and want to help with the diagrams project, I will personally sit down with you at least once, more likely multiple times, over the course of the weekend and walk through code with you, do some pair programming, whatever is appropriate to get you to the next level of productivity with diagrams and/or Haskell. This applies no matter how much experience you have with diagrams or with Haskell—from absolutely none to quite a lot.

In fact, I would be perfectly happy to spend the entire weekend mentoring others and writing not a single line of code myself. Over the years I have learned that when it comes to a big project like diagrams, fostering collaboration is actually the most important aspect of a hackathon, rather than getting a bunch of coding done myself—I can always go pound out code at home once everyone has left.

So, if that sounds like fun to you, be sure to register (registration closes in a little under two weeks), and let me know you’re coming. I’m also happy to discuss concrete ideas for diagrams-related projects.


BlogLiterately 0.5 release

I have now released version 0.5 of BlogLiterately. (You can read about the 0.4 release here.) This version does uploading of images! Here is proof:

d’awww.

(My previous post explains the problem and solution with image uploads.)

It also allows you to specify expected outputs in a ghci session (a feature suggested by Dan Burton). This block

    [ghci]
    7+6
      13
    9+4
      12

now produces

ghci> 7+6
  13

ghci> 9+4
  13
  12

Outputs that match the expected output are shown normally; outputs that don’t match the expected output are shown with the actual output in red and expected in blue. The idea is that this helps you catch errors in your code before uploading the post. (Of course, you don’t have to specify expected outputs if you don’t want to.)

Another new feature is that BlogLiterately will prompt you for your password if you don’t specify it on the command line (another feature requested by Dan).

Finally, one of the coolest new features (in my opinion) is that the internals are now exposed as a library, and in particular you can easily add your own custom transformation passes (of type Pandoc -> IO Pandoc) to the existing ones. So, for example, you could do something particular with your own specially tagged blocks (like [ghci] blocks), or wrap images in some fancy HTML to produce frames and captions, or automatically turn certain things into links, or whatever you can dream up. If you come up with any transformations you think might be more generally useful, please send them to me so I can include them in future releases for others to use.
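For instance, a custom pass that italicizes a particular word might look something like the following. This is only a sketch, not BlogLiterately's actual code: fixInline and emphasizeDiagrams are hypothetical names, and I'm assuming the walk function from recent versions of pandoc-types (the details of the Pandoc API have shifted over time).

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Text.Pandoc.Definition (Pandoc, Inline (..))
import Text.Pandoc.Walk (walk)

-- A pure rewrite on inline elements: italicize the word "diagrams".
fixInline :: Inline -> Inline
fixInline (Str "diagrams") = Emph [Str "diagrams"]
fixInline i                = i

-- Lift the pure rewrite over a whole document with walk, then into
-- IO with return, giving the Pandoc -> IO Pandoc shape expected for
-- a custom transformation pass.
emphasizeDiagrams :: Pandoc -> IO Pandoc
emphasizeDiagrams = return . walk fixInline
```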

Happy blogging!


New haxr release

I have just uploaded a new version of the haxr package (for writing XML-RPC clients and servers), having become the new maintainer. Here is how it happened.

In a previous post I announced the release of BlogLiterately 0.4, a tool for authoring and uploading blog posts. I mentioned that the image upload feature did not yet work, due to inexplicably closed HTTP connections. After a lot of digging, installing wireshark, and some excellent advice and answers from StackOverflow, I present to you the following basically-technically-accurate transcript of what was actually going on:

BlogLiterately: Hi there!!
Server: Hello. How may I help you?
BL: I would like to upload some data plz kthx!
S: OK, go ahead.
BL (thinking to self): Oh sh**, I haven’t actually computed the data I want to send! Better get started. Hrmrmrmrmmmmm….

…four seconds later…

BL: ..i thunk i can i thunk i can i thunk i can…
S: You are wasting my time. I have better things to do. Goodbye.
BL: …i thunk i can i thunk i can… done!
BL: OK, I’m ready now! data data data data data data data data data …
S: AUUGGH SHUT UP GO AWAY

So the surface problem seemed to be too much laziness, and indeed, forcing evaluation of the data prior to opening the HTTP connection did make it work. However, the deeper problem is that base64-encoding a 28k image should not take four seconds! Indeed, it turned out that haxr was using dataenc to do base64-encoding, which suffered from quadratic runtime due to left-nested list appends. After switching to base64-bytestring, the encoding now takes basically no time at all.
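The quadratic behavior is easy to reproduce in miniature (a toy illustration with plain lists, not dataenc's actual code): each (++) copies its entire left argument, so repeatedly appending to the left end of an accumulator does quadratic work, while right-nesting copies each chunk exactly once.

```haskell
import Data.List (foldl')

-- Building a result by repeatedly appending to the *left* end of an
-- accumulator: each (++) copies the entire accumulator so far, so
-- the total work is quadratic in the output length.
leftNested :: [String] -> String
leftNested = foldl' (++) []

-- Right-nested appends copy each chunk exactly once: linear time.
rightNested :: [String] -> String
rightNested = foldr (++) []

-- Both produce the same result; only the cost differs.
chunks :: [String]
chunks = replicate 1000 "ab"
```

ByteString builders (as used by base64-bytestring) sidestep the issue entirely by appending in amortized constant time.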

The other change I made to haxr was in the type of the ValueBase64 constructor of the Value type. It used to take a String, but this is silly, since the argument is supposed to represent binary data (which will then be base64-encoded), not text. In the original author’s defense, haxr was written before ByteString existed! But now that we have ByteString, I made it the argument to ValueBase64. This means fewer conversions (in order to provide a String containing binary data, one byte per character, you basically have to read the file as a ByteString in the first place anyway and then unpack it), and also has the nice side benefit of enabling a new XmlRpcType instance for ByteString, so that ByteStrings can be passed directly to remote, resulting in a base64 parameter. (Previously, the only way to mark a parameter as base64 was to wrap it explicitly in a ValueBase64 constructor.)

After a bit of discussion about these issues with Gracjan Polak, haxr's previous maintainer, he suggested that I take it over since he wasn't really using it anymore. I'm grateful to Bjorn Bringert (haxr's original author) and Gracjan for their work on haxr over the years.

And yes, this does mean that BlogLiterately now supports image uploads—and also a few other new features I’ve added in the meantime. A new release will follow shortly!


BlogLiterately 0.4 release

I have just released version 0.4 of BlogLiterately, a tool for authoring and uploading blog posts (especially Haskelly ones). Rob Greayer authored the first few versions of this useful tool, for which I’m very thankful. However, he doesn’t have the time or interest to maintain it anymore, and I had a few ideas for extending and improving it, so I offered to take over as the maintainer.

The full (dog-fooded) documentation can be found here. This blog post is just to announce the release and show off a few capabilities.

Posts are written in Markdown format (as recognized by pandoc), and BlogLiterately handles converting them to HTML and uploading them to any blog that supports the MetaWeblog API (such as WordPress). Haskell code can be syntax highlighted using hscolour (and any code can be syntax highlighted using highlighting-kate):

> -- An awesome Haskell function
> fib :: Integer -> Integer
> fib 0 = 0
> fib 1 = 1
> fib n = fib (n-1) + fib (n-2)

Special support for WordPress LaTeX is built in: \pi^2 / 6. ghci sessions can be automatically generated from a list of inputs:

ghci> fib 20  
  6765

ghci> [1..10]  
  [1,2,3,4,5,6,7,8,9,10]

The one major planned feature that is still missing is uploading of embedded images. Sadly, this feature ran into a major roadblock in the form of inexplicably closed HTTP connections (can you help answer my StackOverflow question?). Ultimately my goal is to have completely automated support for writing blog posts with inline diagrams code.

Enjoy!
