Future of Coding podcast interview

I recently did an interview with my friend and former student Steve Krouse for his Future of Coding podcast. I think I still agree with almost everything I said, and I didn’t say anything too embarrassing, so if you want to listen to it I won’t stop you! We chatted about teaching and learning, functional programming and type systems, how to read academic papers, and a bunch of other things. You can listen to it here.

Posted in haskell, meta, teaching

Apply to attend the Oregon Programming Languages Summer School!

Although the official deadline has passed, I hear there are still a few spots left for the Oregon Programming Languages Summer School (OPLSS), hosted at the University of Oregon in Eugene. This is a really fantastic opportunity to take a deep dive in programming languages and type theory. I attended as a graduate student and although it was overwhelming at times—like drinking from a fire hose!—it was a wonderful experience. I made some great friends and learned a ton (I still occasionally refer back to my notes!). Eugene is also beautiful in the summer (I have fond memories of going geocaching in a nearby park).

The main summer school (July 9–21) has a focus on parallelism and concurrency, but for those with less background, there is also a one-week introduction on the foundations of programming languages (July 3–8). If you want, you can actually choose to attend only the July 3–8 session (or both, or just July 9–21 of course). Attending just the introductory session could be a great option for someone who knows some functional programming but wants a more solid grounding in the underlying theory.

If you’re a member of the Haskell/FP community who is thinking of applying, and I’m familiar with you or your work, I would be happy to write you a recommendation if you need one.

Posted in links, meta

Conversations with a six-year-old on functional programming

My six-year-old son walked up to me yesterday. “What are you reading?”

At the time, I was reading part of Janis Voigtländer’s habilitation thesis. Unsure where to even start, I decided to just answer straightforwardly: “I’m reading a very long story about free theorems.”

He persisted. “What are free theorems?”

Never one to shrink from a pedagogical challenge, I thought for a moment, then began: “Do you know what a function is?” He didn’t. “A function is like a machine where you put something in one end and something comes out the other end. For example, maybe you put a number in, and the number that is one bigger comes out. So if you put in three, four comes out, or if you put in six, seven comes out.” This clearly made sense to him, so I continued, “The type of a function machine tells you what kinds of things you put in and what kinds of things come out. So maybe you put a number in and get a number out. Or maybe you put in a list of numbers and get a number out.” He interrupted excitedly, “Or maybe you could put words in??” “Yes, exactly! Maybe you can put words in and get words out. Or maybe there is a function machine where you put other function machines in and get function machines out!” He gasped in astonishment at the idea of putting function machines into function machines.

“So,” I concluded, “a free theorem is when you can say something that is always true about a function machine if you only know its type, but you don’t know anything about what it does on the inside.” This seemed a bit beyond him (and to be fair, free theorems are only interesting when polymorphism is involved which I definitely didn’t want to go into). But the whole conversation had given me a different idea.

“Hey, I have a good idea for a game,” I said. “It’s called the function machine game. I will think of a function machine. You tell me things to put into the function machine, and I will tell you what comes out. Then you have to guess what the function machine does.” He immediately liked this game and it has been a huge hit; he wants to play it all the time. We played it while driving to a party yesterday, and we played it this morning while I was in the shower. So far, he has correctly guessed:

  • \lambda x.\, x + 1
  • \lambda x.\, x - 3
  • \lambda x.\, 10x
  • \lambda x.\, 2x
  • \lambda x.\, 6
  • \lambda w\!:\!\text{English noun}.\, \text{plural}(w)
  • \lambda x.\, 10 \lfloor x/10 \rfloor
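
For fellow Haskell enthusiasts, the guessed machines are easy to transcribe. The names below are my own, purely for illustration, and the pluralizer is a toy version that only handles regular nouns:

```haskell
-- My son's guessed function machines, as Haskell definitions.
addOne, subThree, timesTen, double, alwaysSix, roundDownToTen :: Integer -> Integer
addOne x         = x + 1
subThree x       = x - 3
timesTen x       = 10 * x
double x         = 2 * x
alwaysSix _      = 6                  -- ignores its input entirely!
roundDownToTen x = 10 * (x `div` 10)  -- div rounds down, matching 10*floor(x/10)

-- A toy pluralizer; real English pluralization is much messier.
plural :: String -> String
plural w = w ++ "s"

main :: IO ()
main = print (map addOne [3, 6], roundDownToTen 47, plural "cat")
```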

I tried \lambda x.\, \min(x+2,10) but that was a bit tough for him. I realized that in some cases he may understand intuitively what a function does but have trouble expressing it in words (this was also a problem with \lambda x.\, 10 \lfloor x/10 \rfloor), so we started using the obvious variant: once the guesser thinks they know what the function does, the players switch roles, and the person who came up with the function specifies some inputs in order to test whether the guesser can produce the correct outputs.

\lambda x.\, 6 was also surprisingly difficult for him to guess (though he did get it right eventually). I think he was just stuck on the idea of the function doing something arithmetical to the input, and was having trouble coming up with some sort of arithmetic procedure which would result in 6 no matter what you put in! It simply hadn’t occurred to him that the machine might not care about the input. (Interestingly, many students in my functional programming class this semester were also confused by constant functions when we were learning about the lambda calculus; they really wanted to substitute the input somewhere and were upset/confused by the fact that the bound variable did not occur in the body at all!)

After a few rounds of guessing my functions, he wanted to come up with his own functions for me to guess (as I knew he would). Sometimes his functions are great and sometimes they don’t make sense (usually because his idea of what the function does changes over time, which of course he, in all sincerity, denies), but it’s fun either way. And after he finally understood \lambda x. \min(x+2, 10), he came up with his own function which was something like

\lambda x:\mathbb{N}. \begin{cases} 10 - x & x \leq 10 \\ 10 & \text{otherwise} \end{cases}

inspired, I think, by his kindergarten class where they were learning about pairs of numbers that added up to 10.

Definitely one of my better parenting days.

Posted in teaching

Call for submissions: FARM 2018

I’m the general chair this year for the ACM SIGPLAN International Workshop on Functional Art, Music, Modelling and Design (better known as FARM), which will be colocated with ICFP this September in St. Louis, Missouri, USA. The goal of the workshop is to “gather together people who are harnessing functional techniques in the pursuit of creativity and expression”. For me (and I think for many others as well), FARM, including the associated performance evening, is always one of the highlights of ICFP.

Please consider submitting something! As in previous years, we are accepting a wide range of types of submissions. Paper and demo submissions are due by June 28: you can submit original research, a technology tutorial, a call for collaboration, or propose some sort of live demo. In addition, we also solicit performances to be given during the associated performance evening—performance proposals are due by July 8.

See the website for more info, and let me know if you have any questions!

Posted in meta

Parametricity for Bifunctor

I’ve begun work to add Bifunctor, Bifoldable, and Bitraversable to the Typeclassopedia. While thinking and writing about Bifunctor, I ended up proving some “folklore” results for my own satisfaction, and decided someone might find it useful if I formally recorded the proofs here.

Bifunctor and its laws

The Bifunctor type class governs types of kind * -> * -> * which are (covariantly) functorial in both type arguments; that is, we can map over the two type arguments independently:

class Bifunctor (p :: * -> * -> *) where
  bimap  :: (a -> b) -> (c -> d) -> p a c -> p b d

  first  :: (a -> b) -> p a c -> p b c
  second :: (b -> c) -> p a b -> p a c

bimap allows mapping over both type arguments simultaneously, for example, bimap toUpper (+1) ('j', 3) = ('J',4). first and second allow mapping over only one or the other. Note that bimap can be implemented in terms of first and second, or first and second in terms of bimap (we’ll see what these implementations are later, or you can work them out as an easy exercise if you haven’t seen them before).
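
For concreteness, here is how this plays out for the pair and Either instances from Data.Bifunctor in base (the particular examples are mine):

```haskell
import Data.Bifunctor (bimap, first, second)
import Data.Char (toUpper)

main :: IO ()
main = do
  print (bimap toUpper (+ 1) ('j', 3 :: Int))  -- ('J',4)
  print (first toUpper ('j', 3 :: Int))        -- ('J',3)
  print (second (+ 1) ('j', 3 :: Int))         -- ('j',4)
  -- Either is a Bifunctor too: first maps over Left, second over Right.
  print (first negate (Left 5 :: Either Int Bool))    -- Left (-5)
  print (second not (Right True :: Either Int Bool))  -- Right False
```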

Instances of Bifunctor should satisfy some laws, which are straightforward analogues of the laws for Functor instances:

bimap id id = id                                (IB)
bimap (f . g) (h . i) = bimap f h . bimap g i   (CB)

first id = id                                   (IF)
first (f . g) = first f . first g               (CF)

second id = id                                  (IS)
second (f . g) = second f . second g            (CS)

So far so good. But there is an additional law relating bimap, first, and second:

bimap f g = first f . second g                  (BFS)

When I first read this law, it gave me pause: unlike the other laws, there seems to be some arbitrary asymmetry here. What about the symmetric law BFS2?

bimap f g = second g . first f                  (BFS2)

Intuitively, the order in which we perform first f and second g should not make any difference, since they cannot interact with each other in any way: the a values and b values in an arbitrary p a b structure occupy disjoint positions, and first touches only the former while second touches only the latter. That is, intuitively, if (BFS) holds, then (BFS2) should hold as well, and I wondered whether it is possible to formally derive this fact.

Let us note first that if one defines a Bifunctor instance by defining bimap alone, then first and second receive default definitions:

first  f = bimap f id
second g = bimap id g

In this case, assuming that bimap satisfies its composition law (CB), we can see that

  first f . second g
= bimap f id . bimap id g
= bimap (f . id) (id . g)
= bimap f g
= bimap (id . f) (g . id)
= bimap id g . bimap f id
= second g . first f

So when bimap alone is defined and satisfies (CB), both (BFS) and (BFS2) hold automatically. Let’s now consider the situation where first and second are defined, and bimap acquires its default definition of bimap f g = first f . second g (which is just (BFS))—or we can assume that all three of first, second, and bimap are defined, as long as law (BFS) still holds. In this case, can we derive (BFS2) as well?

Parametricity for Bifunctor

We can derive parametricity, aka a free theorem, for any function with the type of bimap. Let p :: * -> * -> * be some two-argument type. In order to even express parametricity for types involving p, we have to assume that p is functorial—that is, we assume the existence of a function

BIMAP :: (a -> b) -> (c -> d) -> p a c -> p b d

satisfying (IB). (As with Functor, (IB) + parametricity is enough to imply that it must satisfy (CB) as well.) We write BIMAP in all caps to indicate that it is some canonical, Platonic function, handed down from on high, inherent to the very essence of p. (Something like that.) Now let us suppose that we mortals have written some function bimap with the same type as BIMAP, which also satisfies (IB). But we don’t make any assumptions about whether our bimap is the same function as BIMAP; we only assume that it has type (a -> b) -> (c -> d) -> p a c -> p b d and satisfies (IB).

We can now take the polymorphic type of bimap and turn the crank to derive a statement of parametricity. I’m assuming some familiarity with this process; for details, see Wadler’s paper. If you don’t care about the details of the derivation, you can just skip to the final statement of parametricity at the end of this section, and then proceed to see how it is applied in the next.

Parametricity for bimap says that if we interpret its type as a relation (according to certain rules), then (bimap, bimap) will be a member of the relation, that is, bimap will be related to itself. Below I show the process of expanding out the definition of the relation corresponding to bimap’s type. (I don’t want to take the time to typeset it really nicely using LaTeX or anything like that; a slightly cleaned-up text version will do for now.) I use @a to denote explicit type application, but after the second step I leave type application implicit to reduce clutter.

(bimap,bimap) ∈ ∀ a b c d. (a -> b) -> (c -> d) -> p a c -> p b d


∀ Q :: a <-> a', R :: b <-> b', S :: c <-> c', T :: d <-> d'.
  (bimap @a @b @c @d, bimap @a' @b' @c' @d')
    ∈ (Q -> R) -> (S -> T) -> p Q S -> p R T


∀ Q :: a <-> a', R :: b <-> b', S :: c <-> c', T :: d <-> d'.
  ∀ (g,g') ∈ (Q -> R).
    ∀ (h,h') ∈ (S -> T).
      ∀ (x,x') ∈ p Q S.
        (bimap g h x, bimap g' h' x') ∈ p R T


∀ Q :: a <-> a', R :: b <-> b', S :: c <-> c', T :: d <-> d'.
  ∀ (g,g'). (∀ (y,y') ∈ Q. (g y, g' y') ∈ R) ->
    ∀ (h,h'). (∀ (z,z') ∈ S. (h z, h' z') ∈ T) ->
      ∀ (x,x') ∈ p Q S.
        (bimap g h x, bimap g' h' x') ∈ p R T

At this point, we don’t need the extra generality afforded by assuming everything is a relation, so we specialize all the relations in sight to functions. For example, the relation Q :: a <-> a' just becomes a function q :: a -> a'; statements of relatedness, such as (y,y') ∈ Q, turn into equations like q y = y'.

∀ q :: a -> a', r :: b -> b', s :: c -> c', t :: d -> d'.
  ∀ (g,g'). (∀ (y,y'). q y = y' -> r (g y) = g' y') ->
    ∀ (h,h'). (∀ (z,z'). s z = z' -> t (h z) = h' z') ->
      ∀ (x,x') ∈ p q s.
        (bimap g h x, bimap g' h' x') ∈ p r t

We now have to interpret p q s and p r t as relations. Wadler’s original paper doesn’t really cover this; it only talks about the specific case of the list functor. However, reading between the lines (and consulting subsequent work on parametricity), we can see that the right way to do this is

(a,b) ∈ p f g <=> BIMAP f g a = b

That is, we interpret the type p as the relation corresponding to its BIMAP function. Hence, we now have

∀ q :: a -> a', r :: b -> b', s :: c -> c', t :: d -> d'.
  ∀ (g,g'). (∀ (y,y'). q y = y' -> r (g y) = g' y') ->
    ∀ (h,h'). (∀ (z,z'). s z = z' -> t (h z) = h' z') ->
      ∀ (x,x'). (BIMAP q s x = x') ->
        BIMAP r t (bimap g h x) = bimap g' h' x'

Finally, we can simplify this by substituting equals for equals in a few places:

∀ q :: a -> a', r :: b -> b', s :: c -> c', t :: d -> d'.
  ∀ (g,g'). (∀ (y,y'). r (g y) = g' (q y)) ->
    ∀ (h,h'). (∀ (z,z'). t (h z) = h' (s z)) ->
      ∀ (x,x').
        BIMAP r t (bimap g h x) = bimap g' h' (BIMAP q s x)       (PAR)

This is the specific statement of parametricity for bimap that we will use to derive the results below.

Uniqueness of bimap

Let’s first prove that bimap is unique—that is, anything with the type of bimap must in fact be equal to BIMAP. The name of the game is just to pick appropriate values for all those quantified variables in (PAR). In particular, let’s pick

  • g = h = q = s = id,
  • r = g', and
  • t = h'.

We have some conditions to check:

  • (∀ (y,y'). r (g y) = g' (q y)) under our choices becomes (∀ (y,y'). r (id y) = r (id y)) which is obviously true.
  • Likewise, (∀ (z,z'). t (h z) = h' (s z)) becomes (∀ (z,z'). t (id z) = t (id z)) which is also obviously true.

We then get to conclude that

∀ (x,x'). BIMAP r t (bimap g h x) = bimap g' h' (BIMAP q s x)


BIMAP r t . bimap g h = bimap g' h' . BIMAP q s


BIMAP r t . bimap id id = bimap r t . BIMAP id id

Since we have assumed that both BIMAP and bimap satisfy the id law, this is equivalent to BIMAP r t = bimap r t, so bimap and BIMAP coincide (r and t were arbitrary).
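
Incidentally, the (IB) assumption is pulling real weight here: the type of bimap alone does not pin down BIMAP. As a quick illustration (a made-up type of my own, not from any library), a list-of-pairs bifunctor admits a perfectly well-typed bimap that sneaks in a reverse, and it fails the id law:

```haskell
newtype PairList a b = PairList [(a, b)] deriving (Eq, Show)

-- The canonical BIMAP for PairList.
bimapPL :: (a -> b) -> (c -> d) -> PairList a c -> PairList b d
bimapPL f g (PairList xs) = PairList [ (f x, g y) | (x, y) <- xs ]

-- Well-typed, but bimapRev id id reverses the list instead of acting as id,
-- so (IB) fails, and the uniqueness result rightly does not apply to it.
bimapRev :: (a -> b) -> (c -> d) -> PairList a c -> PairList b d
bimapRev f g (PairList xs) = PairList (reverse [ (f x, g y) | (x, y) <- xs ])

main :: IO ()
main = print (bimapRev id id (PairList [(1 :: Int, 'a'), (2, 'b')]))
```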

first and second commute

Finally, let’s prove that first and second commute. We will again use (PAR), but with different choices:

  • g = g' = t = s = id
  • h = h'
  • q = r

Let’s check the conditions:

  • We have to check (∀ (y,y'). r (g y) = g' (q y)), which becomes (∀ (y,y'). r (id y) = id (r y)), which is true since both sides reduce to r y.
  • Likewise, (∀ (z,z'). t (h z) = h' (s z)) becomes (∀ (z,z'). id (h z) = h (id z)) which is true by similar reasoning.

Hence, we may conclude that BIMAP r t (bimap g h x) = bimap g' h' (BIMAP q s x), which is equivalent to

BIMAP r id . bimap id h = bimap id h . BIMAP r id

We can simplify this further using a few assumptions: first, we already showed that BIMAP = bimap. Second, since we are assuming (BFS), that is, bimap f g = first f . second g, setting f or g to id shows that bimap f id = first f . second id = first f . id = first f (by the id law for second), and likewise bimap id g = second g. Hence the equation becomes

first r . second h = second h . first r

And voila!
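
The proof is the real content, of course, but it never hurts to spot-check a law. Here is a small exhaustive check of the commutation property for the pair instance, with a couple of arbitrary sample functions (my own sanity check, not a proof):

```haskell
import Data.Bifunctor (first, second)

-- Does first f . second g = second g . first f on this input,
-- for the sample functions f = (+1) and g = (*2)?
commutes :: (Int, Int) -> Bool
commutes p = (first (+ 1) . second (* 2)) p == (second (* 2) . first (+ 1)) p

main :: IO ()
main = print (all commutes [ (x, y) | x <- [-3 .. 3], y <- [-3 .. 3] ])  -- True
```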

One can also show that the id laws plus parametricity together imply the composition laws—I will leave that as an exercise. (Hint: make appropriate choices for g', h', q, and s.)

Posted in haskell, math

Beeminder guest post on Beeminding All The Things

I’ve written about my use of Beeminder a few times here before. I just wrote a guest post for the Beeminder blog entitled Beeminding All The Things, in which I explain some lessons learned and best practices for dealing with a very large number of Beeminder goals (50, in my case)—and why you might even consider having so many in the first place. If thinking about tools for productivity is your cup of tea, you might be interested to check it out.

Posted in meta

Test your intuition: logarithmic time

Here is a question to test your intuition for logarithms. NO CALCULATORS allowed, and don’t take a long time to work out an answer in your head! Just go with your gut. After you have committed to a choice (say, by writing it down), then go get a calculator and a pencil and see if you can work out whether you were right!

On a certain input, an O(n) algorithm runs in one-tenth of a second. On the same input, an O(n^2) algorithm takes one and a half weeks to run. Approximately how long would an O(n \log n) algorithm take on the same input?

  1. a few seconds?
  2. a few minutes?
  3. a few hours?
  4. a few days?

I’m pretty sure it wasn’t until quite a few years out of undergrad that I would have gotten this right.

Posted in teaching