POGIL workshop

A few weeks ago I attended a 3-day training workshop in St. Louis, put on by the POGIL project. I attended a short POGIL session at the SIGCSE CS education conference in March and was sufficiently impressed to sign up for a training workshop (it didn’t hurt that Clif Kussmaul has an NSF grant that paid for my registration and travel).

POGIL is an acronym for “Process Oriented Guided Inquiry Learning”. Process-oriented refers to the fact that in addition to learning content, an explicit goal is for students to learn process skills like analytic thinking, communication, and teamwork. Guided inquiry refers to the fact that students are responsible for constructing their own knowledge, guided by carefully designed questions. The entire framework is really well thought-out and is informed by concrete research in pedagogical methods. I really enjoyed how the workshop used the POGIL method to teach us about POGIL (though of course it would be rather suspect to do anything else!). It gave me not just an intellectual appreciation for the benefits of the approach, but also a concrete understanding of the POGIL experience for a student.

The basic idea is to put students in groups of 3 or 4 and have them work through an activity or set of questions together. So far this sounds just like standard “group work”, but it’s much more carefully thought out than that:

  • Each student is assigned a role with specific responsibilities within their group. Roles typically rotate from day to day so each student gets a chance to play each role. Roles can vary but common ones include things like “manager”, “recorder”, “reporter”, and so on. I didn’t appreciate how important the roles are until attending the workshop, but they are really crucial. They help ensure every student is engaged, forestall some of the otherwise inevitable social awkwardness as students figure out how to relate to their group members, and also play an important part in helping students develop process skills.

  • The activities are carefully constructed to take students through one or more learning cycles: beginning with some data, diagrams, text, etc. (a “model”), students are guided through a process starting with simple observations, then synthesis and discovering underlying concepts, and finally more open-ended/application questions.

The teacher is a facilitator: giving aid and suggestions as needed, managing difficulties that arise, giving space and time for groups to report on their progress and share with other groups, and so on. Of course, a lot of work goes into constructing the activities themselves.

In some areas, there is already a wealth of POGIL activities to choose from; unfortunately, existing materials are a bit thinner in CS (though there is a growing collection). I won’t be able to use POGIL much this coming semester, but I hope to use it quite a bit when I teach algorithms again in the spring.


New Haskell Symposium paper on “twisted functors”

Satvik Chauhan, Piyush Kurur, and I have a new paper which will appear at the 2016 Haskell Symposium in Japan:

How to Twist Pointers without Breaking Them

Although pointer manipulations are used as a primary motivating example, at heart the paper is really about “twisted functors”, a class of applicative functors which arise as a natural generalization of the semi-direct product of two monoids where one acts on the other. It’s a really cute idea¹, one of those ideas which seems “obvious” in retrospect, but really hadn’t been explored before.
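
To give a flavor of the idea, here is the semidirect product construction for plain monoids (a sketch with names of my own invention, not the actual definitions from the paper, which works one level up):

> {-# LANGUAGE MultiParamTypeClasses #-}
>
> -- A left action of the monoid b on the monoid a.  For Semi to be a
> -- lawful monoid, act should be a monoid action (act (b1 <> b2) =
> -- act b1 . act b2, act mempty = id) by monoid homomorphisms
> -- (act b (a1 <> a2) = act b a1 <> act b a2, act b mempty = mempty).
> class (Monoid a, Monoid b) => Action b a where
>   act :: b -> a -> a
>
> -- The semidirect product: the second component of the left pair
> -- acts on the first component of the right pair before combining.
> newtype Semi a b = Semi (a, b)
>
> instance Action b a => Semigroup (Semi a b) where
>   Semi (a1, b1) <> Semi (a2, b2) = Semi (a1 <> act b1 a2, b1 <> b2)
>
> instance Action b a => Monoid (Semi a b) where
>   mempty = Semi (mempty, mempty)

The paper’s twisted functors play this same game with applicative functors in place of monoids.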

We give some example applications in the paper, but I’m quite certain there are many others out there. If you find any, let us know!

  1. I can say that since it wasn’t actually my idea!

Posted in haskell, writing | Tagged , , , , , , , , , | 6 Comments

Eastman maximal comma-free codes in Haskell

This past January I watched a video of Don Knuth’s most recent annual Christmas lecture. Typically his Christmas lectures have been about trees, but breaking with tradition, he gave this lecture about comma-free codes, and presented an implementation of an interesting algorithm due to Willard Eastman. Of course his implementation was written in CWEB, and during the course of the lecture he noted that his algorithm was iterative, and he didn’t know of a nice way to write it recursively (or something like that). Naturally, that sounded like a challenge, so I implemented it in Haskell, and I think it came out quite nicely. (It still uses “iteration” in the sense of the iterate function, but of course that uses recursion, so…?) Unfortunately, that was in January; it is now July, and I don’t really remember how it works. So I decided I had better write about it now, before I forget even more.

A comma-free code is a set C of strings such that if you concatenate any two strings in C, the result does not contain any elements of C as internal substrings. The term “comma-free” refers to the fact that sequences of strings from C can be unambiguously concatenated, without the need for separators like commas. Even if you start reading somewhere in the middle of a message, you can unambiguously figure out how to partition the message into codewords. For example, {bear, like} is a comma-free code, but {bear, like, earl} is not, since bearlike contains earl as a substring. A comma-free code also obviously cannot contain any periodic strings (that is, strings which consist of repeated copies of some shorter string), like abcabc, since concatenating such a string with itself produces a string containing that same string as an internal substring.
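
Here is a direct executable version of the definition, handy for checking small examples (not part of the literate program below, so I’ll leave off the bird tracks; the names are my own):

  import Data.List (isPrefixOf)

  -- A set of codewords, all of the same length n, is comma-free if no
  -- codeword occurs strictly inside the concatenation of any two
  -- codewords, i.e. starting at any offset 1..n-1.
  isCommaFree :: Eq a => [[a]] -> Bool
  isCommaFree code = and
    [ not (w `isPrefixOf` drop i (u ++ v))
    | u <- code, v <- code, w <- code
    , let n = length u
    , i <- [1 .. n-1]
    ]

For instance, isCommaFree ["bear","like"] is True, but isCommaFree ["bear","like","earl"] is False, because of the earl hiding inside bearlike.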

Given a fixed alphabet and codeword length, one is naturally led to ask how large a comma-free code can possibly be. Eastman solved this problem for odd codeword lengths, by showing how to construct a maximal comma-free code. To understand Eastman’s solution, consider the set S of all aperiodic strings of length n over an alphabet \Sigma (we have already seen that periodic strings cannot be part of a comma-free code). Consider two strings equivalent if they are rotations of each other. For example, bear, earb, arbe, and rbea are all equivalent. This is an equivalence relation on strings, and so it defines a partition of S into classes of equivalent strings. Note that we can never have two equivalent strings as part of the same comma-free code, since if we concatenate a string with itself, the result contains all other equivalent strings as substrings. For example, bearbear contains earb, arbe, and rbea. So a comma-free code can contain at most one string from each equivalence class.
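
These notions are also easy to spell out in code (again not bird-tracked, just hypothetical helpers for playing with the definitions):

  import Data.List (nub)

  -- All n rotations of a length-n string.
  rotations :: [a] -> [[a]]
  rotations xs = take (length xs) (iterate rotate xs)
    where rotate []     = []
          rotate (y:ys) = ys ++ [y]

  -- A string is aperiodic iff its n rotations are pairwise distinct.
  aperiodic :: Eq a => [a] -> Bool
  aperiodic xs = length (nub (rotations xs)) == length xs

For example, rotations "bear" is ["bear","earb","arbe","rbea"], and aperiodic "abcabc" is False, since rotating "abcabc" by three positions gives it back again.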

In fact, Eastman shows that for odd n there are comma-free codes that contain exactly one string from each equivalence class! What’s more, his proof is constructive: he shows how to pick a particular, canonical representative from each equivalence class such that the collection of all such canonical representatives is a comma-free code. This is what the program below does: given an odd-length string, it outputs the canonical rotation of that string which is part of a maximal comma-free code.

So, without further ado, let’s see the implementation! Again, I really don’t remember much about the details of how (much less why) this works. For that, I recommend watching Knuth’s lecture or reading the explanations in his code (you’ll probably want to compile it into LaTeX first).

First, some imports and such. Look, ma, no LANGUAGE extensions!

> module Commafree where
> import           Control.Arrow      (first)
> import           Control.Monad      (when)
> import           Data.List          (find, intercalate)
> import           Data.List.Split
> import           Data.Maybe         (catMaybes)
> import           Data.Monoid        ((<>))
> import           System.Environment
> import           System.Exit
> import           Text.Printf
> import           Text.Read          (readMaybe)

Here’s the main Eastman algorithm, which actually works for any list of things with a total order (unlike Knuth’s, which only works for lists of nonnegative integers, although that is obviously just a cosmetic difference, since any finite set with a total order is isomorphic to a set of nonnegative integers). We turn each item into a singleton “block”, then iterate the eastmanRound function, which partitions the blocks into subsequences of blocks, which we coalesce into blocks again. So each iteration makes the partition coarser, i.e. the blocks get bigger. We keep iterating until there is only one block left, which contains the rotation that we seek.

> eastman :: Ord a => [a] -> [a]
> eastman
>   = blockContent . head . head
>   . dropWhile ((>1) . length)
>   . iterate (map mconcat . eastmanRound)
>   . map mkBlock

Some code for dealing with blocks. A block is just a list that keeps track of its length for efficiency. The important point about blocks is that they are ordered first by length, then lexicographically (see the Ord instance below). The Semigroup and Monoid instances are straightforward.

> data Block a = Block { blockLen :: !Int, blockContent :: [a] }
>   deriving (Show, Eq)
> instance Ord a => Ord (Block a) where
>   compare (Block m as) (Block n bs)
>     = compare m n <> compare as bs
> instance Semigroup (Block a) where
>   Block m as <> Block n bs = Block (m+n) (as ++ bs)
> instance Monoid (Block a) where
>   mempty = Block 0 []
> mkBlock :: a -> Block a
> mkBlock a = Block 1 [a]
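
For example, length dominates content (hand-checked):

  ghci> Block 1 "z" < Block 2 "aa"
  True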

One round of the algorithm works as follows: we duplicate the list, partition it after each “dip” (chop splitDip, to be explained below), possibly drop some of the leading parts and coalesce other parts based on size parity (pickOdds), and then keep only a total amount of stuff equal to the length of the original list (takeTotal). This last part with takeTotal ensures that we will end up with something which is a rotation of the original input (though partitioned). In an implementation with random-access arrays, one would just wrap the indices around using mod; in this context it’s easier to first duplicate the input list so we can deal with all rotations at once, determine which rotation we want by dropping some stuff from the beginning, then drop any excess stuff at the end.

> eastmanRound :: Ord a => [a] -> [[a]]
> eastmanRound as
>   = takeTotal (length as)
>   . pickOdds
>   . chop splitDip
>   $ (as ++ as)

It’s interesting to note that in eastmanRound the type a is actually going to be instantiated with Block b for some type b. In the first round, all the blocks are singletons, so this is no different than just taking a list of b. But in subsequent rounds the distinction is nontrivial.
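
For example, running one round on bare Ints (a trace I worked out by hand, so worth double-checking; this is effectively what the first round does to singleton blocks):

  ghci> eastmanRound [1,2,3]
  [[3,1,2]]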

A “dip” is a decreasing substring followed by a single increase, for example, 976325. (Though again, remember that we are actually dealing with sequences of blocks, not integers, so a dip is essentially a sequence of blocks of decreasing length followed by a longer one, with the requisite caveat about blocks of the same length.) splitDip looks for the first place in the list that looks like a > b < c and breaks the list right after it. This is used with the chop function to split the list into a sequence of dips.

> splitDip :: Ord a => [a] -> ([a],[a])
> splitDip (a:b:cs)
>   | a < b     = ([a,b], cs)
>   | otherwise = first (a:) (splitDip (b:cs))
> splitDip as = (as,[])
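
For example (hand-traced), using the dip from above with some trailing junk:

  ghci> splitDip [9,7,6,3,2,5,8,1]
  ([9,7,6,3,2,5],[8,1])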

pickOdds does something like the following: it looks for maximal sequences of dips where the first dip has odd length and the rest have even length, and merges such sequences into one long partition. It also drops everything prior to the first odd dip. Something like that at least; my memory on this is a bit fuzzy.

> pickOdds :: [[a]] -> [[a]]
> pickOdds
>   = map concat
>   . dropWhile (even . length . head)
>   . drop 1
>   . splitAtOdds
> splitAtOdds :: [[a]] -> [[[a]]]
> splitAtOdds = chop $
>   \(x:xs) -> let (ys,zs) = break (odd.length) xs
>              in  (x:ys, zs)
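
A small hand-traced example, in which the even-length dip [1,2] at the front gets dropped and the result starts with the first odd-length dip:

  ghci> pickOdds [[1,2],[3,4,5],[6]]
  [[3,4,5],[6]]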

Finally, takeTotal just takes lists until their total length matches the given total.

> takeTotal :: Int -> [[a]] -> [[a]]
> takeTotal _ [] = []
> takeTotal n _ | n <= 0 = []
> takeTotal n (xs:xss) = xs : takeTotal (n - length xs) xss

And that’s it! I also put together a main which more or less emulates what Knuth’s C program does. My program and Knuth’s give the same output on every example I have tried (except that Knuth’s prints out some intermediate information about each iteration step; mine just prints the final answer).

> main :: IO ()
> main = do
>   progName <- getProgName
>   args <- getArgs
>   let n = length args
>   when (n < 3) $ do
>     printf "Usage: %s x1 x2 ... xn\n" progName
>     exitWith (ExitFailure 1)
>   when (even n) $ do
>     printf "The number of items, n, should be odd, not `%d'!\n" n
>     exitWith (ExitFailure 2)
>   let ns :: [Maybe Int]
>       ns = map readMaybe args
>   case find (maybe True (<0) . snd) (zip [1..] ns) of
>     Just (i,_) -> do
>       printf "Argument %d should be a nonnegative integer, not `%s'!\n"
>                         i                             (args !! (i-1))
>       exitWith (ExitFailure 3)
>     Nothing ->
>       putStrLn .
>       (' ' :) . intercalate " " . map show .
>       eastman . catMaybes $ ns
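
As a tiny sanity check, here is the one example I have traced through by hand (so treat it as a claim worth verifying, not gospel):

  ghci> eastman [1,2,3]
  [3,1,2]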

Any clues about this Newton iteration formula with Jacobian matrix?

A while ago I wrote about using Boltzmann sampling to generate random instances of algebraic data types, and mentioned that I have some code I inherited for doing the core computations. There is one part of the code that I still don’t understand, having to do with a variant of Newton’s method for finding a fixed point of a mutually recursive system of equations. It seems to work, but I don’t like using code I don’t understand—for example, I’d like to be sure I understand the conditions under which it does work, to be sure I am not misusing it. I’m posting this in the hopes that someone reading this may have an idea.

Let \Phi : \mathbb{R}^n \to \mathbb{R}^n be a vector function, defined elementwise in terms of functions \Phi_1, \dots, \Phi_n : \mathbb{R}^n \to \mathbb{R}:

\displaystyle \Phi(\mathbf{X}) = (\Phi_1(\mathbf{X}), \dots, \Phi_n(\mathbf{X}))

where \mathbf{X} = (X_1, \dots, X_n) is a vector in \mathbb{R}^n. We want to find the fixed point \mathbf{Y} such that \Phi(\mathbf{Y}) = \mathbf{Y}.

The algorithm (you can see the code here) now works as follows. First, define \mathbf{J} as the n \times n Jacobian matrix of partial derivatives of the \Phi_i, that is,

\displaystyle \mathbf{J} = \begin{bmatrix} \frac{\partial}{\partial X_1} \Phi_1 & \dots & \frac{\partial}{\partial X_n} \Phi_1 \\ \vdots & \ddots & \vdots \\ \frac{\partial}{\partial X_1} \Phi_n & \dots & \frac{\partial}{\partial X_n} \Phi_n \end{bmatrix}

Now let \mathbf{Y}_0 = (0, \dots, 0) and let \mathbf{U}_0 = I_n be the n \times n identity matrix. Then for each i \geq 0 define

\displaystyle \mathbf{U}_{i+1} = \mathbf{U}_i + \mathbf{U}_i(\mathbf{J}(\mathbf{Y}_i)\mathbf{U}_i - (\mathbf{U}_i - I_n))

and also

\displaystyle \mathbf{Y}_{i+1} = \mathbf{Y}_i + \mathbf{U}_{i+1}(\Phi(\mathbf{Y}_i) - \mathbf{Y}_i).
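
For concreteness, here is a direct Haskell transcription of these two updates, using naive list-of-lists matrices. All the names here are mine, and this is only a sketch of the equations, not the actual inherited code (which presumably iterates until the \mathbf{Y}_i stop changing, rather than for a fixed number of steps):

> import Data.List (transpose)
>
> type Vec = [Double]
> type Mat = [[Double]]
>
> -- Run the coupled updates for U and Y a fixed number of steps,
> -- starting from Y = 0 and U = I, exactly as in the equations above.
> newtonFix :: (Vec -> Vec) -> (Vec -> Mat) -> Int -> Int -> Vec
> newtonFix phi jac n steps = go steps (replicate n 0) (identity n)
>   where
>     go 0 y _ = y
>     go k y u =
>       let u' = u `addM` (u `mulM` ((jac y `mulM` u) `subM` (u `subM` identity n)))
>           y' = y `addV` (u' `mulMV` (phi y `subV` y))
>       in  go (k-1) y' u'
>
> identity :: Int -> Mat
> identity n = [ [ if i == j then 1 else 0 | j <- [1..n] ] | i <- [1..n] ]
>
> addV, subV :: Vec -> Vec -> Vec
> addV = zipWith (+)
> subV = zipWith (-)
>
> addM, subM :: Mat -> Mat -> Mat
> addM = zipWith addV
> subM = zipWith subV
>
> mulMV :: Mat -> Vec -> Vec
> mulMV m v = [ sum (zipWith (*) r v) | r <- m ]
>
> mulM :: Mat -> Mat -> Mat
> mulM a b = [ [ sum (zipWith (*) r c) | c <- transpose b ] | r <- a ]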

Somehow, magically (under appropriate conditions on \Phi, I presume), the sequence of \mathbf{Y}_i converge to the fixed point \mathbf{Y}. But I don’t understand where this is coming from, especially the equation for \mathbf{U}_{i+1}. Most generalizations of Newton’s method that I can find seem to involve multiplying by the inverse of the Jacobian matrix. So what’s going on here? Any ideas/pointers to the literature/etc?


Towards a new programming languages course: ideas welcome!

tl;dr: This fall, I will be teaching an undergraduate PL course, with a focus on practical language design principles and tools. Feedback, questions, assignments you can share with me, etc. are all most welcome!

This fall, I will be teaching an undergraduate course on programming languages. It’s eminently sensible to ask a new hire to take on a course in their specialty, and one might think I would be thrilled. But in a way, I am dreading it.

It’s my own fault, really. In my hubris, I have decided that I don’t like the ways that PL courses are typically taught. So this summer I have to buckle down and actually design the course I do want to teach. It’s not that I’m dreading the course itself, but rather the amount of work it will take to create it!

I’m not a big fan of the sort of “survey of programming languages” course that gets taught a lot, where you spend three or four weeks on each of three or four different languages. I am not sure that students really learn much from the experience (though I would be happy to hear any reports to the contrary). At best it feels sort of like making students “eat their vegetables”—it’s not much fun but it will make them grow big and strong in some general sense.¹ It’s unlikely that students will ever use the surveyed languages again. You might hope that students will think to use the surveyed languages later in their career because they were exposed to them in the course; but I doubt it, because three or four weeks is hardly enough to get any real sense for a language and where it might be useful. I think the only real argument for this sort of course is that it “exposes students to new ways of thinking”. While that is certainly true, and exposing students to new ways of thinking is important—essentially every class should be doing it, in one way or another—I think there are better ways to go about it.

In short, I want to design a course that will not only expose students to new ideas and ways of thinking, but will also give them some practical skills that they might actually use in their career. I started by considering the question: what does the field of programming languages uniquely have to offer to students that is both intellectually worthwhile (by my own standards) and valuable to them? Specifically, I want to consider students who go on to do something other than be an academic in PL: what do I want the next generation of software developers and academics in other fields to understand about programming languages?

A lightbulb finally turned on for me when I realized that while the average software developer will probably never use, say, Prolog, they almost certainly will develop a domain-specific language at some point—quite possibly without even realizing they are doing it! In fact, if we include embedded domain-specific languages, then in essence, anyone developing any API at all is creating a language. Even if you don’t want to extend the idea of “embedded domain-specific language” quite that far, the point is that the tools and ideas of language design are widely applicable. Giving students practice designing and implementing languages will make them better programmers.

So I want my course to focus on language design, encompassing both big ideas (type systems, semantics) as well as concrete tools (parsing, ASTs, type checking, interpreters). We will use a functional programming language (specifically, Haskell) for several reasons: to expose the students to a programming paradigm very different from the languages they already know (mainly Java and Python); because FP languages make a great platform for starting to talk about types; and because FP languages also make a great platform for building language-related tools like parsers, type checkers, etc. and for building embedded domain-specific languages. Notably, however, we will only use Haskell: though we will probably study other types of languages, we will use Haskell as a medium for our study, e.g. by implementing simplified versions of them in Haskell. So while the students will encounter a number of ideas, there is really only one concrete language they will be exposed to. The hope is that by working in a single language all semester, the students may actually end up with enough experience in the language that they really do go on to use it again later.

As an aside, an interesting challenge/opportunity comes from the fact that approximately half the students in the class will have already taken my functional programming class this past spring, and will therefore be familiar with Haskell. On the challenge side, how do I teach Haskell to the other half of the class without boring the half that already knows it? Part of the answer might lie in emphasis: I will be highlighting very different aspects of the language from those I covered in my FP course, though of course there will necessarily be overlap. On the opportunity side, however, I can also ask: how can I take advantage of the fact that half the class will already know Haskell? For example, can I design things in such a way that they help the other half of the class get up to speed more quickly?

In any case, here’s my current (very!) rough outline for the semester:

  1. Introduction to FP (Haskell) (3 weeks)
  2. Type systems & foundations (2-3 weeks)
    • lambda calculus
    • type systems
  3. Tools for language design and implementation (4 weeks)
    • (lexing &) parsing, ASTs
    • typechecking
    • interpreters
    • (very very basics of) compilers (this is not a compilers course!)
  4. Domain-specific languages (3 weeks)
  5. Social aspects? (1 week)
    • language communities
    • language adoption

My task for the rest of the summer is to develop a more concrete curriculum, and to design some projects. This will likely be a project-based course, where the majority of the points will be concentrated in a few big projects—partly because the nature of the course lends itself well to larger projects, and partly to keep me sane (I will be teaching two other courses at the same time, and having lots of small assignments constantly due is like death by a thousand cuts).

I would love feedback of any kind. Do you think this is a great idea, or a terrible one? Have you, or anyone you know of, ever run a similar course? Do you have any appropriate assignments you’d like to share with me?

  1. Actually, I love vegetables, but anyway.


How to print things

I have finally finished up writing my guide on how to print things, based on this blog post from January on my other (math) blog. It specifically enumerates the pros and cons of various methods for printing and reading loose-leaf documents (the sort of thing that academics do quite a bit, when they print out a journal article to read).

The main motivation for writing the page is to explain the (to my knowledge, novel) Möbius method for printing and reading double-sided, like this:

[diagram of the Möbius method]

I actually now use this in practice. As compared to the usual method of printing double-sided, this has several advantages:

  • One always does the exact same action after finishing every page; there is no need to remember whether you are on an even or an odd page.
  • Any n/2 consecutive pages (where n is the total number of pages) lie on different sheets of paper, so it is easy to simultaneously refer to multiple pages close together. There is never any need to keep flipping a sheet back and forth to refer to the previous page (as there is with traditional double-sided printing).

There are even new things to say about traditional double-sided printing. I now know of several different algorithms for reading double-sided, each with its pros and cons; previously I had not even considered that there might be more than one way to do it.


In praise of Beeminder

In January 2013, I wrote a post explaining my use of Beeminder, after using it for six months. Well, I’ve now been using it continuously for almost four years! It has become such an integral part of my life and workflow that I literally don’t know what I would do if it went away. So I decided it was high time to write another blog post about Beeminder. This time, instead of enumerating things I am currently using it for, I will focus on things I have accomplished with the help of Beeminder. There is little doubt in my mind that I am much awesomer today than I would have been without Beeminder.

First, what is Beeminder? Here’s what I wrote three and a half years ago, which I think is still a good description:

The basic idea is that it helps you keep track of progress on any quantifiable goals, and gives you short-term incentive to stay on track: if you don’t, Beeminder takes your money. But it’s not just about the fear of losing money. Shiny graphs tracking your progress coupled with helpfully concrete short-term goals (“today you need to write 1.3 pages of that paper”) make for excellent positive motivation, too.

The key property that makes Beeminder work so well for me is that it makes long-term goals into short-term ones. I am a terrible procrastinator—due to hyperbolic discounting I can be counted on to pretty much ignore anything with long-term rewards or consequences. A vague sense that I ought to take better care of my bike is not enough to compel me to action in the present; but “inflate your tires and grease your chain before midnight or else pay $5” is.

So, what have I accomplished over the past three years?

  • First, the big one: I finished my PhD! A PhD dissertation is much too big to put off until the last minute. Writing my thesis proposal, researching and writing my dissertation itself, and making slides for my defense were all largely driven by Beeminder goals. I am honestly not sure if I would have been able to finish otherwise.
  • I landed two jobs: first, a one-year position at Williams College, and now a tenure-track position at Hendrix College. Preparing application materials and applying for academic jobs is not much fun, and it was a really tough slog putting everything together, especially while I was teaching at Williams last year. Having a Beeminder goal for spending time on my application materials was absolutely critical.
  • I finished watching every single Catsters video and writing up a topologically sorted guide to the series.
  • Since March 2015, when I started cranking up my Beeminder goal for writing blog posts again, I have written over 80 posts on my two blogs. I also wrote more than 40 posts in late 2012-early 2013 using another goal (the gap from 2013-2015 was when I was writing my dissertation instead of blogging!).
  • Over the past three years, I have spent about 1 hour per week (typically spread over 3 or 4 days) learning biblical Hebrew. It adds up to almost 150 hours of Hebrew study—which doesn’t sound like a whole lot, but almost every minute of it was quality, focused study time. And since it has been so spread out, the material is quite firmly embedded in my long-term memory. I recently finished working through the entire introductory textbook I was using, while doing every single exercise in the associated workbook. I am still far from being an expert, but I can actually read simple things now.
  • Over the past 6 months I lost more than 15 pounds.
  • Since September I have been swimming two mornings a week: when I started, I could barely do two laps before feeling like I was going to be sick; now, I can swim 500m in under 9 minutes (just under double world record pace =).

There are lots of other things I use Beeminder for, but these are the accomplishments I am proudest of. If you want to do awesome things but can never quite seem to find the time or motivation to do them, give it a try!
