Academic integrity: context and concrete steps

Continuing from my previous post, I wanted to write a bit about why I have been thinking about academic integrity, and what, concretely, I plan to do about it.

So, why have I been thinking about this? For one thing, my department had its fair share of academic integrity violations last year. On the one hand, it is right for students to be held accountable for their actions. On the other, in the face of a spate of violations, it is also right for us to reevaluate what we are doing and why, what sort of environmental factors may be pushing students to violate academic integrity, and how we can create a better environment. Environment does not excuse behavior, but it can shape behavior in profound ways.

Another reason for thinking about academic integrity is that starting this fall, I will be a member of the committee that hears and makes a determination in formal academic integrity cases at my institution. It seems no one wants to be on this committee, and to a certain extent I can understand why. But I chose it, for several reasons. For one, I think it is important to have someone on the committee from the natural sciences (I will be the only one), who understands issues of plagiarism in the context of technical subjects. I also care a lot about ensuring that academic integrity violations are handled carefully and thoughtfully, so that students actually learn something from the experience, and more importantly, so that they come through with their sense of belonging intact. When a student (or anyone, really) does something that violates the standards of a community and is subject to consequences, it is all too easy for them to feel as though they are now a lesser member or even excluded from the community. It takes much more intentional communication to make clear to them that although they may have violated a community standard—which necessarily comes with a consequence—they are still a valued member. (Thanks to Leslie Zorwick for explaining about the power of belonging, and for relating recent research showing that communicating belonging can make a big difference for students on academic probation—which seems similar to students accused or convicted of academic integrity violations. I would cite it but I think it is not actually published yet.)

Thinking about all of this is well and good, but what will I do about it? How do I go about communicating all of this to my students, and creating the sort of environment I want? Here are the concrete things I plan to do starting this fall:

  • In all my courses where it makes sense, I plan to require students to have at least one citation (perhaps three, if I am bold) on every assignment turned in—whether they cite web pages, help from TAs or classmates, and so on. The point is to get them thinking regularly about the resources and help that they make use of on every single assignment, to foster a spirit of thankfulness. I hope it will also make it psychologically harder for students to plagiarize and lie about it. Finally, I hope it will lead to better outcomes in cases where a student makes inappropriate use of an online resource—i.e. when they “consult” a resource, perhaps even deceiving themselves into thinking that they are really doing the work, but end up essentially copying the resource. If they don’t cite the resource in such a case, I have a messy academic integrity violation case on my hands; if they do, there is no violation, even though the student didn’t engage with the assignment as I would have hoped, and I can have a simple conversation with them about my expectations and their learning (and perhaps lower their grade).

  • I will make sure to communicate to my students how easy it is for me to detect plagiarism, and how dire the consequences can be. A bit of healthy fear never hurt!

  • But beyond that, I want to make sure my students also understand that I care much more about them, as human beings, than I do about their grade or whether they turn in an assignment. I suspect that a lot of academic integrity violations happen at 2am, the night before a deadline, when the student hasn’t even started the assignment and they are riddled with anxiety and running on little sleep—but they feel as though they have to turn something in and this urge overrides whatever convictions they might have about plagiarism. To the extent their decision is based on anxiety about grades, there’s not much I can do about it. However, if their decision stems from a feeling of shame at not turning something in and disappointing their professor, I can make a difference: in that moment, I want my students to remember that their value in my eyes as human beings is not tied to their academic performance; that I will be much more impressed by their honesty than by whether they turn something in.

  • As a new member of the academic integrity committee, I plan to spend most of my time listening and learning from the continuing members of the committee; but I do hope to make sure our communication with both accused and convicted students emphasizes that they are still valued members of our community.

Other concrete suggestions, questions, experiences to relate, etc. are all most welcome!


Academic integrity and other virtues

I have been thinking a lot recently about academic integrity. What does it mean? Why do we care—what is it we fundamentally want students to do and to be? And whatever it is, how do we go about helping them become like that?

As a general principle, I think we ought to focus not just on prohibiting certain negative behaviors, but rather on encouraging positive behaviors (which are in a suitable sense “dual” to the negative behaviors we want to prohibit). Mere prohibitions leave a behavioral vacuum—“OK, don’t do this, so what should I do?”—and incentivize looking for loopholes, seeing how close one can come to the line without breaking the letter of the law. On the other hand, a positive principle actively guides behavior, and in striving towards its ideal, one (ideally) ends up far away from the prohibited negative behavior.

In the case of academic integrity, then, it is not enough to say “don’t plagiarize”. In fact, if one focuses on the prohibition itself, this is a particularly difficult one to live by, because academic life is not lived in a vacuum: ideas and accomplishments never spring forth ex nihilo, owing nothing to the ideas and accomplishments of others. In reality, one is constantly copying in big and small ways, explicitly and implicitly, consciously and unconsciously. In fact, this is how learning works! We just happen to think that some forms of copying are acceptable and some are not. Now, there are good reasons for distinguishing acceptable and unacceptable copying; the point is that this is often more difficult and ambiguous for students than we care to admit.

So what is the “dual” of plagiarism? What are the positive virtues which we should instill in our students? One can, of course, say “integrity”, but I don’t think this goes far enough: to have integrity is to adhere to a particular set of moral principles, but which ones? Integrity means being truthful, but truthful about what? It seems this is just another way of saying “don’t plagiarize”, i.e. don’t lie about the source of an idea. I have come up with two other virtues, however, which I think really get at the heart of the issue: thankfulness and generosity. (And in the spirit of academic thankfulness, I should say that Vic Norman first got me thinking along these lines with his paper How Will You Practice Virtue Without Skill?: Preparing Students to be Virtuous Computer Programmers, published in the 2014-2015 Journal of the ACMS; I was also influenced by a discussion of Vic’s paper with several others at the ACMS luncheon at SIGCSE 2016.)

Academic thankfulness has to do with recognizing one’s profound debt to the academic context: to all those thinkers and doers who have come before, and to all those who help you along your journey as a learner, whether professors, other students, or random strangers on the Internet. A thankful student is naturally driven to cite anything and everything, to give credit where credit is due, even to give credit where credit is not technically necessary but can serve as a token of thanks. A thankful student recognizes the hard work and unique contributions of others, rather than seeing others as mere means to their own ends. A thankful student never plagiarizes, since taking something from someone else and claiming it for one’s own is the height of ingratitude.

Academic generosity is about freely sharing one’s own ideas, sacrificing one’s time and energy to help others, and allowing others to share in credit and recognition. Being academically generous is harder than being thankful, because it opens you up to the potential ingratitude of others, but in some sense it is the more important of the two virtues: if no one were generous, no one would have anything to be thankful for. A generous student is naturally driven to cite anything and everything, to give credit and recognition to others, whether earned or not. A generous student recognizes others as worthy collaborators rather than as means to an end. A generous student never plagiarizes, since they know how it would feel to have their own generosity taken advantage of.

There’s more to say—about the circumstances that have led me to think about this, and about how one might actually go about instilling these virtues in students, but I think I will leave that for another post.


POGIL workshop

A few weeks ago I attended a 3-day training workshop in St. Louis, put on by the POGIL project. I attended a short POGIL session at the SIGCSE CS education conference in March and was sufficiently impressed to sign up for a training workshop (it didn’t hurt that Clif Kussmaul has an NSF grant that paid for my registration and travel).

POGIL is an acronym for “Process Oriented Guided Inquiry Learning”. Process-oriented refers to the fact that in addition to learning content, an explicit goal is for students to learn process skills like analytic thinking, communication, and teamwork. Guided inquiry refers to the fact that students are responsible for constructing their own knowledge, guided by carefully designed questions. The entire framework is really well thought-out and is informed by concrete research in pedagogical methods. I really enjoyed how the workshop used the POGIL method to teach us about POGIL (though of course it would be rather suspect to do anything else!). It gave me not just an intellectual appreciation for the benefits of the approach, but also a concrete understanding of the POGIL experience for a student.

The basic idea is to put students in groups of 3 or 4 and have them work through an activity or set of questions together. So far this sounds just like standard “group work”, but it’s much more carefully thought out than that:

  • Each student is assigned a role with specific responsibilities within their group. Roles typically rotate from day to day so each student gets a chance to play each role. Roles can vary but common ones include things like “manager”, “recorder”, “reporter”, and so on. I didn’t appreciate how important the roles are until attending the workshop, but they are really crucial. They help ensure every student is engaged, forestall some of the otherwise inevitable social awkwardness as students figure out how to relate to their group members, and also play an important part in helping students develop process skills.

  • The activities are carefully constructed to take students through one or more learning cycles: beginning with some data, diagrams, text, etc. (a “model”), students are guided through a process starting with simple observations, then moving to synthesis and discovery of underlying concepts, and finally to more open-ended/application questions.

The teacher is a facilitator: giving aid and suggestions as needed, managing difficulties that arise, giving space and time for groups to report on their progress and share with other groups, and so on. Of course, a lot of work goes into constructing the activities themselves.

In some areas, there is already a wealth of POGIL activities to choose from; unfortunately, existing materials are a bit thinner in CS (though there is a growing collection). I won’t be able to use POGIL much this coming semester, but I hope to use it quite a bit when I teach algorithms again in the spring.


New Haskell Symposium paper on “twisted functors”

Satvik Chauhan, Piyush Kurur and I have a new paper which will appear at the 2016 Haskell Symposium in Japan:

How to Twist Pointers without Breaking Them

Although pointer manipulations are used as a primary motivating example, at heart the paper is really about “twisted functors”, a class of applicative functors which arise as a natural generalization of the semi-direct product of two monoids where one acts on the other. It’s a really cute idea1, one of those ideas which seems “obvious” in retrospect, but really hadn’t been explored before.
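To make the underlying construction concrete, here is a minimal sketch (my own names and encoding, not code taken from the paper) of the classical semi-direct product of two monoids, where a monoid s acts on a monoid a; the paper’s “twisted functors” generalize exactly this construction from monoids to applicative functors:

{-# LANGUAGE MultiParamTypeClasses #-}

-- Hypothetical sketch, not the paper's API.  'act' should respect the
-- monoid structures:
--   act mempty       = id
--   act (s1 <> s2)   = act s1 . act s2
--   act s mempty     = mempty
--   act s (a1 <> a2) = act s a1 <> act s a2
class (Monoid s, Monoid a) => Action s a where
  act :: s -> a -> a

-- The semi-direct product of a and s: pair the two monoids, but "twist"
-- the combination of the first components by the action of the second.
newtype Semi a s = Semi (a, s)

instance Action s a => Semigroup (Semi a s) where
  Semi (a1, s1) <> Semi (a2, s2) = Semi (a1 <> act s1 a2, s1 <> s2)

instance Action s a => Monoid (Semi a s) where
  mempty = Semi (mempty, mempty)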

We give some examples of applications in the paper but I’m quite certain there are many other examples of applications out there. If you find any, let us know!


  1. I can say that since it wasn’t actually my idea!


Eastman maximal comma-free codes in Haskell

This past January I watched a video of Don Knuth’s most recent annual Christmas lecture. Typically his Christmas lectures have been about trees, but breaking with tradition, he gave this lecture about comma-free codes, and presented an implementation of an interesting algorithm due to Willard Eastman. Of course his implementation was written in CWEB, and during the course of the lecture he noted that his algorithm was iterative, and he didn’t know of a nice way to write it recursively (or something like that). Naturally, that sounded like a challenge, so I implemented it in Haskell, and I think it came out quite nicely. (It still uses “iteration” in the sense of the iterate function, but of course that uses recursion, so…?) Unfortunately, that was in January, it is now July, and I don’t really remember how it works. So I decided I had better write about it now, before I forget even more about how it works.

A comma-free code is a set C of strings such that if you concatenate any two strings in C, the result does not contain any elements of C as internal substrings. The term “comma-free” refers to the fact that sequences of strings from C can be concatenated and later unambiguously decoded, without the need for separators like commas. Even if you start reading somewhere in the middle of a message, you can unambiguously figure out how to partition the message into codewords. For example, {bear, like} is a comma-free code, but {bear, like, earl} is not, since bearlike contains earl as a substring. A comma-free code also obviously cannot contain any periodic strings (that is, strings which consist of repeated copies of some shorter string), like abcabc, since concatenating such a string with itself produces a string containing that same string as an internal substring.
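Just to make the definition concrete, here is a tiny, naive check of the comma-free property for a set of equal-length codewords (a standalone illustration only, not part of Eastman’s algorithm or of the program below):

import Data.List (isInfixOf)

-- Naive check: for every pair of codewords u, v and every codeword w,
-- w must not occur internally in u ++ v.  For equal-length codewords,
-- dropping the first and last character of u ++ v leaves exactly the
-- internal positions, so a plain infix test suffices.
isCommaFree :: Eq a => [[a]] -> Bool
isCommaFree ws = and
  [ not (w `isInfixOf` middle)
  | u <- ws, v <- ws, w <- ws
  , let middle = drop 1 (init (u ++ v))
  ]

For example, isCommaFree ["bear","like"] is True, while isCommaFree ["bear","like","earl"] is False, matching the example above.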

Given a fixed alphabet and codeword length, one is naturally led to ask how large a comma-free code can possibly be. Eastman solved this problem for odd codeword lengths, by showing how to construct a maximal comma-free code. To understand Eastman’s solution, consider the set S of all aperiodic strings of length n over an alphabet \Sigma (we have already seen that periodic strings cannot be part of a comma-free code). Consider two strings equivalent if they are rotations of each other. For example, bear, earb, arbe, and rbea are all equivalent. This is an equivalence relation on strings, and so it defines a partition of S into classes of equivalent strings. Note that we can never have two equivalent strings as part of the same comma-free code, since if we concatenate a string with itself, the result contains all other equivalent strings as substrings. For example, bearbear contains earb, arbe, and rbea. So a comma-free code can contain at most one string from each equivalence class.
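As a small standalone illustration of rotation equivalence and aperiodicity (again, not part of the program below):

-- All rotations of a word, e.g. rotations "bear" == ["bear","earb","arbe","rbea"].
rotations :: [a] -> [[a]]
rotations xs = take (length xs) (iterate rotate xs)
  where
    rotate []     = []
    rotate (y:ys) = ys ++ [y]

-- A word is aperiodic iff it differs from all of its nontrivial rotations;
-- e.g. aperiodic "bear" is True, but aperiodic "abcabc" is False.
aperiodic :: Eq a => [a] -> Bool
aperiodic xs = xs `notElem` drop 1 (rotations xs)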

In fact, Eastman shows that for odd n there are comma-free codes that contain exactly one string from each equivalence class! What’s more, his proof is constructive: he shows how to pick a particular, canonical representative from each equivalence class such that the collection of all such canonical representatives is a comma-free code. This is what the program below does: given an odd-length string, it outputs the canonical rotation of that string which is part of a maximal comma-free code.

So, without further ado, let’s see the implementation! Again, I really don’t remember much about the details of how (much less why) this works. For that, I recommend watching Knuth’s lecture or reading the explanations in his code (you’ll probably want to compile it into LaTeX first).

First, some imports and such. Look, ma, no LANGUAGE extensions!

> module Commafree where
> 
> import           Control.Arrow      (first)
> import           Control.Monad      (when)
> import           Data.List          (findIndex, intercalate)
> import           Data.List.Split
> import           Data.Maybe         (catMaybes)
> import           Data.Monoid        ((<>))
> 
> import           System.Environment
> import           System.Exit
> import           Text.Printf
> import           Text.Read          (readMaybe)

Here’s the main Eastman algorithm, which actually works for any list of things with a total order (unlike Knuth’s, which only works for lists of nonnegative integers, although that is obviously just a cosmetic difference, since any finite set with a total order is isomorphic to a set of nonnegative integers). We turn each item into a singleton “block”, then iterate the eastmanRound function, which partitions the blocks into subsequences of blocks, which we coalesce into blocks again. So each iteration makes the partition coarser, i.e. the blocks get bigger. We keep iterating until there is only one block left, which contains the rotation that we seek.

> eastman :: Ord a => [a] -> [a]
> eastman
>   = blockContent . head . head
>   . dropWhile ((>1) . length)
>   . iterate (map mconcat . eastmanRound)
>   . map mkBlock

Some code for dealing with blocks. A block is just a list that keeps track of its length for efficiency. The important point about blocks is that they are ordered first by length, then lexicographically (see the Ord instance below). The Semigroup and Monoid instances are straightforward.

> data Block a = Block { blockLen :: !Int, blockContent :: [a] }
>   deriving (Show, Eq)
> 
> instance Ord a => Ord (Block a) where
>   compare (Block m as) (Block n bs)
>     = compare m n <> compare as bs
> 
> instance Semigroup (Block a) where
>   Block m as <> Block n bs = Block (m+n) (as ++ bs)
> 
> instance Monoid (Block a) where
>   mempty  = Block 0 []
>   mappend = (<>)
> 
> mkBlock :: a -> Block a
> mkBlock a = Block 1 [a]

One round of the algorithm works as follows: we duplicate the list, partition it after each “dip” (chop splitDip, to be explained below), possibly drop some of the leading parts and coalesce other parts based on size parity (pickOdds), and then keep only a total amount of stuff equal to the length of the original list (takeTotal). This last part with takeTotal ensures that we will end up with something which is a rotation of the original input (though partitioned). In an implementation with random-access arrays, one would just wrap the indices around using mod; in this context it’s easier to first duplicate the input list so we can deal with all rotations at once, determine which rotation we want by dropping some stuff from the beginning, then drop any excess stuff at the end.

> eastmanRound :: Ord a => [a] -> [[a]]
> eastmanRound as
>   = takeTotal (length as)
>   . pickOdds
>   . chop splitDip
>   $ (as ++ as)

It’s interesting to note that in eastmanRound the type a is actually going to be instantiated with Block b for some type b. In the first round, all the blocks are singletons, so this is no different than just taking a list of b. But in subsequent rounds the distinction is nontrivial.

A “dip” is a decreasing substring followed by a single increase, for example, 976325. (Though again, remember that we are actually dealing with sequences of blocks, not integers, so a dip is essentially a sequence of blocks of decreasing length followed by a longer one, with the requisite caveat about blocks of the same length.) splitDip looks for the first place in the list that looks like a > b < c and breaks the list right after it. This is used with the chop function to split the list into a sequence of dips.

> splitDip :: Ord a => [a] -> ([a],[a])
> splitDip (a:b:cs)
>   | a < b     = ([a,b], cs)
>   | otherwise = first (a:) (splitDip (b:cs))
> splitDip as = (as,[])

pickOdds does something like the following: it looks for maximal sequences of dips where the first dip has odd length and the rest have even length, and merges such sequences into one long partition. It also drops everything prior to the first odd dip. Something like that at least; my memory on this is a bit fuzzy.

> pickOdds :: [[a]] -> [[a]]
> pickOdds
>   = map concat
>   . dropWhile (even . length . head)
>   . drop 1
>   . splitAtOdds
> 
> splitAtOdds :: [[a]] -> [[[a]]]
> splitAtOdds = chop $
>   \(x:xs) -> let (ys,zs) = break (odd.length) xs
>              in  (x:ys, zs)

Finally, takeTotal just takes lists until their total length matches the given total.

> takeTotal :: Int -> [[a]] -> [[a]]
> takeTotal _ [] = []
> takeTotal n _ | n <= 0 = []
> takeTotal n (xs:xss) = xs : takeTotal (n - length xs) xss

And that’s it! I also put together a main which more or less emulates what Knuth’s C program does. My program and Knuth’s give the same output on every example I have tried (except that Knuth’s prints out some intermediate information about each iteration step; mine just prints the final answer).

> main :: IO ()
> main = do
>   progName <- getProgName
>   args <- getArgs
>   let n = length args
>   when (n < 3) $ do
>     printf "Usage: %s x1 x2 ... xn\n" progName
>     exitWith (ExitFailure 1)
>   when (even n) $ do
>     printf "The number of items, n, should be odd, not `%d'!\n" n
>     exitWith (ExitFailure 2)
>   let ns :: [Maybe Int]
>       ns = map readMaybe args
>   case findIndex (maybe True (< 0)) ns of
>     Just i -> do
>       printf "Argument %d should be a nonnegative integer, not `%s'!\n"
>                         (i+1)                             (args !! i)
>       exitWith (ExitFailure 3)
>     Nothing ->
>       putStrLn .
>       (' ' :) . intercalate " " . map show .
>       eastman . catMaybes $ ns

Any clues about this Newton iteration formula with Jacobian matrix?

A while ago I wrote about using Boltzmann sampling to generate random instances of algebraic data types, and mentioned that I have some code I inherited for doing the core computations. There is one part of the code that I still don’t understand, having to do with a variant of Newton’s method for finding a fixed point of a mutually recursive system of equations. It seems to work, but I don’t like using code I don’t understand—for example, I’d like to be sure I understand the conditions under which it does work, to be sure I am not misusing it. I’m posting this in the hopes that someone reading this may have an idea.

Let \Phi : \mathbb{R}^n \to \mathbb{R}^n be a vector function, defined elementwise in terms of functions \Phi_1, \dots, \Phi_n : \mathbb{R}^n \to \mathbb{R}:

\displaystyle \Phi(\mathbf{X}) = (\Phi_1(\mathbf{X}), \dots, \Phi_n(\mathbf{X}))

where \mathbf{X} = (X_1, \dots, X_n) is a vector in \mathbb{R}^n. We want to find the fixed point \mathbf{Y} such that \Phi(\mathbf{Y}) = \mathbf{Y}.

The algorithm (you can see the code here) now works as follows. First, define \mathbf{J} as the n \times n Jacobian matrix of partial derivatives of the \Phi_i, that is,

\displaystyle \mathbf{J} = \begin{bmatrix} \frac{\partial}{\partial X_1} \Phi_1 & \dots & \frac{\partial}{\partial X_n} \Phi_1 \\ \vdots & \ddots & \vdots \\ \frac{\partial}{\partial X_1} \Phi_n & \dots & \frac{\partial}{\partial X_n} \Phi_n \end{bmatrix}

Now let \mathbf{Y}_0 = (0, \dots, 0) and let \mathbf{U}_0 = I_n be the n \times n identity matrix. Then for each i \geq 0 define

\displaystyle \mathbf{U}_{i+1} = \mathbf{U}_i + \mathbf{U}_i(\mathbf{J}(\mathbf{Y}_i)\mathbf{U}_i - (\mathbf{U}_i - I_n))

and also

\displaystyle \mathbf{Y}_{i+1} = \mathbf{Y}_i + \mathbf{U}_{i+1}(\Phi(\mathbf{Y}_i) - \mathbf{Y}_i).

Somehow, magically (under appropriate conditions on \Phi, I presume), the sequence of \mathbf{Y}_i converges to the fixed point \mathbf{Y}. But I don’t understand where this is coming from, especially the equation for \mathbf{U}_{i+1}. Most generalizations of Newton’s method that I can find seem to involve multiplying by the inverse of the Jacobian matrix. So what’s going on here? Any ideas/pointers to the literature/etc?
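For concreteness, here is a small Haskell transcription of the recurrences above, using plain lists of lists as matrices (this is only my own sketch of the iteration as I understand it, with hypothetical names like phi and jac; it is not the inherited code I am asking about):

import Data.List (transpose)

type Vec = [Double]
type Mat = [[Double]]

identity :: Int -> Mat
identity n = [ [ if i == j then 1 else 0 | j <- [1..n] ] | i <- [1..n] ]

matMul :: Mat -> Mat -> Mat
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

matVec :: Mat -> Vec -> Vec
matVec a v = [ sum (zipWith (*) row v) | row <- a ]

matAdd, matSub :: Mat -> Mat -> Mat
matAdd = zipWith (zipWith (+))
matSub = zipWith (zipWith (-))

-- One step of the iteration:
--   U_{i+1} = U_i + U_i (J(Y_i) U_i - (U_i - I))
--   Y_{i+1} = Y_i + U_{i+1} (Phi(Y_i) - Y_i)
step :: (Vec -> Vec) -> (Vec -> Mat) -> (Mat, Vec) -> (Mat, Vec)
step phi jac (u, y) = (u', y')
  where
    n  = length y
    u' = u `matAdd` (u `matMul` ((jac y `matMul` u) `matSub` (u `matSub` identity n)))
    y' = zipWith (+) y (matVec u' (zipWith (-) (phi y) y))

-- Start from Y_0 = 0, U_0 = I, and take the k-th iterate.
newtonFix :: (Vec -> Vec) -> (Vec -> Mat) -> Int -> Int -> Vec
newtonFix phi jac n k = snd (iterate (step phi jac) (identity n, replicate n 0) !! k)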


Towards a new programming languages course: ideas welcome!

tl;dr: This fall, I will be teaching an undergraduate PL course, with a focus on practical language design principles and tools. Feedback, questions, assignments you can share with me, etc. are all most welcome!

This fall, I will be teaching an undergraduate course on programming languages. It’s eminently sensible to ask a new hire to take on a course in their specialty, and one might think I would be thrilled. But in a way, I am dreading it.

It’s my own fault, really. In my hubris, I have decided that I don’t like the ways that PL courses are typically taught. So this summer I have to buckle down and actually design the course I do want to teach. It’s not that I’m dreading the course itself, but rather the amount of work it will take to create it!

I’m not a big fan of the sort of “survey of programming languages” course that gets taught a lot, where you spend three or four weeks on each of three or four different languages. I am not sure that students really learn much from the experience (though I would be happy to hear any reports to the contrary). At best it feels sort of like making students “eat their vegetables”—it’s not much fun but it will make them grow big and strong in some general sense.1 It’s unlikely that students will ever use the surveyed languages again. You might hope that students will think to use the surveyed languages later in their career because they were exposed to them in the course; but I doubt it, because three or four weeks is hardly enough to get any real sense for a language and where it might be useful. I think the only real argument for this sort of course is that it “exposes students to new ways of thinking”. While that is certainly true, and exposing students to new ways of thinking is important—essentially every class should be doing it, in one way or another—I think there are better ways to go about it.

In short, I want to design a course that will not only expose students to new ideas and ways of thinking, but will also give them some practical skills that they might actually use in their career. I started by considering the question: what does the field of programming languages uniquely have to offer to students that is both intellectually worthwhile (by my own standards) and valuable to them? Specifically, I want to consider students who go on to do something other than be an academic in PL: what do I want the next generation of software developers and academics in other fields to understand about programming languages?

A lightbulb finally turned on for me when I realized that while the average software developer will probably never use, say, Prolog, they almost certainly will develop a domain-specific language at some point—quite possibly without even realizing they are doing it! In fact, if we include embedded domain-specific languages, then in essence, anyone developing any API at all is creating a language. Even if you don’t want to extend the idea of “embedded domain-specific language” quite that far, the point is that the tools and ideas of language design are widely applicable. Giving students practice designing and implementing languages will make them better programmers.

So I want my course to focus on language design, encompassing both big ideas (type systems, semantics) as well as concrete tools (parsing, ASTs, type checking, interpreters). We will use a functional programming language (specifically, Haskell) for several reasons: to expose the students to a programming paradigm very different from the languages they already know (mainly Java and Python); because FP languages make a great platform for starting to talk about types; and because FP languages also make a great platform for building language-related tools like parsers, type checkers, etc. and for building embedded domain-specific languages. Notably, however, we will only use Haskell: though we will probably study other types of languages, we will use Haskell as a medium for our study, e.g. by implementing simplified versions of them in Haskell. So while the students will be exposed to a number of ideas, there is really only one concrete language they will be exposed to. The hope is that by working in a single language all semester, the students may actually end up with enough experience in the language that they really do go on to use it again later.

As an aside, an interesting challenge/opportunity comes from the fact that approximately half the students in the class will have already taken my functional programming class this past spring, and will therefore be familiar with Haskell. On the challenge side, how do I teach Haskell to the other half of the class without boring the half that already knows it? Part of the answer might lie in emphasis: I will be highlighting very different aspects of the language from those I covered in my FP course, though of course there will necessarily be overlap. On the opportunity side, however, I can also ask: how can I take advantage of the fact that half the class will already know Haskell? For example, can I design things in such a way that they help the other half of the class get up to speed more quickly?

In any case, here’s my current (very!) rough outline for the semester:

  1. Introduction to FP (Haskell) (3 weeks)
  2. Type systems & foundations (2-3 weeks)
    • lambda calculus
    • type systems
  3. Tools for language design and implementation (4 weeks)
    • (lexing &) parsing, ASTs
    • typechecking
    • interpreters
    • (very very basics of) compilers (this is not a compilers course!)
  4. Domain-specific languages (3 weeks)
  5. Social aspects? (1 week)
    • language communities
    • language adoption

My task for the rest of the summer is to develop a more concrete curriculum, and to design some projects. This will likely be a project-based course, where the majority of the points will be concentrated in a few big projects—partly because the nature of the course lends itself well to larger projects, and partly to keep me sane (I will be teaching two other courses at the same time, and having lots of small assignments constantly due is like death by a thousand cuts).

I would love feedback of any kind. Do you think this is a great idea, or a terrible one? Have you, or anyone you know of, ever run a similar course? Do you have any appropriate assignments you’d like to share with me?


  1. Actually, I love vegetables, but anyway.
