Some statistics^{2}:
[I was planning to also make a visualization of my TagTime data showing when I was sleeping, working, or not-working, but putting together the video and this blog post has taken long enough already! Perhaps I’ll get around to it later.]
Overall, I would call the experiment a huge success—although as you can see, I was a full 2.5 hours per day off my target of 13.5 hours of productive work each day. What with eating, showering, making lunch, getting dinner, taking breaks (both intentional breaks as well as slacking off), and a few miscellaneous things I had to take care of like taking the car to get the tire pressure adjusted… it all adds up surprisingly fast. I think this was one of the biggest revelations for me; going into it I thought 3 hours of not-work per day was extremely generous. I now think three hours of not-work per day is probably within reach for me but would be extremely difficult, and would probably require things like planning out meals ahead of time. In any case, 55 hours of actual, focused work is still fantastic.
Some random observations/thoughts:
Having multiple projects to work on was really valuable; when I got tired of working on one thing I could often just switch to something else instead of taking an actual break. I can imagine this might be different if I were working on a big coding project (as most of the other maniac weeks have been). The big project would itself provide multiple different subtasks to work on, but more importantly, coding provides immediate feedback that is really addictive. Code a new feature, and you can actually run the new code! And it does something cool! That it didn’t do before! In contrast, when I write another page of my dissertation I just have… another page of my dissertation. I am, in fact, relatively excited about my dissertation, but it can’t provide that same sort of immediate reinforcing feedback, and it was difficult to keep going at times.
I found that having music playing really helped me get into a state of “flow”. The first few days I would play some album and then it would stop and I wouldn’t think to put on more. Later in the week I would just queue up many hours of music at a time and that worked great.
I was definitely feeling worn out by the end of the week—the last two days in particular, it felt a lot harder to get into a flow. I think I felt so good the first few days that I became overconfident—which is good to keep in mind if I do this again. The evening of 12 August was particularly bad; I just couldn’t focus. It might have been better in the long run to just go home and read a book or something; I’m just not sure how to tell in the moment when I should push through and when it’s better to cut my losses.
Blocking Facebook, turning off email notifications, etc. was really helpful. I did end up allowing myself to check email using my phone (I edited the rules a few hours before I started) and I think it was a good idea—I ended up still needing to communicate with some people, so it was very convenient and not too distracting.
Note there are two places on Tuesday afternoon where you can see the clock jump ahead by an hour or so; of course those are times when I turned off the recording. One corresponded to a time when I needed to read and write some sensitive emails; during the other, I was putting student pictures into an Anki deck, and turned off the recording to avoid running afoul of FERPA.
That’s all I can think of for now; questions or comments, of course, are welcome.
Some technical notes (don’t try this at home; see http://expost.padm.us/maniactech for some recommendations on making your own timelapse). To record and create the video I used a homegrown concoction of scrot, streamer, ImageMagick, and ffmpeg, with some zsh and Haskell scripts to tie it all together, and with diagrams used to generate the clock and tag displays. I took about 3GB worth of raw screenshots, and it takes about half an hour to process all of it into a video.↩
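For the curious, here is a rough sketch of the capture side of such a pipeline (the frame interval, file names, and ffmpeg flags are illustrative guesses, not the actual scripts):

import Control.Concurrent (threadDelay)
import Control.Monad (forM_)
import System.Process (system)
import Text.Printf (printf)

-- Grab a screenshot with scrot every ten seconds, forever (Ctrl-C to
-- stop); the frames can later be stitched into a video with something
-- like: ffmpeg -i frames/%06d.png -r 30 timelapse.mp4
main :: IO ()
main = forM_ [1 :: Int ..] $ \i -> do
  _ <- system (printf "scrot frames/%06d.png" i)
  threadDelay (10 * 1000 * 1000)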
These statistics are according to TagTime, i.e. gathered via random sampling, so there is a bit of inherent uncertainty. I leave it as an exercise for the reader to calculate the proper error bars on these times (given that I use a standard ping interval of 45 minutes).↩
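(One way to work that exercise: modeling the pings as a Poisson process with a mean gap of 45 minutes, a tag that receives $n$ pings yields the estimate $\hat{T} = 45n$ minutes, with standard error roughly $45\sqrt{n}$ minutes. For example, $74$ pings of work corresponds to $74 \times 45 = 3330$ minutes, about $55.5$ hours, give or take $45\sqrt{74} \approx 387$ minutes, i.e. roughly six and a half hours.)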
Computed as 74/(171 – 9) pings multiplied by 24 hours; 9 pings occurred on Sunday morning which I did not count as part of the maniac week.↩
This is somewhat inflated by Saturday night/Sunday morning, when I both slept in and got a higher-than-average number of pings; the average excluding that night is 6.75 hours, which sounds about right.↩
Over the past year I’ve had several people say things along the lines of, “let me know if you want me to read through your thesis”. I never took them all that seriously (it’s easy to say you are willing to read a 200-page document…), but it never hurts to ask, right?
My thesis defense is scheduled for October 14, and I’m currently undertaking a massive writing/editing push to try to get as much of it wrapped up as I can before classes start on September 4. So, if there’s anyone out there actually interested in reading a draft and giving feedback, now is your chance!
The basic idea of my dissertation is to put combinatorial species and related variants (including a port of the theory to HoTT) in a common categorical framework, and then be able to use them for working with/talking about data types. If you’re brave enough to read it, you’ll find lots of category theory and type theory, and very little code—but I can promise lots of examples and pretty pictures. I’ve tried to make it somewhat self-contained, so it may be a good way to learn a bit of category theory or homotopy type theory, if you’ve been curious to learn more about those topics.
You can find the latest draft here (auto-updated every time I commit); more generally, you can find the git repo here. If you notice any typos or grammatical errors, feel free to open a pull request. For anything more substantial—thoughts on the organization, notes or questions about things you found confusing, suggestions for improvement, pointers to other references—please send me an email (first initial last name at gmail). And finally, please send me any feedback by September 9 at the latest (but the earlier the better). I need to have a final version to my committee by September 23.
Last but not least, if you’re interested to read it but don’t have the time or inclination to provide feedback on a draft, never fear—I’ll post an announcement when the final version is ready for your perusal!
Here are the rules:
And no, I’m not crazy. You (yes, you) could do this too.
In my previous post, we considered the “Axiom of Protoequivalence”—that is, the statement that every fully faithful, essentially surjective functor (i.e. every protoequivalence) is an equivalence—and I claimed that in a traditional setting this is equivalent to the axiom of choice. However, intuitively it feels like AP “ought to” be true, whereas AC must be rejected in constructive logic.
One way around this is by generalizing functors to anafunctors, which were introduced by Makkai (1996). The original paper is difficult going: it is dense with detail, poorly typeset, and can only be downloaded as seven separate PostScript files. There is also quite a lot of legitimate depth to the paper, which requires significant categorical sophistication (more than I possess) to fully understand. However, the basic ideas are not too hard to grok, and that’s what I will present here.
It’s important to note at the outset that anafunctors are much more than just a technical device enabling the Axiom of Protoequivalence. More generally, if everything in category theory is supposed to be done “up to isomorphism”, it is a bit suspect that functors have to be defined for objects on the nose. Anafunctors can be seen as a generalization of functors, where each object in the source category is sent not just to a single object, but to an entire isomorphism class of objects, without privileging any particular object in the class. In other words, anafunctors are functors whose “values are specified only up to unique isomorphism”.
Such functors represent a many-to-many relationship between objects of $\mathbb{C}$ and objects of $\mathbb{D}$. Normal functors, as with any function, may of course map multiple objects of $\mathbb{C}$ to the same object in $\mathbb{D}$. The novel aspect is the ability to have a single object of $\mathbb{C}$ correspond to multiple objects of $\mathbb{D}$. The key idea is to add a class of “specifications” which mediate the relationship between objects in the source and target categories, in exactly the same way that a “junction table” must be added to support a many-to-many relationship in a database schema, as illustrated below:
On the left is a many-to-many relation between a set of shapes and a set of numbers. On the right, this relation has been mediated by a “junction table” containing a set of “specifications”—in this case, each specification is simply a pair of a shape and a number—together with two mappings (one-to-many relations) from the specifications to both of the original sets, such that a specification $(s, n)$ maps to a shape $s$ and a number $n$ if and only if $s$ and $n$ were originally related.
In particular, an anafunctor $F : \mathbb{C} \to \mathbb{D}$ is defined as follows. It consists of a class $|F|$ of specifications, together with two functions $\sigma : |F| \to \mathrm{Ob}\,\mathbb{C}$ and $\tau : |F| \to \mathrm{Ob}\,\mathbb{D}$ assigning to each specification a source object in $\mathbb{C}$ and a target object in $\mathbb{D}$.
$|F|$, $\sigma$, and $\tau$ together define a many-to-many relationship between objects of $\mathbb{C}$ and objects of $\mathbb{D}$. $D \in \mathbb{D}$ is called a specified value of $F$ at $C$ if there is some specification $s \in |F|$ such that $\sigma(s) = C$ and $\tau(s) = D$, in which case we write $F_s(C) = D$. Moreover, $D$ is a value of $F$ at $C$ (not necessarily a specified one) if there is some $s$ for which $F_s(C) = D$.
The idea now is to impose additional conditions which ensure that this data “acts like” a regular functor $\mathbb{C} \to \mathbb{D}$: every object of $\mathbb{C}$ must have at least one specification (that is, $\sigma$ must be surjective); for each pair of specifications $s, t \in |F|$ and each morphism $f : \sigma(s) \to \sigma(t)$ in $\mathbb{C}$ there must be a corresponding morphism $F_{s,t}(f) : \tau(s) \to \tau(t)$ in $\mathbb{D}$; and this action on morphisms must preserve identities and composition.
Our initial intuition was that an anafunctor should map objects of $\mathbb{C}$ to isomorphism classes of objects in $\mathbb{D}$. This may not be immediately apparent from the definition, but is in fact the case. In particular, the identity morphism $\mathrm{id}_C$ maps to isomorphisms between specified values of $C$; that is, under the action of an anafunctor, an object $C$ together with its identity morphism “blow up” into an isomorphism class (aka a clique). To see this, let $s, t \in |F|$ be two different specifications corresponding to $C$, that is, $\sigma(s) = \sigma(t) = C$. Then by preservation of composition and identities, we have $F_{t,s}(\mathrm{id}_C) \circ F_{s,t}(\mathrm{id}_C) = F_{s,s}(\mathrm{id}_C) = \mathrm{id}_{F_s(C)}$ (and symmetrically with $s$ and $t$ interchanged), so $F_{s,t}(\mathrm{id}_C)$ and $F_{t,s}(\mathrm{id}_C)$ constitute an isomorphism between $F_s(C)$ and $F_t(C)$.
There is an alternative, equivalent definition of anafunctors, which is somewhat less intuitive but usually more convenient to work with: an anafunctor $F : \mathbb{C} \to \mathbb{D}$ is a category $\mathbf{S}$ of specifications together with a span of functors $\mathbb{C} \xleftarrow{\sigma} \mathbf{S} \xrightarrow{\tau} \mathbb{D}$ where $\sigma$ is fully faithful and (strictly) surjective on objects.
Note that in this definition, $\sigma$ must be strictly (as opposed to essentially) surjective on objects, that is, for every $C \in \mathbb{C}$ there is some $S \in \mathbf{S}$ such that $\sigma(S) = C$, rather than only requiring $\sigma(S) \cong C$. Given this strict surjectivity on objects, it is equivalent to require $\sigma$ to be full, as in the definition above, or to be (strictly) surjective on the class of all morphisms.
We are punning on notation a bit here: in the original definition of anafunctor, $|F|$ is a set and $\sigma$ and $\tau$ are functions on objects, whereas in this more abstract definition $\mathbf{S}$ is a category and $\sigma$ and $\tau$ are functors. Of course, the two are closely related: given a span of functors $\mathbb{C} \xleftarrow{\sigma} \mathbf{S} \xrightarrow{\tau} \mathbb{D}$, we may simply take the objects of $\mathbf{S}$ as the class of specifications $|F|$, and the actions of the functors $\sigma$ and $\tau$ on objects as the functions from specifications to objects of $\mathbb{C}$ and $\mathbb{D}$. Conversely, given a class of specifications $|F|$ and functions $\sigma$ and $\tau$, we may construct the category $\mathbf{S}$ with $\mathrm{Ob}\,\mathbf{S} = |F|$, and with morphisms $\sigma(s) \to \sigma(t)$ in $\mathbb{C}$ acting as morphisms $s \to t$ in $\mathbf{S}$. From $\mathbf{S}$ to $\mathbb{C}$, we construct the functor given by $\sigma$ on objects and the identity on morphisms, and the other functor maps $f : s \to t$ in $\mathbf{S}$ to $F_{s,t}(f) : \tau(s) \to \tau(t)$ in $\mathbb{D}$.
Every functor $F : \mathbb{C} \to \mathbb{D}$ can be trivially turned into an anafunctor, taking $|F| = \mathrm{Ob}\,\mathbb{C}$ with $\sigma$ the identity. Anafunctors also compose. Given compatible anafunctors $F : \mathbb{C} \to \mathbb{D}$ and $G : \mathbb{D} \to \mathbb{E}$ (writing $\sigma_F, \tau_F$ and $\sigma_G, \tau_G$ for their respective specification maps), consider the action of their composite on objects: each object of $\mathbb{C}$ may map to multiple objects of $\mathbb{E}$, via objects of $\mathbb{D}$. Each such mapping corresponds to a zig-zag path $C \leftarrow s \rightarrow D \leftarrow t \rightarrow E$. In order to specify such a path it suffices to give the pair $(s, t)$, which determines $C$, $D$, and $E$. Note, however, that not every pair in $|F| \times |G|$ corresponds to a valid path, but only those which agree on the middle object $D \in \mathbb{D}$. Thus, we may take $\{ (s, t) \mid s \in |F|,\ t \in |G|,\ \tau_F(s) = \sigma_G(t) \}$ as the set of specifications for the composite $G \circ F$, with $\sigma(s, t) = \sigma_F(s)$ and $\tau(s, t) = \tau_G(t)$. On morphisms, $(G \circ F)_{(s,t),(s',t')}(f) = G_{t,t'}(F_{s,s'}(f))$. It is not hard to check that this satisfies the anafunctor laws.
If you know what a pullback is, note that the same thing can also be defined at a higher level in terms of spans. $\mathbf{Cat}$, the category of all (small) categories, is complete, and in particular has pullbacks, so we may construct a new anafunctor from $\mathbb{C}$ to $\mathbb{E}$ by taking a pullback of $\tau_F$ and $\sigma_G$ and then composing appropriately.
One can go on to define ananatural transformations between anafunctors, and show that together these constitute a $2$-category $\mathbf{AnaCat}$ which is analogous to the usual $2$-category $\mathbf{Cat}$ of (small) categories, functors, and natural transformations; in particular, there is a fully faithful embedding of $\mathbf{Cat}$ into $\mathbf{AnaCat}$, which moreover is an equivalence if AC holds.
To work in category theory based on set theory and classical logic, while avoiding AC, one is therefore justified in “mixing and matching” functors and anafunctors as convenient, but discussing them all as if they were regular functors (except when defining a particular anafunctor). Such usage can be formalized by turning everything into an anafunctor, and translating functor operations and properties into corresponding operations and properties of anafunctors.
However, as I will argue in some future posts, there is a better solution, which is to throw out set theory as a foundation of category theory and start over with homotopy type theory. In that case, thanks to a generalized notion of equality, regular functors act like anafunctors, and in particular AP holds.
Makkai, Michael. 1996. “Avoiding the Axiom of Choice in General Category Theory.” Journal of Pure and Applied Algebra 108 (2). Elsevier: 109–73.
In my previous post, I explained one place where the axiom of choice often shows up in category theory, namely, when defining certain functors whose action on objects is specified only up to unique isomorphism. In this post, I’ll explain another place AC shows up, when talking about equivalence of categories. (Actually, as we’ll see, it’s really the same underlying issue of a functor specified only up to unique isomorphism; this is just a particularly important instantiation of that issue.)
When are two categories “the same”? In traditional category theory, founded on set theory, there are quite a few different definitions of “sameness” for categories. Ultimately, this comes down to the fact that set theory does not make a very good foundation for category theory! There are lots of different ideas of equivalence, and they often do not correspond to the underlying equality on sets, so one must carefully pick and choose which notions of equality to use in which situations (and some choices might be better than others!). Every concept, it seems, comes with “strict” and “weak” variants, and often many others besides. Maintaining the principle of equivalence requires hard work and vigilance.
As an example, consider the following definition, our first candidate for the definition of “sameness” of categories:
Two categories $\mathbb{C}$ and $\mathbb{D}$ are isomorphic if there are functors $F : \mathbb{C} \to \mathbb{D}$ and $G : \mathbb{D} \to \mathbb{C}$ such that $GF = 1_{\mathbb{C}}$ and $FG = 1_{\mathbb{D}}$.
Seems pretty straightforward, right? Well, this is the right idea in general, but it is subtly flawed. In fact, it is somewhat “evil”, in that it talks about equality of functors ($GF$ and $FG$ must be equal to the identity). However, two functors $F$ and $G$ can be isomorphic without being equal, if there is a natural isomorphism between them—that is, a pair of natural transformations $\phi : F \to G$ and $\psi : G \to F$ such that $\phi \circ \psi$ and $\psi \circ \phi$ are both equal to the identity natural transformation.^{1} For example, consider the Haskell functors given by
data Rose a = Node a [Rose a]
data Fork a = Leaf a | Fork (Fork a) (Fork a)
These are obviously not equal, but they are isomorphic, in the sense that there are natural transformations (i.e. polymorphic functions)

rose2fork :: forall a. Rose a -> Fork a
fork2rose :: forall a. Fork a -> Rose a

such that

rose2fork . fork2rose === id
fork2rose . rose2fork === id

(showing this is left as an exercise for the interested reader).
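In case you want a starting point for the exercise, here is one possible pair of definitions (a sketch, not necessarily the tidiest): the idea is the classic correspondence that sends a rose tree’s first child to the left branch and the rest of its children to the right branch.

-- Send the first child left and the remaining siblings right.
rose2fork :: Rose a -> Fork a
rose2fork (Node a [])       = Leaf a
rose2fork (Node a (t : ts)) = Fork (rose2fork t) (rose2fork (Node a ts))

-- Inverse: the right spine of the Fork rebuilds the list of children.
fork2rose :: Fork a -> Rose a
fork2rose (Leaf a)   = Node a []
fork2rose (Fork l r) = case fork2rose r of
  Node a ts -> Node a (fork2rose l : ts)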
Here, then, is a better definition:
Categories $\mathbb{C}$ and $\mathbb{D}$ are equivalent if there are functors $F : \mathbb{C} \to \mathbb{D}$ and $G : \mathbb{D} \to \mathbb{C}$ which are inverse up to natural isomorphism, that is, there are natural isomorphisms $GF \cong 1_{\mathbb{C}}$ and $FG \cong 1_{\mathbb{D}}$.
So the compositions $GF$ and $FG$ do not literally have to be the identity functor, but only (naturally) isomorphic to it. This does turn out to be a well-behaved notion of sameness for categories (although you’ll have to take my word for it).
The story doesn’t end here, however. In set theory, a function is a bijection—that is, an isomorphism of sets—if and only if it is both injective and surjective. By analogy, one might wonder what properties a functor must have in order to be one half of an equivalence. This leads to the following definition:
$\mathbb{C}$ is protoequivalent^{2} to $\mathbb{D}$ if there is a functor $F : \mathbb{C} \to \mathbb{D}$ which is full and faithful (i.e., a bijection on each hom-set) as well as essentially surjective, that is, for every object $D \in \mathbb{D}$ there exists some object $C \in \mathbb{C}$ such that $F(C) \cong D$.
Intuitively, this says that $F$ “embeds” an entire copy of $\mathbb{C}$ into $\mathbb{D}$ (that’s the “full and faithful” part), and that every object of $\mathbb{D}$ which is not directly in the image of $F$ is isomorphic to one that is. So every object of $\mathbb{D}$ is “included” in the image of $F$, at least up to isomorphism (which, remember, is supposed to be all that matters).
So, are equivalence and protoequivalence the same thing? In one direction, it is not too hard to show that every equivalence is a protoequivalence: if $F$ and $G$ are inverse-up-to-natural-isomorphism, then they must be fully faithful and essentially surjective. It would be nice if the converse were also true: in that case, in order to prove two categories equivalent, it would suffice to construct a single functor $F$ from one to the other, and show that $F$ has the requisite properties. This often ends up being more convenient than explicitly constructing two functors and showing they are inverse. However, it turns out that the converse is provable only if one accepts the axiom of choice!
To get an intuitive sense for why this is, suppose $F : \mathbb{C} \to \mathbb{D}$ is fully faithful and essentially surjective. To construct an equivalence between $\mathbb{C}$ and $\mathbb{D}$, we must define a functor $G : \mathbb{D} \to \mathbb{C}$ and show it is inverse to $F$ (up to natural isomorphism). However, to define $G$ we must give its action on each object $D \in \mathbb{D}$, that is, we must exhibit a function $\mathrm{Ob}\,\mathbb{D} \to \mathrm{Ob}\,\mathbb{C}$. We know that for each $D \in \mathbb{D}$ there exists some object $C \in \mathbb{C}$ such that $F(C) \cong D$. That is,

$\{ \{ C \in \mathbb{C} \mid F(C) \cong D \} \mid D \in \mathbb{D} \}$

is a collection of non-empty sets. However, in a non-constructive logic, knowing these sets are nonempty does not actually give us any objects! Instead, we have to use the axiom of choice, which gives us a choice function $\mathrm{Ob}\,\mathbb{D} \to \mathrm{Ob}\,\mathbb{C}$, and we can use this function as the object mapping of the functor $G$.
So AC is required to prove that every protoequivalence is an equivalence. In fact, the association goes deeper yet: it turns out that the statement “every protoequivalence is an equivalence” (let’s call this the Axiom of Protoequivalence, or AP for short) not only requires AC, but is equivalent to it—that is, you can also derive AC given AP as an axiom!
On purely intuitive grounds, however, I would wager that to (almost?) anyone with sufficient category theory experience, it “feels” like AP “ought to be” true. If there is a full, faithful, and essentially surjective functor $F : \mathbb{C} \to \mathbb{D}$, then $\mathbb{C}$ and $\mathbb{D}$ “ought to be” equivalent. The particular choice of inverse functor $G$ “doesn’t matter”, since it makes no difference up to isomorphism. On the other hand, we certainly don’t want to accept the axiom of choice. This puts us in the very awkward and inconsistent position of having two logically equivalent statements which we want to respectively affirm and reject. A fine pickle indeed! What to do?
There are four options (that I know of, at least):

1. Embrace classical logic and the axiom of choice, and with them AP.
2. Avoid AC by generalizing functors to anafunctors.
3. Work in constructive logic, where AP is simply true.
4. Generalize the notion of equality itself, as in homotopy type theory.

The first option is the traditional one: accept AC and don’t worry about it. This is a perfectly sensible and workable approach. It’s important to highlight, therefore, that the “problem” is in some sense more a philosophical problem than a technical one. One can perfectly well adopt this solution and continue to do category theory; it just may not be the “nicest” (a philosophical rather than technical notion!) way to do it.
We can therefore also consider some more creative solutions!
In a classical setting, one can avoid AC and affirm (an analogue of) AP by generalizing the notion of functor to that of anafunctor (Makkai 1996). Essentially, an anafunctor is a functor “defined only up to unique isomorphism”. It turns out that the appropriate analogue of AP, where “functor” has been replaced by “anafunctor”, is indeed true—and neither requires nor implies AC. Anafunctors “act like” functors in a sufficiently strong sense that one can simply do category theory using anafunctors in place of functors. However, one also has to replace natural transformations with “ananatural transformations”, etc., and it quickly gets rather fiddly.
In a constructive setting, a witness of essential surjectivity is necessarily a function which gives an actual witness $C \in \mathbb{C}$, along with a proof that $F(C) \cong D$, for each $D \in \mathbb{D}$. In other words, a constructive witness of essential surjectivity is already a “choice function”, and an inverse functor $G$ can be defined directly, with no need to invoke AC and no need for anafunctors. So in constructive logic, AP is simply true. However, this version of “essential surjectivity” is rather strong, in that it forces you to make choices you might prefer not to make: for each $D \in \mathbb{D}$ there might be many $C \in \mathbb{C}$ with $F(C) \cong D$ to choose from, with no “canonical” choice, and it is annoying (again, a philosophical rather than technical consideration!) to be forced to choose one.
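(Concretely, in dependent-type notation, such a witness is a term $e : (D : \mathrm{Ob}\,\mathbb{D}) \to (C : \mathrm{Ob}\,\mathbb{C}) \times (F(C) \cong D)$, and we may define the action of $G$ on objects simply as the first projection, $G(D) = \pi_1(e(D))$.)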
Instead of generalizing functors, a more direct solution is to generalize the notion of equality. After all, what really seems to be at the heart of all these problems is differing notions of equality (i.e. equality of sets vs isomorphism vs equivalence…). This is precisely what is done in homotopy type theory (Univalent Foundations Program 2013).^{3} It turns out that if one builds up suitable notions of category theory on top of HoTT instead of set theory, then (a) AP is true, (b) without the need for AC, (c) even with a weaker version of essential surjectivity that corresponds more closely to essential surjectivity in classical logic.^{4} This is explained in Chapter 9 of the HoTT book.
I plan to continue writing about these things in upcoming posts, particularly items (2) and (4) above. (If you haven’t caught on by now, I’m essentially blogging parts of my dissertation; we’ll see how far I get before graduating!) In the meantime, feedback and discussion are very welcome!
Makkai, Michael. 1996. “Avoiding the Axiom of Choice in General Category Theory.” Journal of Pure and Applied Algebra 108 (2). Elsevier: 109–73.
Univalent Foundations Program, The. 2013. Homotopy Type Theory: Univalent Foundations of Mathematics. Institute for Advanced Study: http://homotopytypetheory.org/book.
The astute reader may well ask: but how do we know this is a non-evil definition of isomorphism between functors? Is it turtles all the way down (up)? This is a subtle point, but it turns out that it is not evil to talk about equality of natural transformations, since for the usual notion of category there is no higher structure after natural transformations, i.e. no nontrivial morphisms (and hence no nontrivial isomorphisms) between natural transformations. (However, you can have turtles all the way up if you really want.)↩
I made this term up, since there is no term in standard use: of course, if you accept AC, there is no need for a separate term at all!↩
As a historical note, it seems that the original work on anafunctors is part of the same intellectual thread that led to the development of HoTT.↩
That is, using propositional truncation to encode the classical notion of “there exists”.↩
In category theory, one is typically interested in specifying objects only up to unique isomorphism. In fact, definitions which make use of actual equality on objects are sometimes referred to (half-jokingly) as evil. More positively, the principle of equivalence states that properties of mathematical structures should be invariant under equivalence. This principle leads naturally to speaking of “the” object having some property, when in fact there may be many objects with the given property, but all such objects are uniquely isomorphic; this cannot cause confusion if the principle of equivalence is in effect.
This phenomenon should be familiar to anyone who has seen simple universal constructions such as terminal objects or categorical products. For example, an object $T$ is called terminal if there is a unique morphism $C \to T$ from each object $C$. In general, there may be many objects satisfying this criterion. For example, in $\mathbf{Set}$, the category of sets and functions, every singleton set is terminal: there is always a unique function from any set $S$ to a singleton set $\{\star\}$, namely, the function that sends each element of $S$ to $\star$. However, it is not hard to show that any two terminal objects must be uniquely isomorphic^{1}. Thus it “does not matter” which terminal object we use—they all have the same properties, as long as we don’t do anything “evil”—and one therefore speaks of “the” terminal object of the category. As another example, a product of two objects $A$ and $B$ is a diagram $A \leftarrow P \rightarrow B$ with the universal property that any other object $X$ with morphisms to $A$ and $B$ uniquely factors through $P$. Again, there may be multiple such products, but they are all uniquely isomorphic, and one speaks of “the” product $A \times B$.
Note that in some cases, there may be a canonical choice among isomorphic objects. For example, this is the case with products in $\mathbf{Set}$, where we may always pick the Cartesian product $\{(a, b) \mid a \in A, b \in B\}$ as a canonical product of $A$ and $B$ (even though there are also other products, such as $\{(b, a) \mid a \in A, b \in B\}$). In such cases use of “the”, as in “the product of $A$ and $B$”, is even more strongly justified, since we may take it to mean “the canonical product of $A$ and $B$”. However, in many cases (for example, with terminal objects in $\mathbf{Set}$), there is no canonical choice, and “the terminal object” simply means something like “some terminal object, it doesn’t matter which”.
Beneath this seemingly innocuous use of “the” (often referred to as generalized “the”), however, lurks the axiom of choice! For example, if a category $\mathbb{C}$ has all products, we can define a functor $P : \mathbb{C} \times \mathbb{C} \to \mathbb{C}$^{2} which picks out “the” product of any two objects $A$ and $B$—indeed, $P(A, B)$ may be taken as the definition of the product of $A$ and $B$. But how is $P$ to be defined? Consider $\{ \mathrm{Prod}(A, B) \mid A, B \in \mathrm{Ob}\,\mathbb{C} \}$, where $\mathrm{Prod}(A, B)$ denotes the set of all possible products of $A$ and $B$, i.e. all suitable diagrams $A \leftarrow Q \rightarrow B$ in $\mathbb{C}$. Since $\mathbb{C}$ has all products, this is a collection of nonempty sets; therefore we may invoke AC to obtain a choice function, which is precisely $P_0$, the action of $P$ on objects. The action of $P$ on morphisms may then be defined straightforwardly.
The axiom of choice really is necessary to construct $P$: as has already been noted, there is, in general, no way to make some canonical choice of object from each equivalence class. On the other hand, this seems like a fairly “benign” use of AC. If we have a collection of equivalence classes, where the elements in each class are all uniquely isomorphic, then using AC to pick one representative from each really “does not matter”, in the sense that we cannot tell the difference between different choices (as long as we refrain from evil). Unfortunately, even such “benign” use of AC still poses a problem for computation.
If you have never seen this proof before, I highly recommend working it out for yourself. Given two terminal objects $T_1$ and $T_2$, what morphisms must exist between them? What can you say about their composition? You will need to use both the existence and uniqueness of morphisms to terminal objects.↩
Note that we have made use here of “the” product category $\mathbb{C} \times \mathbb{C}$—fortunately $\mathbf{Cat}$, like $\mathbf{Set}$, has a suitably canonical notion of products.↩
The (in)famous Axiom of Choice (hereafter, AC) can be formulated in a number of equivalent ways. Perhaps the most well-known is:

The Cartesian product of any collection of non-empty sets is non-empty.
Given a family of sets $\{X_i\}_{i \in I}$, an element of their Cartesian product is some $I$-indexed tuple $\{x_i\}_{i \in I}$ where $x_i \in X_i$ for each $i$. Such a tuple can be thought of as a function (called a choice function) which picks out some particular $x_i$ from each $X_i$.
We can express this in type theory as follows. First, we assume we have some type $I$ which indexes the collection of sets; that is, there will be one set for each value of type $I$. Given some type $A$, we can define a subset of the values of type $A$ using a predicate, that is, a function $P : A \to \mathcal{U}$ (where $\mathcal{U}$ denotes the universe of types). For some particular $a : A$, applying $P$ to $a$ yields a type, which can be thought of as the type of evidence that $a$ is in the subset $P$; $a$ is in the subset if and only if $P(a)$ is inhabited. An $I$-indexed collection of subsets of $A$ can then be expressed as a function $C : I \to A \to \mathcal{U}$. In particular, $C(i, a)$ is the type of evidence that $a$ is in the subset indexed by $i$. (Note that we could also make $A$ into a family of types indexed by $I$, that is, $A : I \to \mathcal{U}$, but it wouldn’t add anything to this discussion.)
A set is nonempty if it has at least one element, so the fact that all the sets in the collection are nonempty can be modeled by a dependent function which yields an element of $A$ for each index, along with a proof that it is contained in the corresponding subset:

$(i : I) \to (a : A) \times C(i, a)$

(Note I’m using the notation $(i : I) \to \dots$ for dependent function types instead of $\Pi_{i : I} \dots$, and $(a : A) \times \dots$ for dependent pairs instead of $\Sigma_{a : A} \dots$.) An element of the Cartesian product of the collection can be expressed as a function $g : I \to A$ that picks out an element for each $i$ (the choice function), together with a proof that the chosen elements are in the appropriate sets:

$(g : I \to A) \times ((i : I) \to C(i, g(i)))$

Putting these together, apparently the axiom of choice can be modelled by the type

$((i : I) \to (a : A) \times C(i, a)) \to (g : I \to A) \times ((i : I) \to C(i, g(i)))$

Converting back to $\Pi$ and $\Sigma$ notation and squinting actually gives some good insight into what is going on here:

$\left( \prod_{i : I} \sum_{a : A} C(i, a) \right) \to \left( \sum_{g : I \to A} \prod_{i : I} C(i, g(i)) \right)$
Essentially, this says that we can “turn a (dependent) product of sums into a (dependent) sum of products”. This sounds a lot like distributivity, and indeed, the strange thing is that this is simply true: implementing a function of this type is a simple exercise! If you aren’t familiar with dependent type theory, you can get the intuitive idea by implementing a non-dependent Haskell analogue, namely something of type

(i -> (a,c)) -> (i -> a, i -> c)
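If you want to check your answer, here is one possible solution (the name ac is arbitrary):

-- Turn an index-wise pair into a pair of index-wise functions,
-- by post-composing with the two projections.
ac :: (i -> (a, c)) -> (i -> a, i -> c)
ac f = (fst . f, snd . f)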
Not too hard, is it? (The implementation of the dependent version is essentially the same; it’s only the types that get more complicated, not the implementation.) So what’s going on here? Why is AC so controversial if it is simply true in type theory?
This is not the axiom of choice you’re looking for. — Obi-Wan Funobi
The problem, it turns out, is that we’ve modelled the axiom of choice improperly, and it all boils down to how “non-empty” is defined. When a mathematician says “$X$ is non-empty”, they typically don’t actually mean “…and here is an element of $X$ to prove it”; instead, they literally mean “it is not the case that $X$ is empty”, that is, assuming $X$ is empty leads to a contradiction. (Actually, it is a bit more subtle yet, but this is a good first approximation.) In classical logic, these viewpoints are equivalent; in constructive logic, however, they are very different! In constructive logic, knowing that it is a contradiction for $X$ to be empty does not actually help you find an element of $X$. We modelled the statement “this is a collection of non-empty sets” essentially by saying “here is an element in each set”, but in constructive logic that is a much stronger statement than simply saying that each set is not empty.
(I should mention at this point that when working in HoTT, the best way to model what classical mathematicians mean when they say “$X$ is non-empty” is probably not with a negation, but instead with the propositional truncation of the statement that $X$ contains an element. Explaining this would take us too far afield; if you’re interested, you can find details in Chapter 3 of the HoTT book, where all of this and much more is explained in great detail.)
From this point of view, we can see why the “AC” in the previous section was easy to implement: it had to produce a function choosing a bunch of elements, but it was given a bunch of elements to start! All it had to do was shuffle them around a bit. The “real” AC, on the other hand, has a much harder job: it is told some sets are non-empty, but without any actual elements being mentioned, and it then has to manufacture a bunch of elements out of thin air. This is why it has to be taken as an axiom; we can also see that it doesn’t fit very well in a constructive/computational context. Although it is logically consistent to assume it as an axiom, it has no computational interpretation, so anything we define using it will just get stuck operationally.
So, we’ll just avoid using AC. No problem, right?
The problem is that AC is really sneaky. It tends to show up all over the place, but disguised so that you don’t even realize it’s there. You really have to train yourself to think in a fundamentally constructive way before you start to notice the places where it is used. Next time I’ll explain one place it shows up a lot, namely, when defining functors in category theory (though thankfully, not when defining Functor instances in Haskell).
Brent recently gave a talk at the New York Haskell Users’ Group presenting the new release. You can find videos of the talk on vimeo: part 1 presents a basic introduction to the library, and part 2 talks about mathematical abstraction and DSL design. The slides are also available.
This release includes a number of significant new features and improvements. Highlights include:
Support for drawing arrows between given points or between diagrams, with many options for customization (tutorial, documentation, API); see the small example after this list.
A new framework for creating custom command-line-driven executables for diagram generation (tutorial, API).
Offsets of trails and paths, i.e. compute the trail or path lying a constant distance from the given one (documentation, API).
A new API, based on Metafont, for constructing cubic splines with control over things like tangents and “tension” (tutorial, API).
Tangent and normal vectors of segments and trails (API).
Alignment can now be done by trace in addition to envelope (API).
The lens package is now used consistently for record fields throughout the library (documentation).
Across-the-board improvements in performance and size of generated files.
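As a tiny taste of the new arrows support, here is a quick sketch (assuming the SVG backend; it also uses mainWith from the new command-line framework mentioned above):

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine

-- Draw two dots and an arrow between the given points.
example :: Diagram SVG R2
example = (dots <> arrowBetween a b) # centerXY # pad 1.1
  where
    a    = p2 (0, 0)
    b    = p2 (3, 1)
    dots = position [ (a, dot), (b, dot) ]
    dot  = circle 0.05 # fc black

main :: IO ()
main = mainWith example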
See the release notes for full details, and the migration guide for help porting your diagrams 0.7 code to work with diagrams 1.0.
For the truly impatient:
cabal install diagrams
Diagrams is supported under GHC 7.4 and 7.6.
To get started, read the quick start tutorial, which will introduce you to the fundamentals of the framework and provide links for further reading.
For those who are less impatient and want to really dig in and use the power features, read the extensive user manual. There is also a growing collection of tutorials on specific topics.
Diagrams has a friendly and growing community of users and developers. To connect with the community, subscribe to the project mailing list, and/or come hang out in the #diagrams IRC channel on freenode.org for help and discussion. Development continues stronger than ever, and there are a wide range of projects available for new contributors of all levels of Haskell skill. Make some diagrams. Fix some bugs. Submit your cool examples for inclusion in the gallery or your cool code for inclusion in the diagrams-contrib package.
Happy diagramming!
Brought to you by the diagrams team:
with contributions from:
In an attempt to solidify and extend my knowledge of category theory, I have been working my way through the excellent series of category theory lectures posted on Youtube by Eugenia Cheng and Simon Willerton, aka the Catsters.
Edsko de Vries used to have a listing of the videos, but it is no longer available. After wresting a copy from a Google cache, I began working my way through the videos, but soon discovered that Edsko’s list was organized by subject, not topologically sorted. So I started making my own list, and have put it up here in the hopes that it may be useful to others. Suggestions, corrections, improvements, etc. are of course welcome!
As far as possible, I have tried to arrange the order so that each video only depends on concepts from earlier ones. Along with each video you can also find my cryptic notes; I make no guarantee that they will be useful to anyone (even me!), but hopefully they will at least give you an idea of what is in each video.
I have a goal to watch two videos per week (at which rate it will take me about nine months to watch all of them); I will keep the list updated with new video links and notes as I go.
Victor starts by surveying the state of the art in options for the creative scientist who wants to visualize some data. The options he outlines are:
Use some program like Excel which has a standard repertoire of graphs it can generate. The problem with this approach is that it completely stifles any creativity and freedom in visualizing data.
Use a drawing program like Illustrator or Inkscape. This gives more freedom, of course, but the process is tedious and the results cannot easily be modified.
The final option is to write some code in a framework like Processing or d3.js. The problem here, Victor says, is that you are just staring at a mass of symbols with no immediate, dynamic feedback.
He then goes on to demo a really cool prototype tool that allows drawing using a graphical interface, a bit like Illustrator or Inkscape. But the similarity is only surface deep: where those programs are restrictive and inflexible, Victor’s is richly interactive and editable. Instead of drawing concretely located lines, circles, and so on, it infers the relationships between things you are drawing, so updating the characteristics or positioning of one element automatically updates all the others which depend on it as well. In other words, one can construct a generic, editable visualization just by drawing one particular example of it.
Victor is quite negative about option (3) above—drawing by coding—referring to programming as “blindly manipulating symbols”: “blind” because you can’t actually see the picture you are creating while writing the program.
What I would like to point out is that in fact, despite his negativity about drawing by programming, when using his graphical tool Victor is still programming! It’s just that he has a graphical interface which allows him to construct certain sorts of programs, instead of writing the programs directly. In fact, you can see the programs he constructs on the left side of the screen in his tool. They appear to be structured imperative programs, consisting of sequences of drawing instructions together with things like loops and conditionals.
The problem is that this kind of higher-level interface cannot provide for all possible circumstances (unless you somehow make it Turing complete, but in that case it probably ceases to be at all intuitive). For example, Victor impressively drags and drops some spreadsheet data into his application. But what if I want to use data which is structured in some other format? What if I need to preprocess the data in some computationally nontrivial way? Or on the drawing end, what if I want to draw some shapes or compute some positions in a way that the interface does not provide for? We can’t completely get away from the need to write code in the service of visualization.
What we really need is a more inclusive idea of “programming”, and a continuum between direct manipulation of images and manipulation of symbols to produce images. Symbolic methods, of course, can be incredibly powerful—there is nothing inherently wrong with manipulating symbols.
More specifically, I am proposing something like the following:
First, I am all for making elegant and powerful high-level graphical interfaces for constructing interactive, editable drawings—and not just drawings but code that generates drawings. This can probably be pushed quite far, and there is lots of HCI research to be done here. There are also some very interesting questions relating to bidirectional computation here: making an edit via the graphical interface corresponds to some sort of edit to the code; how can this be done in a sensible and consistent way?
Recognizing, however, that sometimes you do need to actually write some code, how can we make the underlying language as beautiful as possible, and how can we make the interaction between the two systems (code and higher-level graphical interface) as elegant and seamless as possible? The ideal is for a user to be able to flow easily back and forth between the two modes, ideally spending much of their time in a high-level graphical mode.
Of course, if you hadn’t guessed by now, in the long term this is the sort of direction I would love to go with diagrams… though I need to finish my dissertation first (more on that subject soon).