The Many Functions of Functor

Posted on October 22, 2014

Although I became most familiar with the word through exposure to Category Theory, I knew that “functor” had at least a couple of other meanings in different contexts. I’d looked up its history before, but neglected to write it down. Now I won’t have to do it again!

The Players: Carnap, Quine, Tarski

According to a number of sources, “functor” entered the lexicon via the work of Rudolf Carnap. Carnap studied philosophy, physics, and mathematics at university, including several courses in mathematics and logic from Gottlob Frege. He later studied the theory of relativity at Berlin, where Einstein was professor of physics.

His dissertation, written from a philosophical point of view, concerned an axiomatic system for space-time theory. Thereafter he became involved with the Vienna Circle of philosophers and ended up as a leading member. This group advanced the philosophy of logical positivism and was deeply concerned with the philosophy and practice of science.

In 1929, Carnap met Alfred Tarski, a Polish logician from the Lvov-Warsaw school of philosophy who was developing a logical basis for the meaning of truth. A few years later, Carnap moved to Prague to become a professor, and there he wrote the 1934 book The Logical Syntax of Language, which contains (as far as I am aware) the first use of “functor”. I will return to the book in a moment, after making a few more of Carnap’s connections apparent.

In Prague, Carnap met the philosophers Willard van Orman Quine and Charles Morris. As the political situation in Europe deteriorated, they helped Carnap emigrate to the United States in 1935, where he remained. Tarski also escaped to the USA, just ahead of the German invasion of Poland, and the trio of Carnap, Quine, and Tarski were thus able to meet (along with other leading philosophers and logicians) for a year at Harvard to work on the leading issues of analytical philosophy.

Carnap’s Logical Syntax of Language

As part of his desire to put the practice of science on firm logical ground, Carnap wanted to have an appropriate understanding of language that could be used to properly express scientific truths. Logical Syntax creates a formal system, a meta-language, with which to reason about constructs of natural language.

The meta-language defines syntactic constructs belonging to five categories:

  1. A fixed set of 11 symbols
  2. (Numerical) variables
  3. Constant numerals
  4. Predicates
  5. Functors

The point was to remove ambiguity of names and to add precision by designating things numerically as much as possible.

Basic forms

Designation of things was done positionally, by numerical coordinates rather than by symbolic name. Positional numerals were defined inductively, with 0 designating the initial position and primes marking successive positions, so that 0, 0', 0'', ... designate the positions we would ordinarily number 0, 1, 2, .... The traditional Arabic numerals were introduced as shorthand, as well as the possibility of tupling numbers to represent higher-dimensional positions.

Expressing properties of things was done via predicates over positions. These could express direct properties of things or relationships between things, but in a symbolic manner via the name-form of the predicate.

In order to represent properties numerically, the idea of the functor was introduced. Functors were similar in form to predicates, in that they had a name-form and took numeric arguments, but instead of expressing facts or relations they designated another numeric value.

Expressions

Valid combinations of the basic forms create expressions. Two important kinds of expressions were “sentences” and “numerical expressions.”

Sentences express a logical truth value, so the expression of a predicate form is a sentence. Some of the symbols are combining forms that create sentences as well; the equality symbol, for example, forms equations.

Expressions of functors, on the other hand, are always numerical expressions. Numeric expression was very important to Carnap for the precise transmission of scientific detail, so he valued this precision of functor expression over the more “vague” relative or symbolic nature of predicate expressions.

Although Carnap’s concept of functor seems similar to that of a mathematical function, it is actually quite different. Functions are always a fixed relation between numbers, but functors may also represent numerical observations of the world, such as the example given early on of the temperature-functor, whose value is the temperature at the position given by its argument (writing, say, te(3) for the temperature at position 3).

Connection to Category Theory

Saunders Mac Lane, one of the founders of Category Theory, was an associate of Carnap’s at the University of Chicago and was given the English translation of Logical Syntax of Language in 1938 for review. He was not particularly interested in the philosophical aspects of Carnap’s usage of “functor”, but in it he saw a perfect candidate to resolve a similar notational dilemma in Category Theory.

So, the Category Theory notion of “functor” is related to Carnap’s original usage by similarity of role, as a stand-in for something “function-like” in some aspect, rather than being related by definition or usage. Steve Awodey, an associate of Mac Lane’s and a category theorist himself, remarked that the word “functor” may end up being “Carnap’s most far-reaching contribution to modern mathematics.”

Quine’s Predicate-Functor Logic

Quine, Tarski, and Carnap conversed about Carnap’s logical syntax while they were together at Harvard, and it seems to have influenced Quine’s thinking about logic. He certainly adopted the term “functor”, though he expanded its meaning. Quine thought that Carnap’s connective symbols, e.g. the equality symbol that generates equations, should be replaced by more general operators that construct sentences via composition.

Quine later referred to these operators as “predicate-functors” (possibly following Carnap’s convention of naming a sort of functor by its domain, e.g. “temperature-functor”) and developed an algebraic logic based around them.

He later gave a more general definition to the term “functor” itself, encompassing this new role as a generator of sentences rather than a numeric relation. It should also be noted that while Carnap was concerned with discussing language in general, including natural language, Quine was primarily interested in formal logic systems. In particular, his Predicate Functor Logic was an alternative to Combinatory Logic that avoided higher-order functions and the resulting encroachment of set theory into the system, which he viewed as outside the scope of logic.

Categorical Grammar

Tarski’s early career was influenced by his thesis advisor Stanislaw Lesniewski and also by the other prominent Lvov-Warsaw philosopher Kazimierz Ajdukiewicz. Lesniewski worked in the field of ontology and was particularly interested in semantic categories; this use of “category” is due to Husserl and can be traced back to Aristotle.

Ajdukiewicz applied the notion of categories to the syntax of language, creating the field of categorical grammar. In reasonable languages, the syntactic structure was thought to determine the semantic meaning; hence the borrowed categorical terminology.

Syntactic categories are equivalence classes of expressions: a sentence containing one of these expressions remains a sentence when that expression is replaced by any other expression belonging to the same category. As such, syntactic categories are essentially the same as Russell’s types.

There are two kinds of categories: basic categories, which are enumerated in the definition of the language, and complex (derived) categories, which comprise all the others. Expressions in complex categories are created by combining other expressions via a special kind of “functor” expression. These other expressions are called the “arguments” to the functor, and the new compound expression is the functor’s “value”.

Although Ajdukiewicz’s early work on syntactic categories predated Logical Syntax of Language, it seems likely that the field of Categorical Grammar was influenced directly by the text, which shared similar goals, and indirectly through Tarski and Quine, who made use of it (if not for precisely the same purposes) in their own work.

Categorical Grammar is primarily concerned with matters of natural language, but Joachim Lambek’s work on the Lambek Calculus drew it somewhat back towards mathematics and logic. The Lambek Calculus resembles, in some ways, Alonzo Church’s Lambda Calculus (of the simply typed variety), Quine’s Predicate Functor Logic, and combinatory logic.

Of parenthetical interest is the fact that the logicians of the Lvov-Warsaw school preferred to place their functors in front of their arguments; by this practice they allowed expressions to be unambiguous without parentheses (writing, e.g., + 3 4 rather than 3 + 4). This practice became known as “Polish Notation”, which is familiar to many users of Hewlett-Packard calculators by way of its similarly bracketless Reverse Polish Notation (3 4 +).

Functors in Programming

The use of functor via its Category Theory interpretation is familiar to many functional programmers today. It has a direct, though not all-encompassing, presence in Haskell via the Functor type class. Perhaps not quite so familiar is the linguistic interpretation as used by Prolog. And the SML Module language uses the term functor; although this can be interpreted in the categorical sense, it also makes sense from the perspective of Quine’s Predicate Functor Logic. C++’s use of functor to mean “function-like object” seems to be another repurposing of the word along the lines of Mac Lane’s.
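In Haskell, the type class captures the arrow-mapping part of a categorical functor. Here is a minimal sketch; the Pair type and its instance are my own illustration, not from any library:

    -- The Functor class as defined in the Prelude:
    --   class Functor f where
    --       fmap :: (a -> b) -> f a -> f b

    -- An illustrative instance for a simple two-element container:
    data Pair a = Pair a a deriving Show

    instance Functor Pair where
        fmap f (Pair x y) = Pair (f x) (f y)

    main :: IO ()
    main = print (fmap (* 2) (Pair 3 4))   -- Pair 6 8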

Operator Sections in Haskell: A History

Posted on October 17, 2014

Operator Sections

I was explaining the Haskell notational trick of partially applying the second argument of a two-parameter function via a combination of back-quotes, which turn a named function into an infix operator, and the operator section syntax. For example, you can express the function that gives a value modulo 10 as:
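    (`mod` 10)       -- i.e. \x -> x `mod` 10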

The question came up of just where the idea for operator sections came from. Because this is precisely the kind of useless information I can’t help but be curious about, I resolved to find an answer. And after a bit of digging, I was successful.

Haskell History

Our first stop on the journey through functional programming history is at the wonderful paper by several of the major contributors to Haskell, A History of Haskell: Being Lazy With Class. Since I came across it, this has been my first source for the answers to questions of Haskell history, and once again it didn’t fail me. From section 4.2:

The solution to this problem was to use a generalised notion of sections, a notation that first appeared in David Wile’s dissertation (Wile, 1973) and was then disseminated via IFIP WG2.1–among others to Bird, who adopted it in his work, and Turner, who introduced it into Miranda.

Turner, of course, refers to David Turner who is the man behind the languages SASL, KRC, and Miranda. Miranda was a commercial product of Turner’s company Research Software Limited, and was the most mature and widely used of the family of non-strict functional languages at the time. The business needs of Research Software and the desire of the functional language community for a standard language didn’t quite converge, though, so Haskell arose as a non-commercial alternative.

So, Haskell got operator sections (as it got a great deal of its syntax) from Miranda. That’s not very surprising and didn’t really satisfy my curiosity, so I followed up on the next breadcrumb in the trail, Wile’s dissertation.

One More Step

The document in question is entitled A Generative, Nested-Sequential Basis for General Purpose Programming Languages, and was submitted to Carnegie-Mellon University in November, 1973 by David Sheridan Wile. It describes the idea of taming the wild pointers of data structures via similar structuring techniques to those that were being applied to tame the wild control flow of GOTO-based code, and cites the languages BLISS, APL, and LISP as primary influences.

When first asked about operator sections, I guessed that the influence had come somehow through APL, so I was gratified to see my instinct validated. In fact, the notation presented borrows heavily from APL but marries it with ideas from non-strict functional programming, such as natural representations of infinite data and the way that such data mediates the interaction of co-routines.

Sure enough, on page 16 we find the following:

Partially instantiated functions are called “sections” in mathematical literature, and we adopt the term here for convenience. The nature of sections is ambiguous: they are both program and data, and attempts to define them as one or the other rely on a preconceived implementation.

And on page 30, he explains further:

A unique primitive operation which produces primitive operators is also permitted; this operation is termed “partial instantiation”. The “section” or “partially instantiated function” was motivated in Chapter 1 as a natural mechanism for expressing data structure concepts of restriction. In fact, they play a much more significant role in the basis in that many programs are sequences of partially instantiated functions. In particular, we allow the partial instantiation of any binary operator to produce either a left- or right-unary operator.
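That left/right distinction survives intact in Haskell’s section syntax:

    -- right section: fixes the second argument
    half :: Double -> Double
    half = (/ 2)               -- \x -> x / 2

    -- left section: fixes the first argument
    reciprocal :: Double -> Double
    reciprocal = (1 /)         -- \x -> 1 / x

    main :: IO ()
    main = print (half 10, reciprocal 4)   -- (5.0,0.25)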

So, that’s certainly the source of Haskell’s notion of operator sections! But what about that reference to the mathematical literature? Can we trace the idea back further?

Recursive Functions

This turns out to be Theory of Recursive Functions and Effective Computability, by Hartley Rogers, Jr. Rogers was a Ph.D. student under none other than Alonzo Church, father of the Lambda Calculus. The text began as a set of notes published by the MIT Math department in ’57 and grew into its first publication as a book in ’67, during a period of great progress in its field.

The citation of the book points to a page in section 5.4, on projection theorems relating to recursively enumerable sets. Specifically, the definition of “section” is given in terms of a k-ary relation R in which one of the k terms is fixed to some value n. Since a binary operator determines such a relation, fixing one of its parameters yields what qualifies as a section of the overall relation described by the operator.
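The parallel can be sketched in Haskell (the names here are mine, purely for illustration):

    -- a binary relation over the integers...
    type Rel2 = Integer -> Integer -> Bool

    -- ...and the "section" obtained by fixing its second term to n
    sectionAt :: Rel2 -> Integer -> (Integer -> Bool)
    sectionAt r n = \x -> r x n

    main :: IO ()
    main = print (map (sectionAt (<) 10) [5, 10, 15])   -- [True,False,False]
    -- compare the ordinary operator section (< 10)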

Relation to Categorical Sections?

I recalled having heard some other mathematical definition of the term section, and on a hunch I looked it up at nLab, which is a site discussing Category Theory. It turns out that a section in that context is a right-inverse to a mapping; i.e. a section of a map f: A -> B is a map s: B -> A such that composing it on the right of f yields B -> A -> B, the identity map on B. This doesn’t seem to be a related concept to what Rogers described, but perhaps I’m missing some deeper connection.

More on Hakyll

Posted on September 7, 2013

On Personal Websites

I have had a website running, off and on, since the mid-90s. At first, it was a simple collection of HTML files with a few images. Web technologies developed rapidly, though, and it became time-consuming to do anything interesting with my sites.

I’ve used a number of “simplifying” tools over the years, but most of them relied on dynamically generating pages from a SQL database of some sort, and I just don’t want to deal with that right now. I want simple markup again, and I want it to be served directly and efficiently by any old plain web server.

My Basic Requirements

HTML is no longer really a markup language; it’s more of a structured document modeling language. I don’t want to compose text documents in code; I want to write them naturally in my native language. A few stylistic cues in my text formatting should suffice to specify the structure of these simple text documents.

So, Markdown provides exactly the sort of markup language I want, but a web site requires more structure. Exactly the structure provided by HTML, in fact. So, Pandoc converts Markdown to HTML (among many other formats), which provides the core functionality I need.

But there’s more– a bunch of HTML files don’t quite make a cohesive site. They need some common additional structure, such as headers and footers. They need a navigation structure to link them together. And it would be nice to be able to generate content syndication feeds as well.

What I need is a system that allows me to compose simple text files written in a text-based markup language together with some HTML templates into a static set of interlinked pure-HTML pages, along with some other resources, whenever I write a new document.

Enter Hakyll

The Hakyll system provides a flexible language for describing just that sort of system. It is essentially a static site compiler built from a set of rules for transforming source documents to output documents.

This code snippet shows the rule for compiling top-level pages of my site from Markdown to HTML:
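    match "*.markdown" $ do
        route $ setExtension "html"
        compile $ pandocCompiler
            >>= loadAndApplyTemplate "templates/default.html" defaultContext  -- template path is representative
            >>= relativizeUrls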

The code is a form of Embedded Domain Specific Language (EDSL) that is both a legal Haskell expression as well as a succinct description of what I want Hakyll to do for this particular case.

The first line describes what files this rule should apply to. It’s expressed in a syntax similar to the UNIX filename glob format. This one says that it applies to all files at the top level of the site source directory that have names ending in ".markdown".

Digressions on Haskell

For those not familiar with Haskell, the $ character is a right-associative operator that denotes function application– the left operand is the function, the right operand is the argument. This is in addition to the normal way of expressing function application, which is a left-associative operation denoted by juxtaposing the operands.

Normal function application has a very high precedence in the Haskell grammar, so the $ form is often used in place of parentheses to allow a secondary function application to provide the argument to a primary function application.
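For example, these two lines do the same thing:

    main :: IO ()
    main = do
        print (length (words "hello functional world"))   -- with parentheses
        print $ length $ words "hello functional world"   -- with $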

With that digression out of the way, the second line can be seen as a nested function application– the route function is passed the setExtension "html" function. As another digression, there are two interesting things to say about this nested application:

  1. The function application setExtension "html" evaluates to a value that is itself a function– the function that takes a filename, possibly with some extension already, and produces a new filename with the extension "html" instead. Higher-order functions thus show up in two ways here: setExtension returns a function as its result, and route accepts that function as an argument.

  2. The arguments to Haskell functions are not necessarily evaluated before the functions themselves. So if the rule on line 1 never matched any files, the setExtension "html" expression would never need to be evaluated. If the rule found multiple matches, however, the expression would be evaluated only once, yielding the function that sets filename extensions to "html". The small example after this list illustrates this non-strict evaluation of arguments.
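A tiny demonstration, using const, which returns its first argument and ignores its second:

    -- The second argument is never needed, so it is never evaluated:
    main :: IO ()
    main = putStrLn (const "no match" undefined)   -- prints: no match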

Regardless of the language mechanics behind the scenes, the effect of the second line is to ensure that when the rule completes, the resulting file will have the "html" extension rather than the "markdown" extension it started with.

Back to the Example

The third line starts the expression that does the bulk of the work. It calls the compile function, and specifies the compiler as the pandocCompiler function with its output piped through a couple of post-processors, loadAndApplyTemplate and relativizeUrls.

The pandocCompiler is built into Hakyll, and it links to the Pandoc markup processor mentioned earlier. In its default form, as used here, it translates Markdown documents to HTML documents.

As the name implies, loadAndApplyTemplate applies the template we give it along with a Context, which is a data structure that describes the mappings between template variables and the values they should expand to. We use the default one, defaultContext, which provides template variables such as "title", "url", and "body" to the template based on the values from the item being processed.

Finally, relativizeUrls will find all the links in the document and change them from absolute URL form, e.g. "/foo.html", to relative URL form, e.g. "foo.html". This allows us to have absolute URLs for syndication feeds, but relative URLs for the site itself.

This example covered only one of eight rules I’m currently using, but hopefully it gives an idea of how simple and flexible the rule-based system is.

Hakyll’s Rule Processing

Like the make language, Hakyll’s rule processor keeps track of the relationships between source files and build products, and it only runs rules for which the input files are newer than the existing build products. If you just add a new bit of content, such as a blog entry, only a few rules may need to run again. On the other hand, changing a core template may require rebuilding most of the site!

Full rebuilds are one of the areas in which Hakyll really shines, though. Since its rules are a language embedded within Haskell, a Hakyll site builder is a compiled and optimized binary program whose single purpose is to rebuild your site as quickly as possible.

By default, it stores all its intermediate work on disk, but if you have the memory to work with, it can also keep all its intermediate work in memory, which makes it even faster. For my own site, which only has a few files so far, rebuilding the entire thing is nearly instant even on an old 32-bit PC, so I haven’t bothered with any optimization.

Hello, world

Posted on September 4, 2013

It’s about time I had a blog/journal/mental dumping ground again, and getting this set up was a nice little exercise in system administration and a little bit of Haskell coding.

I mention system administration because this is running on a system with a version of Debian Linux installed that has been continuously upgraded since around 2001. It was both slightly broken and very crufty, but I was able to get plenty of space cleared and get everything up-to-date with few issues.

And I mention Haskell because I’m using two Haskell programs as part of the infrastructure of this site. One is the site engine itself, Hakyll. It’s a customizable build tool that translates from various markup formats to HTML, then assembles all the site’s pieces together.

The second Haskell program I’m using is the Clay CSS generator, which lets me build my CSS rules in a Haskell program via a CSS-like DSL. A short program generates both a flexible semantic grid layout and a baseline grid for managing the vertical rhythm of the text. I am not a master page designer by any means, but I’m happy with the result.

I will share more details about these tools, along with other things I’ve been working on, in entries to follow.