# Peirce's Logic Of Information

Author: Jon Awbrey

## Peirce's concept of information

I've been meaning to get back to Peirce's theory of information, because I believe it contains a yet-to-be-tapped potential for many current issues, though it will take a bit of drilling to exploit its resources to the fullest.

In my own imagination, I tend to organize Peirce's ideas about information, along with its relationship to comprehension and extension, in what certain accidents of personal history lead me to think of as the light-cone picture. But it's really just the two branches of a geometric cone, or the pencil generated by a point in a lattice or partial order, with no real connection to physics intended, at least not so directly as the picture at first suggests:

```
o.......o   Properties
 \     /
  \   /
   \ /
    O       Object
   / \
  /   \
 /     \
o.......o   Instances
```

Here, the Object could be the object of a generic concept, like human, or it could be the object of a dynamic concept, like system in motion. Then the Instances are the extension of that concept through some space, and the Properties are the comprehension of that concept, or what would count as the intensions of the Object in question.

We can imagine the situation where we have perfect information about an Object, knowing all its Properties and Instances, and we can generalize this image to the sorts of situation that we not-so-perfect human beings far more frequently find ourselves occupying, where we have but partial information about Objects.

I will start with some loose threads that were left hanging the last time that I tangled with this topic, but here are some bits of background reading for those who would like to thread the maze on their own:

In my current state of information I'm aware of Peirce having investigated at least two different measures of information, one that we may call multiplicative and the other that we may call exponential. Though I'll take up the multiplicative measure first, it will be one of my interests here to understand the possible relation between these two measures of information and also their relation to current concepts of information.

Somewhere in his manuscripts that I saw many years ago, Peirce illustrates multiplicative measures in discrete situations by drawing bipartite graphs, called bigraphs, and counting the lines between the points of the graphs. In the same vein, he illustrates exponential measures by counting how many functional bigraphs can be drawn between two sets of points of given sizes.

The conical picture that I drew before illustrates the multiplicative case: if we are given that we have j properties in the comprehension of object x and k instances in the extension of object x, then the information measure associated with object x is given by the product jk, a product that also counts the number of edges in the complete bigraph between the two domains of properties and instances.

```
o.......o   Properties
 \     /
  \   /
   \ /
    O       Object
   / \
  /   \
 /     \
o.......o   Instances
```

This area measure of information, given by Peirce's formula, “Information = Comprehension × Extension”, is the background against which the change of information associated with all further constraints can be reckoned.
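By way of a concrete check, here is a small Python sketch of the area measure, using hypothetical property and instance names of my own devising: the product jk both measures the information and counts the edges of the complete bigraph.

```python
from itertools import product

# Hypothetical comprehension and extension for an object x.
# The names p1, i1, etc. are illustrative, not from Peirce's text.
properties = {"p1", "p2", "p3"}        # comprehension of x, so j = 3
instances  = {"i1", "i2", "i3", "i4"}  # extension of x, so k = 4

# Peirce's area measure:  Information = Comprehension x Extension.
information = len(properties) * len(instances)

# The complete bigraph on the two domains has one edge per
# (property, instance) pair, so its edge count equals jk.
edges = set(product(properties, instances))
assert information == len(edges)
print(information)  # 12
```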

That's a very rough sketch of just one aspect of Peirce's theory, but we can refine it as we go. The next order of business, though, is to balance these abstract speculations with a few well-posed concrete examples.

It's not just the genesis but the genius of inquiry that it begins in a muddle, that is to say, a muddled sort of experiential situation where the experiencer thereof may have a vague sense that this moment of experience has that mark of quality and this other moment of experience has that other mark of quality but can hardly say with any degree of certainty anything more than that. So let's define a muddle as a situation that can be represented in the following sort of bigraph model:

```
q_1   q_2   q_3   q_4   q_5
 o     o     o     o     o
  \    |\   /|\   /|    /
   \   | \ / | \ / |   /
    \  |  \  |  \  |  /
     \ | / \ | / \ | /
      \|/   \|/   \|/
       o     o     o
      m_1   m_2   m_3
```

For concreteness, I have illustrated a muddle where the moment m_1 has the qualities q_1, q_2, q_3, the moment m_2 has the qualities q_2, q_3, q_4, and the moment m_3 has the qualities q_3, q_4, q_5, but arbitrary muddles can be far more muddled than that and yet fall within the bounds of the bigraph model.

We have in fact returned to a sort of situation that precurses the constellation of the light-cone picture that I drew before:

```
o.......o   Properties
 \     /
  \   /
   \ /
    O       Object
   / \
  /   \
 /     \
o.......o   Instances
```

The progression from the muddle of experience to the spindle of necessity begins with a step of abductive reasoning or hypothesis formation, a step that integrates the manifold of sensuous impressions by factoring it over the medium of unitary concepts.

## Inquiry as an information process

Let's jump in and see if we can tackle a couple of the more concrete examples that Peirce gives us of processes that change an agent's state of information, in the present application exemplifying the properties of inductive reasoning.

### Example 1. Blind man, buff color

The run up to the first example begins as follows:

```| We come next to consider inductions.  In inferences of this kind
| we proceed as if upon the principle that as is a sample of a class
| so is the whole class.  The word 'class' in this connection means
| nothing more than what is denoted by one term, -- or in other words
| the sphere of a term.  Whatever characters belong to the whole sphere
| of a term constitute the content of that term.  Hence the principle of
| induction is that whatever can be predicated of a specimen of the sphere
| of a term is part of the content of that term.  And what is a specimen?
| It is something taken from a class or the sphere of a term, at random --
| that is, not upon any further principle, not selected from a part of
| that sphere;  in other words it is something taken from the sphere
| of a term and not taken as belonging to a narrower sphere.  Hence
| the principle of induction is that whatever can be predicated of
| something taken as belonging to the sphere of a term is part of
| the content of that term.  But this principle is not axiomatic
| by any means.  Why then do we adopt it?
|
| To explain this, we must remember that the process of induction is a
| process of adding to our knowledge;  it differs therein from deduction --
| which merely explicates what we know -- and is on this very account called
| scientific inference.  Now deduction rests as we have seen upon the inverse
| proportionality of the extension and comprehension of every term;  and this
| principle makes it impossible apparently to proceed in the direction of
| ascent to universals.  But a little reflection will show that when our
| knowledge receives an addition this principle does not hold.
|
| Thus suppose a blind man to be told that no red things are
| blue.  He has previously known only that red is a color;
| and that certain things 'A', 'B', and 'C' are red.
|
|    The comprehension of red then has been for him   'color'.
|    Its extension has been                           'A', 'B', 'C'.
|
| But when he learns that no red thing is blue, 'non-blue'
| is added to the comprehension of red, without the least
| diminution of its extension.
|
|    Its comprehension becomes   'non-blue color'.
|    Its extension remains       'A', 'B', 'C'.
|
| Suppose afterwards he learns that a fourth thing 'D' is red.
| Then, the comprehension of 'red' remains unchanged, 'non-blue color';
| while its extension becomes 'A', 'B', 'C', and 'D'.  Thus, the rule
| that the greater the extension of a term the less its comprehension
| and 'vice versa', holds good only so long as our knowledge is not
| added to;  but as soon as our knowledge is increased, either the
| comprehension or extension of that term which the new information
| concerns is increased without a corresponding decrease of the other
| quantity.
|
| The reason why this takes place is worthy of notice.  Every addition to
| the information which is incased in a term, results in making some term
| equivalent to that term.  Thus when the blind man learns that 'red' is
| not-blue, 'red not-blue' becomes for him equivalent to 'red'.  Before
| that, he might have thought that 'red not-blue' was a little more
| restricted term than 'red', and therefore it was so to him, but
| the new information makes it the exact equivalent of red.
| In the same way, when he learns that 'D' is red, the
| term 'D-like red' becomes equivalent to 'red'.
|
| Thus, every addition to our information about a term is an addition
| to the number of equivalents which that term has.  Now, in whatever
| way a term gets to have a new equivalent, whether by an increase in
| our knowledge, or by a change in the things it denotes, this always
| results in an increase either of extension or comprehension without
| a corresponding decrease in the other quantity.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 462-464.
|
|"The Logic of Science, or, Induction and Hypothesis",
| Lowell Institute Lectures of 1866, pages 357-504 in:
|'Writings of Charles S. Peirce, A Chronological Edition,
| Volume 1, 1857-1866', Peirce Edition Project (eds.),
| Indiana University Press, Bloomington, IN, 1982.
```

Let's begin working through that last passage, bit by bit:

We come next to consider inductions. In inferences of this kind we proceed as if upon the principle that as is a sample of a class so is the whole class. The word 'class' in this connection means nothing more than what is denoted by one term, — or in other words the sphere of a term. Whatever characters belong to the whole sphere of a term constitute the content of that term. Hence the principle of induction is that whatever can be predicated of a specimen of the sphere of a term is part of the content of that term. And what is a specimen? It is something taken from a class or the sphere of a term, at random — that is, not upon any further principle, not selected from a part of that sphere; in other words it is something taken from the sphere of a term and not taken as belonging to a narrower sphere. Hence the principle of induction is that whatever can be predicated of something taken as belonging to the sphere of a term is part of the content of that term. But this principle is not axiomatic by any means. Why then do we adopt it? (C.S. Peirce, Chronological Edition, CE 1, 462-463).

Here is a programme for keeping track of the players, that is, the various alternative and associated terms that might occur in the following discussion of Peirce's theory of information:

```
Predicates
Qualities
Comprehension
Connotation
Content
Depth
Properties      o.......o      Characters
                 \     /
                  \   /
                   \ /         Concept
Object, Class       O<----o    Symbol
                   / \         Term
                  /   \        Elements
                 /     \
     Instances  o.......o      Specimens
Exemplars
Extension
Denotation
Sphere
Breadth
```

I am deploying an ample plurality of rhetorical alternatives partly in order to emphasize the fact that the focus here is not on defining entities like exemplars and qualities but on the shape of the incidence relation between their rough ilks. Some of the terms lodged in the same stratum of the paradigm are close synonyms; in other cases they are measures of the associated dimensions. I have placed class in accord with contemporary usage in set theory, as my sense of the reading tells me that Peirce is using class in a unitary sense, in contrast with his use of sphere in a more distributive sense.

To explain this, we must remember that the process of induction is a process of adding to our knowledge; it differs therein from deduction — which merely explicates what we know — and is on this very account called scientific inference. Now deduction rests as we have seen upon the inverse proportionality of the extension and comprehension of every term; and this principle makes it impossible apparently to proceed in the direction of ascent to universals. But a little reflection will show that when our knowledge receives an addition this principle does not hold. (C.S. Peirce, Chronological Edition, CE 1, 463).

Peirce's usage varies here, but for the sake of simplicity let's regard extension as a set of instances and comprehension as a set of properties. This will allow us to treat breadth as a measure of extension and depth as a measure of comprehension, writing breadth(t) = |extension(t)| and depth(t) = |comprehension(t)| for any term t.
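Stated in code, assuming only that a term is modeled by a set of instances and a set of properties, the two measures look like this:

```python
# A minimal sketch of the breadth and depth measures.  A term t is
# modeled here by two plain sets; nothing else about it is assumed.
def breadth(extension):
    """breadth(t) = |extension(t)|"""
    return len(extension)

def depth(comprehension):
    """depth(t) = |comprehension(t)|"""
    return len(comprehension)

# The blind man's term "red":  extension {A, B, C}, comprehension {color}.
red_extension     = {"A", "B", "C"}
red_comprehension = {"color"}

print(breadth(red_extension), depth(red_comprehension))  # 3 1
```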

In the classical model, where the measures of extension and comprehension are inversely proportional, we can write an equation of the following sort:

breadth · depth = constant

In Peirce's theory, this constant is identified as the measure of information, which may remain constant in certain situations, but which may vary in others:

breadth · depth = information

A rough plot may serve to illustrate certain aspects of the situation. For convenience in graphing, let's indicate the measure of information by means of the symbols [1, ..., 9, a, b, c, d, e, f, *] to signify the range of integral values from 1 to 16. If we can imagine interpolating smooth curves through the symbols plotted for a given information value, then that would give us some idea of the constant information curves or the isoformal curves of this information topography.

```
Depth
  ^
  |
  | *
  | f
  | e
  | d
  | c
  | b
  | a
  | 9
  | 8 *
  | 7 e
  | 6 c
  | 5 a f
  | 4 8 c *
  | 3 6 9 c f
  | 2 4 6 8 a c e *
  | 1 2 3 4 5 6 7 8 9 a b c d e f *
  o----------------------------------> Breadth
```
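For what it's worth, the plot above can be regenerated mechanically. The sketch below prints a symbol at each (breadth, depth) cell whose product is at most 16, using the same symbol code as the Figure:

```python
# Symbol code: 1..9, then a..f for 10..15, then * for 16.
SYMBOLS = "0123456789abcdef*"

rows = []
for depth in range(16, 0, -1):
    cells = [SYMBOLS[breadth * depth] if breadth * depth <= 16 else " "
             for breadth in range(1, 17)]
    rows.append("| " + " ".join(cells).rstrip())
for row in rows:
    print(row)
print("o----------------------------------> Breadth")
```

Reading along any one symbol, say c, traces out an isoformal curve of constant information 12.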

In struggling to sketch some of the Figures that I need for the next part of this discussion, I am finding the plaintext format so constraining that I think it has become almost indispensable to introduce a standard alternative form of representation for the requisite graphs, specifically, their representation in terms of adjacency matrices or incidence matrices.

For example, consider the bigraph that we drew before:

```
q_1   q_2   q_3   q_4   q_5
 o     o     o     o     o
  \    |\   /|\   /|    /
   \   | \ / | \ / |   /
    \  |  \  |  \  |  /
     \ | / \ | / \ | /
      \|/   \|/   \|/
       o     o     o
      e_1   e_2   e_3
```

This can be represented by means of the following matrix:

```
         e_1   e_2   e_3
      o-------------------o
      |                   |
q_1   |   1     0     0   |
      |                   |
q_2   |   1     1     0   |
      |                   |
q_3   |   1     1     1   |
      |                   |
q_4   |   0     1     1   |
      |                   |
q_5   |   0     0     1   |
      |                   |
      o-------------------o
```

Here, the occurrence of a "1" at the intersection of e_j street and q_k avenue indicates that there is an edge connecting e_j and q_k, and the occurrence of a "0" says otherwise.

In future, then, I'll feel free to represent a graph by means of its matrix, especially whenever I can't easily draw it, and safely leave the rest to the reader's imagination.
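To make the convention concrete, here is a sketch that computes the incidence matrix of the muddle drawn above from its edge list; the edges are read off the Figure:

```python
# Qualities index the rows, moments (exemplars) index the columns,
# exactly as in the matrix displayed above.
qualities = ["q_1", "q_2", "q_3", "q_4", "q_5"]
moments   = ["e_1", "e_2", "e_3"]

# The edges of the bigraph, read off the Figure.
edges = {("q_1", "e_1"),
         ("q_2", "e_1"), ("q_2", "e_2"),
         ("q_3", "e_1"), ("q_3", "e_2"), ("q_3", "e_3"),
         ("q_4", "e_2"), ("q_4", "e_3"),
         ("q_5", "e_3")}

matrix = [[1 if (q, e) in edges else 0 for e in moments] for q in qualities]
for q, row in zip(qualities, matrix):
    print(q, row)
```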

We come to the first example of an information process:

```| Thus suppose a blind man to be told that no red things are blue.
| He has previously known only that red is a color;  and that
| certain things 'A', 'B', and 'C' are red.
|
|    The comprehension of red then has been for him   'color'.
|    Its extension has been                           'A', 'B', 'C'.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 463
```

I can think of two distinct ways that we might interpret what Peirce is saying about the relationship between the terms color and red. The harder path is the intransitive interpretation, which is what we do when we say that an apple is red and red is a color but object to anyone getting the notion that an apple is a color. The easier path, which I'll naturally take in the absence of any evidence that Peirce intends otherwise, is the transitive interpretation, as if to intend that color means "colored thing" or having any color, whereby it follows from anything being red that it necessarily has color. This puts the terms color and red on an equal par with each other, in distinction to having color be a higher order predicate than red.

It had at first sight seemed to me that our protagonist's initial state of information could be represented like so:

```
Color  o
       |
       |
       |
       |
       |
Red    o
      /|\
     / | \
    /  |  \
   /   |   \
  /    |    \
 o     o     o
 A     B     C
```

But on further reflection it seems to me that, in the absence of information to the contrary, our hero doesn't really know yet which of the terms color or red, if either, to place above the other. In other words, as far as he can know at this stage of the game, red could just as well be the only color in town, making being red and having color into indiscernible predicates.

On this account, the agent's initial state of knowledge is more accurately represented by the following bigraph:

```
           A     B     C
       o-------------------o
       |                   |
Color  |   1     1     1   |
       |                   |
Red    |   1     1     1   |
       |                   |
       o-------------------o
```

I have no idea if this is the proper way to see things. But we can always backtrack to this point if necessary.

I am beginning to have third and fourth thoughts about my current reading of Peirce's intent with the first example -- it seems more likely that he had in mind an unarticulated number of things that have colors other than red -- still, the present scenario doesn't appear to be inconsistent in its own right, and it does seem to give us something near to the minimal model of Peirce's descriptions, so I think that it may be worthwhile following through for any light that it can shed on the topic area of information process.

It will help, in the long run, to tighten up some of the terminology that we are using to discuss the abstract forms of graphs and their various and sundry styles of relatively concrete representation. As a side benefit, we will find this general paradigm of usage readily adaptable to making the needed distinctions between abstract graphs and their concrete replicas, to mention Peirce's own favorite word for the same distinction.

Consider the graphical matrix that we last looked on:

```
           A     B     C
       o-------------------o
       |                   |
Color  |   1     1     1   |
       |                   |
Red    |   1     1     1   |
       |                   |
       o-------------------o
```

This is the incidence matrix of a concretely labeled graph.

Taking one small step of abstraction up from there would give us the incidence matrix of an abstractly labeled graph, like so:

```
          1     2     3
      o-------------------o
      |                   |
4     |   1     1     1   |
      |                   |
5     |   1     1     1   |
      |                   |
      o-------------------o
```

This is the incidence matrix of a labeled bipartite graph, or a labeled bigraph, for short. The adjective bipartite means that its set of points (aka nodes or vertices) can be partitioned into just two parts, such that all the lines (aka edges or blocks) of the graph lie between the points of one part of the partition and the points of the other part of the partition, with no lines that lie between the points in any one part of the partition.

As it happens, the particular example that we are contemplating here has all of the lines that it can have for the partition in question, that is, a line between each point of one part and every point of the other part, and so it's called a complete labeled bigraph. On account of the partition of points into a 2-set and a 3-set, the abstract graph that is said to underlie this labeled graph is standardly notated by graph theorists as K_{2,3}, where the letter K is evidently intended as a mnemonic for complete.

Though the most careful among us will occasionally slip up and use the terms interchangeably, there is a slight but significant nuance of distinction between adjacency matrices and incidence matrices. The incidence matrix of our labeled representative of K_{2,3} uses the underlying partition of points to economize the dimensions of matric materiel, tantamount to treating the two parts as if they contained different types of points, thus allowing the following compact form:

```
          1     2     3
      o-------------------o
      |                   |
4     |   1     1     1   |
      |                   |
5     |   1     1     1   |
      |                   |
      o-------------------o
```

The economy of the incidence matrix, in those cases where it's available, will be evident if we compare it with the corresponding adjacency matrix:

```
          1     2     3     4     5
      o-------------------------------o
      |                               |
1     |   0     0     0     1     1   |
      |                               |
2     |   0     0     0     1     1   |
      |                               |
3     |   0     0     0     1     1   |
      |                               |
4     |   1     1     1     0     0   |
      |                               |
5     |   1     1     1     0     0   |
      |                               |
      o-------------------------------o
```

The adjacency matrix exhibits a 1 at the intersection of row j and column k if and only if point j and point k are adjacent, that is, connected by a line, and it exhibits a 0 otherwise. Since we are presently discussing graphs, where each line runs in two directions, that is, not directed graphs, or digraphs, the adjacency matrix is symmetric about the main diagonal.
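The relation between the two conventions can be spelled out in a few lines of Python, unpacking the compact incidence matrix of our K_{2,3} example into the full adjacency matrix shown above:

```python
# Incidence matrix of K_{2,3}:  rows are points 4 and 5 (the 2-part),
# columns are points 1, 2, 3 (the 3-part).
incidence = [[1, 1, 1],
             [1, 1, 1]]

n_top, n_bottom = len(incidence[0]), len(incidence)  # 3 and 2
n = n_top + n_bottom

# Expand into the n x n adjacency matrix on all five points.
adjacency = [[0] * n for _ in range(n)]
for i in range(n_bottom):
    for j in range(n_top):
        if incidence[i][j]:
            adjacency[j][n_top + i] = 1
            adjacency[n_top + i][j] = 1  # symmetric: lines are undirected

for row in adjacency:
    print(row)

# Symmetry about the main diagonal, as the text observes.
assert all(adjacency[j][k] == adjacency[k][j] for j in range(n) for k in range(n))
```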

The term graph in graph theory refers to an abstract formal object, of which any scribble or sketch can be but a concrete representation. Abstraction being a relative condition, one finds that there will be labeled graphs, painted graphs, plane embedded graphs, planar lattice embedded graphs, and so on to the limits of formal imagination or practical application, whichever runs out first on a given day of the week. Though still abstract, each in its own peculiar but beautiful way, each example of one of these more devolved species of graphs is naturally regarded as being relatively more concrete than its own underlying graph. Note also here the handy strategy of using the longer names to nomenclate the more concrete abstractions, pinning the most winged niche names on the most abstract concretions.

One way to tell the abstracter thing from the concreter thing is by counting up the relative multiplicities, the abstracter counting as one, relatively speaking, to the concreters' many.

For instance, consider the graph that is pictured as follows:

```
   o
  /|\
 / | \
/  |  \
o  o  o
\  |  /
 \ | /
  \|/
   o
```

There is only one graph of this description. It is in fact our old friend K_{2,3}, as becomes manifest when the relevant partition of points is disclosed. There are, in comparison, 10 distinct labeled graphs that share this underlying graph:

```
  1      1      1      1      2      2      2      3      3      4
 /|\    /|\    /|\    /|\    /|\    /|\    /|\    /|\    /|\    /|\
3 4 5  2 4 5  2 3 5  2 3 4  1 4 5  1 3 5  1 3 4  1 2 5  1 2 4  1 2 3
 \|/    \|/    \|/    \|/    \|/    \|/    \|/    \|/    \|/    \|/
  2      3      4      5      3      4      5      4      5      5
```
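The count of 10 is easy to verify: a labeling of this graph on the labels 1 through 5 is fixed by choosing which two labels go to the top and bottom points, and since the order of that pair doesn't matter there are C(5, 2) = 10 choices:

```python
from itertools import combinations

# Each labeled copy of the graph is determined by the unordered pair of
# labels assigned to the top and bottom points; the remaining three
# labels fill the middle row.
labelings = list(combinations(range(1, 6), 2))
print(len(labelings))  # 10
print(labelings)
```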

Let's see if we have enough tackle to haul our first example into the light:

```| Thus suppose a blind man to be told that no red things are blue.
| He has previously known only that red is a color;  and that
| certain things 'A', 'B', and 'C' are red.
|
|    The comprehension of red then has been for him   'color'.
|    Its extension has been                           'A', 'B', 'C'.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 463
```

Supposing for the moment that we begin with the minimal model, to wit, the "complete muddle" that has the form of the complete bigraph K_{2,3}, it's tantamount to preposing, prior to the place where Peirce comes in, a primal experience that is even more inchoate than the one where our hero takes the term red as denoting a class or an object of thought:

```
           A     B     C
       o-------------------o
       |                   |
Color  |   1     1     1   |
       |                   |
Red    |   1     1     1   |
       |                   |
       o-------------------o
```

Initiated into this predicament of blooming, buzzing confusion, the most that we have is three moments of experience, A, B, C, each of which has the same two marks of quality, Color and Red. It doesn't make any real difference if we call them Exemplars and Qualities, respectively, so for brevity we might as well.

The odd thing about the maximal muddle, in so far as it's constrained to fill out the given frame, is that it's actually one of the easiest to bring to order through the simple act of introducing a middle term:

```
      Color        Red
        o           o
         \         /
          \       /
           \     /
            \   /
             \ /
Intermediate
   Object     o<<<<<<<<<< o Middle Term
             /|\
            / | \
           /  |  \
          /   |   \
         /    |    \
        o     o     o
        A     B     C
```

Except for the chevroned relation of denotation from the middle term to the intermediate object, this is cast as a lattice diagram, where transitive closure of lattice relations is understood to be in force. Accordingly, there is an implicit line between each point at the top and each point at the bottom, forming the same K_{2,3} on these points that we had before. With regard to the middling object that we used to factor the initial muddle, it has a breadth of 3 and a depth of 2. At least that's 1 way to look at it. I'm not saying it's the only 1.

Once again, it looks like we need to back away from our current pictures of the problem, take in a wider view, and think a bit more broadly about what's really going on here. One of the big ideas in the background, so pervasive that we frequently overlook it, can be emblazoned in the theme: The Information's The Thing. In other words, the total information is at all times the principal reality to be taken into consideration, while comprehension and extension are but projected aspects of the information.

This can be made clearer if we recall the places where Peirce characterizes the information borne by a sign as the totality of synthetic facts that are implicit in its use. For example:

```| We see then that all symbols besides their denotative and
| connotative objects have another;  their informative object.
| The denotative object is the total of possible things denoted.
| The connotative object is the total of symbols translated or implied.
| The informative object is the total of forms manifested and is measured
| by the amount of intension the term has, over and above what is necessary
| for limiting its extension.
|
| For example the denotative object of 'man' is such collections of
| matter the word knows while it knows them, i.e., while they are organized.
| The connotative object of 'man' is the total form which the word expresses.
| The informative object of 'man' is the total fact which it embodies;  or the
| value of the conception which is its equivalent symbol.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 276
```

The use of the phrase “total fact” recalls the classical articulations of the three kinds of inference — abduction, deduction, induction — by way of Cases, Facts, and Rules, as prefigured in the following generic sketch:

```
o
|\
| \
|  \
|   \    Rule
|    \
|     \
|      \   Fact
|       o
|      /
|     /
|    /
|   /    Case
|  /
| /
|/
o
```

In trying to form a minimal model of Peirce's story, which we must be able to do if his story is the least bit consistent, we find ourselves engaged in two different types of reasoning in parallel with each other. At one level, we are thinking out the constitution of a particular model, also known as a situation, a structure, or a universe of discourse. At another level, we are thinking about the structures of many alternative models, many different imaginary scenarios that might flesh out the script. The former is actually a zeroth order model (ZOM), and can be constructed within the bounds of propositional logic or partial orderings of propositions. Getting to the next level will take a few higher order propositions (HOP's), that is, propositions about propositions, or propositions about the structures of whole different universes of discourse. That is more or less tantamount to the first order predicate calculus, or the logic of quantified propositions. However, if we are really satisfied with finding a minimal model, any way that we can arrange one, and don't really need a complete analysis of the story, then it is possible to carry out almost all of the reasoning in and about a simple ZOM.

Back to the story:

```| Thus suppose a blind man to be told that no red things are blue.
| He has previously known only that red is a color;  and that
| certain things 'A', 'B', and 'C' are red.
|
|    The comprehension of red then has been for him   'color'.
|    Its extension has been                           'A', 'B', 'C'.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 463
```

We have seen that there must be a state of information that precedes the one where this story begins, a state where being colored and being red are indistinct facts.

The information embodied in that state of knowledge, regarded as a totality of synthetic facts, might be represented in a propositional data base as follows:

```
A => Color
B => Color
C => Color
A => Red
B => Red
C => Red
```

The same raw data might be given in lattice form as follows:

```
Color  Color  Color   Red    Red    Red
  o      o      o      o      o      o
  |      |      |      |      |      |
  |      |      |      |      |      |
  |      |      |      |      |      |
  o      o      o      o      o      o
  A      B      C      A      B      C
```

Arbitrary congeries of empirical data or synthetic facts can be represented in just these ways. As such, the total fact embodied in them is perfectly good information, as far as information goes. But the particular collection of empirical data or synthetic facts embodied in the above representations is not arbitrary — it has a special structure that permits it to be factored through the medium of a hypothetical object M and a middle term as follows:

A, B, C => M
M => Color, Red

Or, to capture the total fact in a single picture:

```
Color, Red
o
|\
| \
|  \
|   \    Rule
|    \
|     \
|      \   Fact
|       o  M
|      /
|     /
|    /
|   /    Case
|  /
| /
|/
o
A, B, C
```

Not every state of information allows the interpolation of a compact object with its own comprehension and extension, but this epistemic situation does.
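The factorization can be checked mechanically. The sketch below verifies that composing the Cases (A, B, C => M) with the Rules (M => Color, Red) recovers exactly the six synthetic facts in the propositional data base above:

```python
# The six synthetic facts of the initial state of information,
# as ordered pairs (subject, predicate).
facts = {(x, q) for x in "ABC" for q in ("Color", "Red")}

cases = {(x, "M") for x in "ABC"}               # A, B, C => M
rules = {("M", q) for q in ("Color", "Red")}    # M => Color, Red

# Relational composition of the Cases with the Rules.
composed = {(x, q) for (x, m) in cases for (m2, q) in rules if m == m2}

assert composed == facts
print(sorted(composed))
```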

As mentioned in the above Discussion Note, one of the difficulties that we encounter in trying to model Peirce's blind man story is the problem of how to handle improper implications or trivial intensions of the form X => X. On the one hand, any concept or term will significantly alter the informational situation when it first arises, for example, on the prompting of an abductive hypothesis or other creative intervention. On the other hand, Peirce appears to discount these types of intensions by accounting for the information as the "superfluous comprehension" of a symbol, in effect, as the intension that a symbol has "over and above what is necessary for limiting its extension" (CE 1, 276). I sought to finesse this issue in my retelling of the story by interjecting a prior episode where the abductive factorization is more explicitly considered. Only time will tell whether this is a sensible direction to take or not.

For ease of reference, let's give our hero a name, say Homer, and let's now try to imagine how it might have transpired that he arrived at yet another state of information that we know he must have reached some time before we came in, namely, when he learned that not all colors are red. This could have happened by acquaintance or by being told, but Peirce has ruled out the former route to knowledge, it seems, by making Homer blind.

The minimal elements of knowledge involved in taking this step seem to be that Red => Color, but that Color =/=> Red, possibly along with the fact that there is at least one thing, say D, that has Color but is not Red.

We have at this point stepped outside the bounds of what can be represented in a "zeroth order model" (ZOM), as we can tell from the use of the existential quantifier "there exists at least one" in the above statement, and also, less conspicuously, from the fact that the condition Color =/=> Red is not being invoked in a purely propositional sense, but refers to the inequality of two propositions regarded as whole functions, that is, Color, Red : X -> B, where X is a suitable universe of discourse and B is the boolean domain {0, 1}.

The following Figure will give us some hint of the situation:

```
Color       o
           / \
          /   \
         /     \
        /       \
       /         \
Red   o           \
     /|\           \
    / | \           \
   /  |  \           \
  /   |   \           \
 /    |    \           \
o     o     o           o
A     B     C           D
```

This seems to suggest that the term Red has increased in depth while remaining the same in breadth, but that the term Color has increased in breadth while remaining the same in depth.
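To see the function-level point in miniature, here is a sketch, with assumed truth values, that treats Color and Red as boolean-valued functions on the universe X = {A, B, C, D}, as in Color, Red : X -> B:

```python
# Assumed truth values, following the Figure:  A, B, C are red (hence
# colored), while D is colored but not red.
X = ["A", "B", "C", "D"]
Red   = {"A": 1, "B": 1, "C": 1, "D": 0}
Color = {"A": 1, "B": 1, "C": 1, "D": 1}

# Red => Color holds pointwise: wherever Red is 1, Color is 1.
assert all(Color[x] >= Red[x] for x in X)

# But Color and Red are unequal as whole functions, on account of D.
assert Red != Color
print("Red => Color holds, yet Color != Red as functions")
```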

We have of course just opened up a whole new can of worms — like they say, a single picture is worth a thousand worms:

```
            Celestial Body
                   *
                  / \
                 /   \
                /     \
               /       \
              /         \
Morning Star *           * Evening Star
            . .         . .
           .   .       .   .
          .     .     .     .
         .       .   .       .
        .         . .         .
       *           *           *
       ?           ?           ?
```

Here's the thousand worms part. Peirce exploits the literary device of a blind man's knowledge of color in order to rule out knowledge by acquaintance, leaving open only the channel of knowledge by being told. The real reason for doing this is of course that it forces us observers of the information process involved to make all the information explicit. In a more reflective moment we might also think to question our customary supposition that the putatively distinct modes of information acquisition are really all that different, logically analyzed, but save that for later.

Now taking this tack raises the spectre of intension sans extension (ISE), that is to say, the idea that one can have information about comprehensions in the absence of being acquainted with any real-live experiential examples of the concepts or terms in question. This in turn forces us to re-examine the very idea of a symbol's extension, and to explicate in more exact terms what's involved in saying that the extension of Red is A, B, C, etc.

Imaginary dialogues with a person lacking the sense X may lead us to explicate those of our hidden assumptions and implicit beliefs that we fail to share with persons lacking the sense X, but leave unexamined those of our taken-for-granteds that we have in common even with persons lacking the sense X. To have a truly senseless dialogue, it takes a computer, and that is one of the benefits to come from many years of trying to teach rocks of silicon to think, as we do in artificial intelligence research, or at least, that's how it was in the early days, before we succeeded in predisposing our computing machineries to so many of our unthinking prejudices.

But luckily I can still remember some of the things that I learned in the early days of building a zeroth order ontological modeler (ZOOM), and trying to acquaint it with my environment by telling it everything it forced me to realize I already knew, or at least thought that I did.

By way of a recap, the current state of our inquiry into Peirce's ideas about information can be stated as follows.

Peirce has offered us a couple of formulas for information:

Information = Comprehension × Extension
Information = Superfluous Comprehension

By way of explaining the latter idea he says the following:

```| The information of a term is the measure of its superfluous comprehension.
| That is to say that the proper office of the comprehension is to determine
| the extension of the term.  For instance, you and I are men because we
| possess those attributes -- having two legs, being rational, etc. --
| which make up the comprehension of 'man'.  Every addition to the
| comprehension of a term lessens its extension up to a certain
| point, after that further additions increase the information
|
| C.S. Peirce, 'Chronological Edition', CE 1, 467
```

These ideas sound plausible enough at first — at least they did to me — but in trying to work out the kinds of details that it would take to make a bona fide measure out of this notion of information, we run into a bunch of questions that we have to answer before going on.

By way of illustrating these concepts in a concrete case, Peirce invents a story about a blind man learning about color terms, and describes the learner's initial state of information as defined by the following data set:

Comprehension("Red") = {Color}
Extension("Red") = {A, B, C}

On this data we would calculate the measure of information in the term "Red" as follows:

|Information("Red")| = |{Color}| · |{A, B, C}| = 1 · 3 = 3

Some of the questions that arise at this point are these:

1. What is the reason for excluding Red from the comprehension of "Red"?
2. What is the criterion for superfluous comprehension?
3. What is the extension of "Color" in this example?
4. What is the measure of an indefinite extension?

Questions like these require us to think more carefully about the context in which measures are usually defined. I don't know if Peirce in 1865–1866 was aware of all of these issues, but I know that he eventually became very sophisticated about the conditions of defining measures, at least in the case of probability measures, which are roughly the extensional half of the information problem.

There's a way to finesse the use of existential quantifiers when it comes to especially simple classes of existential propositions, in this way stretching the usefulness of zeroth order logic slightly further than it might initially be thought. This is done through the use of a certain interpretive convention, namely, we adopt the blanket assumption that things of a given description may be taken to exist in default of their propositional specification being explicitly denied or else consequentially inconsistent. In effect, it is never necessary to say when something of a given description exists, only when it does not.

This formal existence convention of interpretation permits us to resolve, or maybe just temporize, a number of the problems that we ran into earlier.

For example, the problem about Red ⇒ Color but Red ≠ Color all but magically disappears when rightly viewed in this light. First, just so that I can use 1-letter labels in diagrams and formulas without causing a confusion between the term "Color" and the element C, let me change the term "Color" to "Hue".

Let us now contemplate the proposition Red ⇒ Hued, henceforth R ⇒ H, and what it says in the context of a suitable universe of discourse X. The import becomes strikingly evident in the existential graph syntax, where R ⇒ H takes the form (R (H)), making manifest that it excludes the existence of anything in the universe X from the region indicated by the propositional specification R (H), that is, "Red and not Hued". The situation can be diagrammed in a rough lattice fashion as follows:

```
                X
                o
               / \
              /   \
             /     \
            /       \
           /         \
      H o             o (H)
         / \           \
        /   \           \
       /     \           \
      /       \           \
     /         \           \
  R o           o (R)       o (R)
```

In short, the formal existence of things under the propositional descriptions H (R), Hued Non-Reds, and (H)(R), Non-Hued Non-Reds, provides the asymmetry needed for a proper order relation R < H.
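The effect of the constraint can be seen by brute-force enumeration: under the formal existence convention every cell of the universe is presumed inhabited except the single cell the graph (R (H)) denies. A small Python sketch; the encoding of cells as bit pairs is my own:

```python
from itertools import product

# Each cell of the universe is a truth assignment (r, h) to the terms R and H.
cells = list(product([0, 1], repeat=2))

# (R (H)) denies exactly the region "R and not H"; by the existence
# convention every other cell is presumed inhabited by default.
inhabited = [(r, h) for (r, h) in cells if not (r == 1 and h == 0)]

print(inhabited)   # prints: [(0, 0), (0, 1), (1, 1)]
```

The surviving cells (0, 1), Hued non-Red, and (0, 0), non-Hued non-Red, are exactly the formal witnesses that make the ordering of R under H a strict one.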

Let's see, have we progressed as far as comprehending the beginning of our story, even if comprehending means rationalizing, re-visioning, and even retro-fitting?

```| Thus suppose a blind man to be told that no red things are blue.
| He has previously known only that red is a color;  and that
| certain things 'A', 'B', and 'C' are red.
|
|    The comprehension of red then has been for him   'color'.
|    Its extension has been                           'A', 'B', 'C'.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 463
```

Cosmetically writing H, for Hued, in place of the term for things that have Color, and superficially adding X for our tale's encompassing cosmos, we have come to form the following picture of the overall scene:

```
                    X
                    o
                   / \
                  /   \
                 /     \
                /       \
               /         \
          H o             o (H)
             / \           \
            /   \           \
           /     \           \
          /       \           \
         /         \           \
      R o           o (R)       o (R)
       /|\
      / | \
     /  |  \
    /   |   \
   /    |    \
  o     o     o
  A     B     C
```

We must hold in suspension many questions about the comprehension of Red, till better comprehending what makes portions of comprehension superfluous, but there doesn't seem to be the same order of difficulty here with saying that the extension of Red is {A, B, C}, so let us explore that devolving branch of our conical configuration a little further.

The only difficulty that we have found, so far, with the descent of extent was the issue of what manner and what degree of information rest incumbent in our presumptive reposition of three distinct things, A, B, C, and not just three distinct variables for what is conceivably a lesser variety.

So let us contemplate that issue next.

It's a small world indeed that has just three Red things, A, B, C, and so long as we occupy ourselves with a finite cosmos like the one in this story, there is once again a way to warp zeroth order logic (ZOL) to the task. To wit, we declare the qualities of being A, B, C, respectively, to be named by their names, and in order to say that they number three distinct things we state three distinctions or inequations:

A ≠ B
B ≠ C
C ≠ A

This much we can do in any logical calculus, grammar, language, or syntax that is adequate to express the expressions of ZOL, but there are certain pragmatic benefits that accrue to a more efficient form of representation.

Let's draw the sub-universe (subverse?) of Red things in Euler-Venn style, the heavier shading (^) for being's rule, the lighter ( ) for its absence:

[Figure. An Euler-Venn diagram of the subverse of Red: three mutually overlapping circles for A (top), B (lower left), and C (lower right), their regions of sole membership shaded with circumflecks (^) and their overlaps left blank.]

In their imports for this sub-universe, the three propositional inequalities, A ≠ B, B ≠ C, C ≠ A, so constrain the qualities of being A, B, C, respectively, that just one of the corresponding propositions can be true of any given thing. The whole region marked by circumflecks is the rule of Red.

### Example 2. Dots and crosses

```| For example we have here a number of circles
| dotted and undotted, crossed and uncrossed:
|
| (·X·)  (···)  (·X·)  (···)  ( X )  (   )  ( X )  (   )
|
| Here it is evident that the greater the extension the
| less the comprehension:
|
| o-------------------o-------------------o
| |                   |                   |
| | dotted            | 4 circles         |
| |                   |                   |
| o-------------------o-------------------o
| |                   |                   |
| | dotted & crossed  | 2 circles         |
| |                   |                   |
| o-------------------o-------------------o
|
| Now suppose we make these two terms 'dotted circle'
| and 'crossed and dotted circle' equivalent.  This we can
| do by crossing our uncrossed dotted circles.  In that way,
| we increase the comprehension of 'dotted circle' and at the
| same time increase the extension of 'crossed and dotted circle'
| since we now make it denote 'all dotted circles'.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 464.
|
|"The Logic of Science, or, Induction and Hypothesis",
| Lowell Institute Lectures of 1866, pages 357-504 in:
|'Writings of Charles S. Peirce, A Chronological Edition,
| Volume 1, 1857-1866', Peirce Edition Project (eds.),
| Indiana University Press, Bloomington, IN, 1982.
```
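Peirce's bookkeeping here is easy to replay mechanically. A Python sketch of the eight circles, each encoded as a hypothetical (dotted, crossed) pair of bits read off from the quotation:

```python
# The eight circles of the example, in order: (dotted, crossed).
circles = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1), (0, 0), (0, 1), (0, 0)]

dotted         = [c for c in circles if c[0] == 1]
dotted_crossed = [c for c in circles if c == (1, 1)]
print(len(dotted), len(dotted_crossed))   # prints: 4 2

# Equate "dotted circle" with "crossed and dotted circle" by crossing
# the uncrossed dotted circles, as Peirce directs.
circles = [(d, 1) if d == 1 else (d, x) for (d, x) in circles]

dotted_crossed = [c for c in circles if c == (1, 1)]
print(len(dotted_crossed))                # prints: 4
```

The extension of "crossed and dotted circle" grows from 2 to 4 while its comprehension is unchanged, and the comprehension of "dotted circle" grows while its extension of 4 is unchanged, which is just the increase of information Peirce describes.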
Peirce goes on to tie the growth of a term's equivalents to his definition of information:

```
| Thus every increase in the number of equivalents of any term increases either
| its extension or comprehension and 'conversely'.  It may be said that there
| are no equivalent terms in logic, since the only difference between such
| terms would be merely external and grammatical, while in logic terms
| which have the same meaning are identical.  I fully admit that.
| Indeed, the process of getting an equivalent for a term is
| an identification of two terms previously diverse.  It is,
| in fact, the process of nutrition of terms by which they
| get all their life and vigor and by which they put forth
| an energy almost creative -- since it has the effect of
| reducing the chaos of ignorance to the cosmos of science.
| Each of these equivalents is the explication of what there is
| wrapt up in the primary -- they are the surrogates, the interpreters
| of the original term.  They are new bodies, animated by that same soul.
| I call them the 'interpretants' of the term.  And the quantity of these
| 'interpretants', I term the 'information' or 'implication' of the term.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 464-465.
```
He then draws the consequence for the classical law of extension and comprehension:

```
| We must therefore modify the law of
| the inverse proportionality of
| extension and comprehension
| and instead of writing
|
| Extension x Comprehension = Constant
|
| which crudely expresses the fact
| that the greater the extension the
| less the comprehension, we must write
|
| Extension x Comprehension = Information
|
| which means that when the information
| is increased there is an increase of
| either extension or comprehension
| without any diminution of the
| other of these quantities.
|
| Now, ladies and gentlemen, as it is true that
| every increase of our knowledge is an increase
| in the information of a term -- that is, is an
| addition to the number of terms equivalent to
| that term -- so it is also true that the first
| step in the knowledge of a thing, the first
| framing of a term, is also the origin of the
| information of that term because it gives the
| first term equivalent to that term.  I here
| announce the great and fundamental secret
| of the logic of science.  There is no term,
| properly so called, which is entirely destitute
| of information, of equivalent terms.  The moment
| an expression acquires sufficient comprehension
| to determine its extension, it already has more
| than enough to do so.
|
| C.S. Peirce, 'Chronological Edition', CE 1, 465.
```

## References

• Peirce, C.S., Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project (eds.), Indiana University Press, Bloomington, IN, 1982. Cited as CE 1.
• Peirce, C.S. (1866), "The Logic of Science, or, Induction and Hypothesis", Lowell Institute Lectures, CE 1, 357–504.

• Awbrey, Jon, and Awbrey, Susan (1995), "Interpretation as Action : The Risk of Inquiry", Inquiry : Critical Thinking Across the Disciplines 15, 40–52. Online.
• De Tienne, André (2006), "Peirce's Logic of Information", Seminario del Grupo de Estudios Peirceanos, Universidad de Navarra, 28 Sep 2006. Online.