Difference between revisions of "Directory:Jon Awbrey/Papers/Peirce's 1870 Logic Of Relatives"

MyWikiBiz, Author Your Legacy — Sunday December 01, 2024
Naturally enough, the diagonal extensions are represented by diagonal matrices:

<br>

{| align="center" cellspacing="6" width="90%"
|
<math>\begin{array}{c|ccccccc}
\mathrm{1,} &
\mathrm{B} &
\mathrm{C} &
\mathrm{D} &
\mathrm{E} &
\mathrm{I} &
\mathrm{J} &
\mathrm{O}
\\
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---}
\\
\mathrm{B} & 1 &   &   &   &   &   &
\\
\mathrm{C} &   & 1 &   &   &   &   &
\\
\mathrm{D} &   &   & 1 &   &   &   &
\\
\mathrm{E} &   &   &   & 1 &   &   &
\\
\mathrm{I} &   &   &   &   & 1 &   &
\\
\mathrm{J} &   &   &   &   &   & 1 &
\\
\mathrm{O} &   &   &   &   &   &   & 1
\end{array}</math>
|}

<br>

{| align="center" cellspacing="6" width="90%"
|
<math>\begin{array}{c|ccccccc}
\mathrm{m,} &
\mathrm{B} &
\mathrm{C} &
\mathrm{D} &
\mathrm{E} &
\mathrm{I} &
\mathrm{J} &
\mathrm{O}
\\
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---} &
\text{---}
\\
\mathrm{B} & 0 &   &   &   &   &   &
\\
\mathrm{C} &   & 1 &   &   &   &   &
\\
\mathrm{D} &   &   & 0 &   &   &   &
\\
\mathrm{E} &   &   &   & 0 &   &   &
\\
\mathrm{I} &   &   &   &   & 1 &   &
\\
\mathrm{J} &   &   &   &   &   & 1 &
\\
\mathrm{O} &   &   &   &   &   &   & 1
\end{array}</math>
|}
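By way of a computational gloss, here is a minimal Python sketch of how a diagonal matrix arises from an absolute term. The universe \(\{ \mathrm{B}, \mathrm{C}, \mathrm{D}, \mathrm{E}, \mathrm{I}, \mathrm{J}, \mathrm{O} \}\) and the extension of \(\mathrm{m}\) are read off the diagonal matrices above; everything else is an illustrative assumption.

```python
# Sketch: the diagonal extension of an absolute term as a 0-1 diagonal matrix.
# Universe and the extension of m ("man") are read off the matrices above.
U = ["B", "C", "D", "E", "I", "J", "O"]
m = {"C", "I", "J", "O"}  # the rows whose diagonal entry is 1 in the m, matrix

def diagonal_matrix(term, universe):
    """Place a 1 at (x, x) for each x the term denotes, 0 elsewhere."""
    return [[1 if (x == y and x in term) else 0
             for y in universe] for x in universe]

M = diagonal_matrix(m, U)
for row_label, row in zip(U, M):
    print(row_label, row)
```

A term and its diagonal extension carry the same information, which is why the matrix is fully determined by the set of diagonal 1s.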

Revision as of 22:13, 7 April 2009

Author's Note. The text that follows is a collection of notes that will eventually be developed into a paper on Charles Sanders Peirce's 1870 memoir on the logic of relative terms.

Preliminaries

Application of the Algebraic Signs to Logic

Peirce's text employs a number of different typefaces to denote different types of logical entities. The following Tables indicate the LaTeX typefaces that we will use for Peirce's stock examples.


\(\text{Absolute Terms (Monadic Relatives)}\!\)

\(\begin{array}{ll} \mathrm{a}. & \text{animal} \\ \mathrm{b}. & \text{black} \\ \mathrm{f}. & \text{Frenchman} \\ \mathrm{h}. & \text{horse} \\ \mathrm{m}. & \text{man} \\ \mathrm{p}. & \text{President of the United States Senate} \\ \mathrm{r}. & \text{rich person} \\ \mathrm{u}. & \text{violinist} \\ \mathrm{v}. & \text{Vice-President of the United States} \\ \mathrm{w}. & \text{woman} \end{array}\)


\(\text{Simple Relative Terms (Dyadic Relatives)}\!\)

\(\begin{array}{ll} \mathit{a}. & \text{enemy} \\ \mathit{b}. & \text{benefactor} \\ \mathit{c}. & \text{conqueror} \\ \mathit{e}. & \text{emperor} \\ \mathit{h}. & \text{husband} \\ \mathit{l}. & \text{lover} \\ \mathit{m}. & \text{mother} \\ \mathit{n}. & \text{not} \\ \mathit{o}. & \text{owner} \\ \mathit{s}. & \text{servant} \\ \mathit{w}. & \text{wife} \end{array}\)


\(\text{Conjugative Terms (Higher Adic Relatives)}\!\)

\(\begin{array}{ll} \mathfrak{b}. & \text{betrayer to ------ of ------} \\ \mathfrak{g}. & \text{giver to ------ of ------} \\ \mathfrak{t}. & \text{transferrer from ------ to ------} \\ \mathfrak{w}. & \text{winner over of ------ to ------ from ------} \end{array}\)


Selection 1

Use of the Letters

The letters of the alphabet will denote logical signs.

Now logical terms are of three grand classes.

The first embraces those whose logical form involves only the conception of quality, and which therefore represent a thing simply as "a ——". These discriminate objects in the most rudimentary way, which does not involve any consciousness of discrimination. They regard an object as it is in itself as such (quale); for example, as horse, tree, or man. These are absolute terms.

The second class embraces terms whose logical form involves the conception of relation, and which require the addition of another term to complete the denotation. These discriminate objects with a distinct consciousness of discrimination. They regard an object as over against another, that is as relative; as father of, lover of, or servant of. These are simple relative terms.

The third class embraces terms whose logical form involves the conception of bringing things into relation, and which require the addition of more than one term to complete the denotation. They discriminate not only with consciousness of discrimination, but with consciousness of its origin. They regard an object as medium or third between two others, that is as conjugative; as giver of —— to ——, or buyer of —— for —— from ——. These may be termed conjugative terms.

The conjugative term involves the conception of third, the relative that of second or other, the absolute term simply considers an object. No fourth class of terms exists involving the conception of fourth, because when that of third is introduced, since it involves the conception of bringing objects into relation, all higher numbers are given at once, inasmuch as the conception of bringing objects into relation is independent of the number of members of the relationship. Whether this reason for the fact that there is no fourth class of terms fundamentally different from the third is satisfactory or not, the fact itself is made perfectly evident by the study of the logic of relatives.

(Peirce, CP 3.63).

I am going to experiment with an interlacing commentary on Peirce's 1870 "Logic of Relatives" paper, revisiting some critical transitions from several different angles and calling attention to a variety of puzzles, problems, and potentials that are not so often remarked or tapped.

What strikes me about the initial installment this time around is its use of a certain pattern of argument that I can recognize as invoking a "closure principle", and this is a figure of reasoning that Peirce uses in three other places: his discussion of "continuous predicates", his definition of sign relations, and in the pragmatic maxim itself.

One might also call attention to the following two statements:

Now logical terms are of three grand classes.

No fourth class of terms exists involving the conception of fourth, because when that of third is introduced, since it involves the conception of bringing objects into relation, all higher numbers are given at once, inasmuch as the conception of bringing objects into relation is independent of the number of members of the relationship.

Selection 2

Numbers Corresponding to Letters

I propose to use the term "universe" to denote that class of individuals about which alone the whole discourse is understood to run. The universe, therefore, in this sense, as in Mr. De Morgan's, is different on different occasions. In this sense, moreover, discourse may run upon something which is not a subjective part of the universe; for instance, upon the qualities or collections of the individuals it contains.

I propose to assign to all logical terms, numbers; to an absolute term, the number of individuals it denotes; to a relative term, the average number of things so related to one individual. Thus in a universe of perfect men (men), the number of "tooth of" would be 32. The number of a relative with two correlates would be the average number of things so related to a pair of individuals; and so on for relatives of higher numbers of correlates. I propose to denote the number of a logical term by enclosing the term in square brackets, thus, \([t].\!\)

(Peirce, CP 3.65).
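Peirce's "number of a term" \([t]\!\) is easy to model over a finite universe. The following sketch uses hypothetical toy data (two "perfect" individuals, each with 32 teeth); only the definitions, not the data, come from the passage above.

```python
# Sketch of Peirce's "number of a term" [t] over a finite universe.
# The universe and the relation tooth_of are hypothetical toy data.
universe = {"X", "Y"}   # a universe of two "perfect" individuals
tooth_of = {(f"t{i}_{x}", x) for x in universe for i in range(32)}

def number_of_absolute(term):
    """[t] for an absolute term: how many individuals it denotes."""
    return len(term)

def number_of_relative(rel, universe):
    """[r] for a dyadic relative: average number of things so related
    to one individual of the universe."""
    return len(rel) / len(universe)

print(number_of_relative(tooth_of, universe))  # 32.0, as in Peirce's example
```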

Peirce's remarks at CP 3.65 are so replete with remarkable ideas, some of them so taken for granted in mathematical discourse that they usually escape explicit mention, and others so suggestive of things to come in a future remote from his time of writing, and yet so smoothly introduced in passing that it's all too easy to overlook their consequential significance, that I can do no better here than to highlight these ideas in other words, whose main advantage is to be a little more jarring to the mind's sensibilities.

  • This mapping of letters to numbers, or logical terms to mathematical quantities, is the very core of what "quantification theory" is all about, and definitely more to the point than the mere "innovation" of using distinctive symbols for the so-called "quantifiers". We will speak of this more later on.
  • The mapping of logical terms to numerical measures, to express it in current language, would probably be recognizable as some kind of "morphism" or "functor" from a logical domain to a quantitative co-domain.
  • Notice that Peirce follows the mathematician's usual practice, then and now, of making the status of being an "individual" or a "universal" relative to a discourse in progress. I have come to appreciate more and more of late how radically different this "patchwork" or "piecewise" approach to things is from the way of some philosophers who seem to be content with nothing less than many worlds domination, which means that they are never content and rarely get started toward the solution of any real problem. Just my observation, I hope you understand.
  • It is worth noting that Peirce takes the "plural denotation" of terms for granted, or what's the number of a term for, if it could not vary apart from being one or nil?
  • I also observe that Peirce takes the individual objects of a particular universe of discourse in a "generative" way, not a "totalizing" way, and thus they afford us with the basis for talking freely about collections, constructions, properties, qualities, subsets, and "higher types", as the phrase is mint.

Selection 3

The Signs of Inclusion, Equality, Etc.

I shall follow Boole in taking the sign of equality to signify identity. Thus, if \(\mathrm{v}\!\) denotes the Vice-President of the United States, and \(\mathrm{p}\!\) the President of the Senate of the United States,

\(\mathrm{v} = \mathrm{p}\!\)

means that every Vice-President of the United States is President of the Senate, and every President of the United States Senate is Vice-President.

The sign "less than" is to be so taken that

\(\mathrm{f} < \mathrm{m}\!\)

means that every Frenchman is a man, but there are men besides Frenchmen. Drobisch has used this sign in the same sense. It will follow from these significations of \(=\!\) and \(<\!\) that the sign \(-\!\!\!<\!\) (or \(\leqq\), "as small as") will mean "is". Thus,

\(\mathrm{f} -\!\!\!< \mathrm{m}\)

means "every Frenchman is a man", without saying whether there are any other men or not. So,

\(\mathit{m} -\!\!\!< \mathit{l}\)

will mean that every mother of anything is a lover of the same thing; although this interpretation in some degree anticipates a convention to be made further on. These significations of \(=\!\) and \(<\!\) plainly conform to the indispensable conditions. Upon the transitive character of these relations the syllogism depends, for by virtue of it, from

  \(\mathrm{f} -\!\!\!< \mathrm{m}\)  

and

\(\mathrm{m} -\!\!\!< \mathrm{a}\)  

we can infer that

\(\mathrm{f} -\!\!\!< \mathrm{a}\)  

that is, from every Frenchman being a man and every man being an animal, that every Frenchman is an animal.

But not only do the significations of \(=\!\) and \(<\!\) here adopted fulfill all absolute requirements, but they have the supererogatory virtue of being very nearly the same as the common significations. Equality is, in fact, nothing but the identity of two numbers; numbers that are equal are those which are predicable of the same collections, just as terms that are identical are those which are predicable of the same classes. So, to write \(5 < 7\!\) is to say that \(5\!\) is part of \(7\!\), just as to write \(\mathrm{f} < \mathrm{m}\!\) is to say that Frenchmen are part of men. Indeed, if \(\mathrm{f} < \mathrm{m}\!\), then the number of Frenchmen is less than the number of men, and if \(\mathrm{v} = \mathrm{p}\!\), then the number of Vice-Presidents is equal to the number of Presidents of the Senate; so that the numbers may always be substituted for the terms themselves, in case no signs of operation occur in the equations or inequalities.

(Peirce, CP 3.66).
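The significations of \(=\!\), \(<\!\), and \(-\!\!\!<\!\) can be checked against finite extensions. A minimal sketch, with hypothetical toy extensions for the terms, modeling \(-\!\!\!<\!\) as subset, \(<\!\) as proper subset, and verifying both the syllogism and the substitution of numbers for terms:

```python
# Sketch: Peirce's signs modeled on finite extensions (toy data, hypothetical).
f = {"Pierre", "Marie"}           # Frenchmen
m = f | {"John", "Hans"}          # men: Frenchmen and other men besides
a = m | {"Rex"}                   # animals: men and other animals besides

is_ = lambda x, y: x <= y         # "-<" : every x is a y
lt  = lambda x, y: x < y          # "<"  : x is a proper part of y

# Transitivity underwrites the syllogism: f -< m and m -< a give f -< a.
assert is_(f, m) and is_(m, a) and is_(f, a)

# With no signs of operation involved, numbers may replace the terms:
assert lt(f, m) and len(f) < len(m)
```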

The quantifier mapping from terms to their numbers that Peirce signifies by means of the square bracket notation \([t]\!\) has one of its principal uses in providing a basis for the computation of frequencies, probabilities, and all of the other statistical measures that can be constructed from these, and thus in affording what may be called a principle of correspondence between probability theory and its limiting case in the forms of logic.

This brings us once again to the relativity of contingency and necessity, as one way of approaching necessity is through the avenue of probability, describing necessity as a probability of 1, but the whole apparatus of probability theory only figures in if it is cast against the backdrop of probability space axioms, the reference class of distributions, and the sample space that we cannot help but to abdeuce upon the scene of observations. Aye, there's the snake eyes. And with them we can see that there is always an irreducible quantum of facticity to all our necessities. More plainly spoken, it takes a fairly complex conceptual infrastructure just to begin speaking of probabilities, and this setting can only be set up by means of abductive, fallible, hypothetical, and inherently risky mental acts.

Pragmatic thinking is the logic of abduction, which is just another way of saying that it addresses the question: "What may be hoped?" We have to face the possibility that it may be just as impossible to speak of "absolute identity" with any hope of making practical philosophical sense as it is to speak of "absolute simultaneity" with any hope of making operational physical sense.

Selection 4

The Signs for Addition

The sign of addition is taken by Boole so that

\(x + y\!\)

denotes everything denoted by \(x\!\), and, besides, everything denoted by \(y\!\).

Thus

\(\mathrm{m} + \mathrm{w}\!\)

denotes all men, and, besides, all women.

This signification for this sign is needed for connecting the notation of logic with that of the theory of probabilities. But if there is anything which is denoted by both terms of the sum, the latter no longer stands for any logical term on account of its implying that the objects denoted by one term are to be taken besides the objects denoted by the other.

For example,

\(\mathrm{f} + \mathrm{u}\!\)

means all Frenchmen besides all violinists, and, therefore, considered as a logical term, implies that all French violinists are besides themselves.

For this reason alone, in a paper which is published in the Proceedings of the Academy for March 17, 1867, I preferred to take as the regular addition of logic a non-invertible process, such that

\(\mathrm{m} ~+\!\!,~ \mathrm{b}\)

stands for all men and black things, without any implication that the black things are to be taken besides the men; and the study of the logic of relatives has supplied me with other weighty reasons for the same determination.

Since the publication of that paper, I have found that Mr. W. Stanley Jevons, in a tract called Pure Logic, or the Logic of Quality [1864], had anticipated me in substituting the same operation for Boole's addition, although he rejects Boole's operation entirely and writes the new one with a  \(+\!\)  sign while withholding from it the name of addition.

It is plain that both the regular non-invertible addition and the invertible addition satisfy the absolute conditions. But the notation has other recommendations. The conception of taking together involved in these processes is strongly analogous to that of summation, the sum of 2 and 5, for example, being the number of a collection which consists of a collection of two and a collection of five. Any logical equation or inequality in which no operation but addition is involved may be converted into a numerical equation or inequality by substituting the numbers of the several terms for the terms themselves — provided all the terms summed are mutually exclusive.

Addition being taken in this sense, nothing is to be denoted by zero, for then

\(x ~+\!\!,~ 0 ~=~ x\)

whatever is denoted by \(x\!\); and this is the definition of zero. This interpretation is given by Boole, and is very neat, on account of the resemblance between the ordinary conception of zero and that of nothing, and because we shall thus have

\([0] ~=~ 0.\)

(Peirce, CP 3.67).
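The contrast between Boole's invertible \(+\!\) and Peirce's non-invertible \(+\!\!,\) can be sketched directly on sets. All extensions below are hypothetical toy data; the point is that \(+\!\!,\) (union) is defined whether or not the summands overlap, that \(x +\!\!,\, 0 = x\!\), and that numbers add along with terms exactly when the summands are exclusive.

```python
# Sketch: Boole's "+" vs Peirce's non-invertible "+," on toy extensions.
m = {"Alfred", "Bruno"}           # men (hypothetical extensions)
w = {"Clara", "Dora"}             # women
f = {"Pierre"}                    # Frenchmen
u = {"Pierre", "Viola"}           # violinists

plus_comma = lambda x, y: x | y   # "+,": union, defined even when terms overlap

# Boole's "+" fails as a logical term when the summands share members:
assert f & u, "f and u overlap, so f + u denotes no logical term"

# Zero denotes nothing, and x +, 0 = x, so [0] = 0:
zero = set()
assert plus_comma(m, zero) == m and len(zero) == 0

# Numbers add with the terms when the summands are mutually exclusive:
assert len(plus_comma(m, w)) == len(m) + len(w)
```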

A wealth of issues arise here that I hope to take up in depth at a later point, but for the moment I shall be able to mention only the barest sample of them in passing.

The two papers that precede this one in CP 3 are Peirce's papers of March and September 1867 in the 'Proceedings of the American Academy of Arts and Sciences', titled "On an Improvement in Boole's Calculus of Logic" and "Upon the Logic of Mathematics", respectively. Among other things, these two papers provide us with further clues about the motivating considerations that brought Peirce to introduce the "number of a term" function, signified here by square brackets. I have already quoted from the "Logic of Mathematics" paper in a related connection.

In setting up a correspondence between "letters" and "numbers", my sense is that Peirce is "nocking an arrow", or constructing some kind of structure-preserving map from a logical domain to a numerical domain, and this interpretation is here reinforced by the careful attention that he gives to the conditions under which precisely which aspects of structure are preserved, plus his telling recognition of the criterial fact that zeroes are preserved by the mapping. But here's the catch, the arrow is from the qualitative domain to the quantitative domain, which is just the opposite of what I tend to expect, since I think of quantitative measures as preserving more information than qualitative measures. To curtail the story, it is possible to sort this all out, but that is a story for another day.

Other than that, I just want to red flag the beginnings of another one of those "failures to communicate" that so dogged the disciplines in the 20th Century, namely, the fact that Peirce seemed to have an inkling about the problems that would be caused by using the plus sign for inclusive disjunction, but, as it happens, his advice was overridden by the usages in various different communities, rendering the exchange of information among engineering, mathematical, and philosophical specialties a minefield in place of mindfield to this very day.

Selection 5

The Signs for Multiplication

I shall adopt for the conception of multiplication the application of a relation, in such a way that, for example, \(\mathit{l}\mathrm{w}\!\) shall denote whatever is lover of a woman. This notation is the same as that used by Mr. De Morgan, although he appears not to have had multiplication in his mind.

\(\mathit{s}(\mathrm{m} ~+\!\!,~ \mathrm{w})\) will, then, denote whatever is servant of anything of the class composed of men and women taken together. So that:

\(\mathit{s}(\mathrm{m} ~+\!\!,~ \mathrm{w}) ~=~ \mathit{s}\mathrm{m} ~+\!\!,~ \mathit{s}\mathrm{w}\).

\((\mathit{l} ~+\!\!,~ \mathit{s})\mathrm{w}\) will denote whatever is lover or servant to a woman, and:

\((\mathit{l} ~+\!\!,~ \mathit{s})\mathrm{w} ~=~ \mathit{l}\mathrm{w} ~+\!\!,~ \mathit{s}\mathrm{w}\).

\((\mathit{s}\mathit{l})\mathrm{w}\!\) will denote whatever stands to a woman in the relation of servant of a lover, and:

\((\mathit{s}\mathit{l})\mathrm{w} ~=~ \mathit{s}(\mathit{l}\mathrm{w})\).

Thus all the absolute conditions of multiplication are satisfied.

The term "identical with ——" is a unity for this multiplication. That is to say, if we denote "identical with ——" by \(\mathit{1}\!\) we have:

\(x \mathit{1} ~=~ x\),

whatever relative term \(x\!\) may be. For what is a lover of something identical with anything, is the same as a lover of that thing.

(Peirce, CP 3.68).
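The laws quoted above, namely the distributive laws, the associativity \((\mathit{s}\mathit{l})\mathrm{w} = \mathit{s}(\mathit{l}\mathrm{w})\), and the unit \(x \mathit{1} = x\), can all be verified mechanically once a relative term is modeled as a set of ordered pairs. A minimal sketch with hypothetical toy names:

```python
# Sketch: relative multiplication as the application of a relation,
# modeled on finite sets (all names and pairs are hypothetical toy data).
w = {"Ann", "Di"}                                   # women (absolute term)
l = {("Ben", "Ann"), ("Cal", "Di"), ("Ben", "Di")}  # lover-of pairs
s = {("Cal", "Ben"), ("Di", "Cal")}                 # servant-of pairs

def apply(rel, term):
    """rel term : whatever is rel of something in the term."""
    return {x for (x, y) in rel if y in term}

def compose(r, t):
    """(r t) : the relative product of two dyadic relatives."""
    return {(x, z) for (x, y) in r for (y2, z) in t if y == y2}

one = {(x, x) for x in {"Ann", "Ben", "Cal", "Di"}}  # "identical with ---"

assert apply(l, w) == {"Ben", "Cal"}                     # lovers of women
assert apply(compose(s, l), w) == apply(s, apply(l, w))  # (sl)w = s(lw)
assert compose(l, one) == l                              # x1 = x
m_ = {"Ben", "Cal"}                                      # men, say
assert apply(s, m_ | w) == apply(s, m_) | apply(s, w)    # s(m +, w) = sm +, sw
```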

Peirce in 1870 is five years down the road from the Peirce of 1865–1866 who lectured extensively on the role of sign relations in the logic of scientific inquiry, articulating their involvement in the three types of inference, and inventing the concept of "information" to explain what it is that signs convey in the process. By this time, then, the semiotic or sign relational approach to logic is so implicit in his way of working that he does not always take the trouble to point out its distinctive features at each and every turn. So let's take a moment to draw out a few of these characters.

Sign relations, like any non-trivial brand of 3-adic relations, can become overwhelming to think about once the cardinality of the object, sign, and interpretant domains or the complexity of the relation itself ascends beyond the simplest examples. Furthermore, most of the strategies that we would normally use to control the complexity, like neglecting one of the domains, in effect, projecting the 3-adic sign relation onto one of its 2-adic faces, or focusing on a single ordered triple of the form \((o, s, i)\!\) at a time, can result in our receiving a distorted impression of the sign relation's true nature and structure.
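The loss of information in passing from a 3-adic sign relation to its 2-adic faces can be seen in miniature. The sketch below builds a tiny sign relation loosely modeled on the \(\mathrm{v} = \mathrm{p}\!\) example (the particular triples are illustrative assumptions, not Peirce's) and projects it onto two of its faces:

```python
# Sketch: a tiny sign relation and two of its 2-adic projections.
# Triples are (object, sign, interpretant); the data are illustrative only.
L = {("VP", '"v"', '"p"'), ("VP", '"p"', '"v"'),
     ("VP", '"v"', '"v"'), ("VP", '"p"', '"p"')}

proj_os = {(o, s) for (o, s, i) in L}   # object-sign face (denotative)
proj_si = {(s, i) for (o, s, i) in L}   # sign-interpretant face (connotative)

# The faces forget structure: here 4 triples collapse to 2 object-sign pairs,
# so distinct sign relations can share the same projections.
assert len(proj_os) == 2 and len(proj_si) == 4
```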

I find that it helps me to draw, or at least to imagine drawing, diagrams of the following form, where I can keep tabs on what's an object, what's a sign, and what's an interpretant sign, for a selected set of sign-relational triples.

Here is how I would picture Peirce's example of equivalent terms, \(\mathrm{v} = \mathrm{p}\!\), where \(^{\backprime\backprime} \mathrm{v} ^{\prime\prime}\) denotes the Vice-President of the United States, and \(^{\backprime\backprime} \mathrm{p} ^{\prime\prime}\) denotes the President of the Senate of the United States.

o-----------------------------o-----------------------------o
|  Objective Framework (OF)   | Interpretive Framework (IF) |
o-----------------------------o-----------------------------o
|           Objects           |            Signs            |
o-----------------------------o-----------------------------o
|                                                           |
|                                 o "v"                     |
|                                /                          |
|                               /                           |
|                              /                            |
|           o ... o-----------@                             |
|                              \                            |
|                               \                           |
|                                \                          |
|                                 o "p"                     |
|                                                           |
o-----------------------------o-----------------------------o

Depending on whether we interpret the terms \(^{\backprime\backprime} \mathrm{v} ^{\prime\prime}\) and \(^{\backprime\backprime} \mathrm{p} ^{\prime\prime}\) as applying to persons who hold these offices at one particular time or as applying to all those persons who have held these offices over an extended period of history, their denotations may be either singular or plural, respectively.

As a shortcut technique for indicating general denotations or plural referents, I will use the elliptic convention that represents these by means of figures like "o o o" or "o … o", placed at the object ends of sign relational triads.

For a more complex example, here is how I would picture Peirce's example of an equivalence between terms that comes about by applying one of the distributive laws, for relative multiplication over absolute summation.

o-----------------------------o-----------------------------o
|  Objective Framework (OF)   | Interpretive Framework (IF) |
o-----------------------------o-----------------------------o
|           Objects           |            Signs            |
o-----------------------------o-----------------------------o
|                                                           |
|                                 o "'s'(m +, w)"           |
|                                /                          |
|                               /                           |
|                              /                            |
|           o ... o-----------@                             |
|                              \                            |
|                               \                           |
|                                \                          |
|                                 o "'s'm +, 's'w"          |
|                                                           |
o-----------------------------o-----------------------------o

Selection 6

The Signs for Multiplication (cont.)

A conjugative term like giver naturally requires two correlates, one denoting the thing given, the other the recipient of the gift.

We must be able to distinguish, in our notation, the giver of \(\mathrm{A}\!\) to \(\mathrm{B}\!\) from the giver to \(\mathrm{A}\!\) of \(\mathrm{B}\!\), and, therefore, I suppose the signification of the letter equivalent to such a relative to distinguish the correlates as first, second, third, etc., so that "giver of —— to ——" and "giver to —— of ——" will be expressed by different letters.

Let \(\mathfrak{g}\) denote the latter of these conjugative terms. Then, the correlates or multiplicands of this multiplier cannot all stand directly after it, as is usual in multiplication, but may be ranged after it in regular order, so that:

\(\mathfrak{g}\mathit{x}\mathit{y}\)

will denote a giver to \(\mathit{x}\!\) of \(\mathit{y}\!\).

But according to the notation, \(\mathit{x}\!\) here multiplies \(\mathit{y}\!\), so that if we put for \(\mathit{x}\!\) owner (\(\mathit{o}\!\)), and for \(\mathit{y}\!\) horse (\(\mathrm{h}\!\)),

\(\mathfrak{g}\mathit{o}\mathrm{h}\)

appears to denote the giver of a horse to an owner of a horse. But let the individual horses be \(\mathrm{H}, \mathrm{H}^{\prime}, \mathrm{H}^{\prime\prime}\), etc.

Then:

\(\mathrm{h} ~=~ \mathrm{H} ~+\!\!,~ \mathrm{H}^{\prime} ~+\!\!,~ \mathrm{H}^{\prime\prime} ~+\!\!,~ \text{etc.}\)
\(\mathfrak{g}\mathit{o}\mathrm{h} ~=~ \mathfrak{g}\mathit{o}(\mathrm{H} ~+\!\!,~ \mathrm{H}^{\prime} ~+\!\!,~ \mathrm{H}^{\prime\prime} ~+\!\!,~ \text{etc.}) ~=~ \mathfrak{g}\mathit{o}\mathrm{H} ~+\!\!,~ \mathfrak{g}\mathit{o}\mathrm{H}^{\prime} ~+\!\!,~ \mathfrak{g}\mathit{o}\mathrm{H}^{\prime\prime} ~+\!\!,~ \text{etc.}\)

Now this last member must be interpreted as a giver of a horse to the owner of that horse, and this, therefore must be the interpretation of \(\mathfrak{g}\mathit{o}\mathrm{h}\). This is always very important. A term multiplied by two relatives shows that the same individual is in the two relations.

If we attempt to express the giver of a horse to a lover of a woman, and for that purpose write:

\(\mathfrak{g}\mathit{l}\mathrm{w}\mathrm{h}\),

we have written giver of a woman to a lover of her, and if we add brackets, thus,

\(\mathfrak{g}(\mathit{l}\mathrm{w})\mathrm{h}\),

we abandon the associative principle of multiplication.

A little reflection will show that the associative principle must in some form or other be abandoned at this point. But while this principle is sometimes falsified, it oftener holds, and a notation must be adopted which will show of itself when it holds. We already see that we cannot express multiplication by writing the multiplicand directly after the multiplier; let us then affix subjacent numbers after letters to show where their correlates are to be found. The first number shall denote how many factors must be counted from left to right to reach the first correlate, the second how many 'more' must be counted to reach the second, and so on.

Then, the giver of a horse to a lover of a woman may be written:

\(\mathfrak{g}_{12} \mathit{l}_1 \mathrm{w} \mathrm{h} ~=~ \mathfrak{g}_{11} \mathit{l}_2 \mathrm{h} \mathrm{w} ~=~ \mathfrak{g}_{2(-1)} \mathrm{h} \mathit{l}_1 \mathrm{w}\).

Of course a negative number indicates that the former correlate follows the latter by the corresponding positive number.

A subjacent zero makes the term itself the correlate.

Thus,

\(\mathit{l}_0\!\)

denotes the lover of that lover or the lover of himself, just as \(\mathfrak{g}\mathit{o}\mathrm{h}\) denotes that the horse is given to the owner of itself, for to make a term doubly a correlate is, by the distributive principle, to make each individual doubly a correlate, so that:

\(\mathit{l}_0 ~=~ \mathit{L}_0 ~+\!\!,~ \mathit{L}_0^{\prime} ~+\!\!,~ \mathit{L}_0^{\prime\prime} ~+\!\!,~ \text{etc.}\)

A subjacent sign of infinity may indicate that the correlate is indeterminate, so that:

\(\mathit{l}_\infty\)

will denote a lover of something. We shall have some confirmation of this presently.

If the last subjacent number is a one it may be omitted. Thus we shall have:

\(\mathit{l}_1 ~=~ \mathit{l}\),
\(\mathfrak{g}_{11} ~=~ \mathfrak{g}_1 ~=~ \mathfrak{g}\).

This enables us to retain our former expressions \(\mathit{l}\mathrm{w}\!\), \(\mathfrak{g}\mathit{o}\mathrm{h}\), etc.

(Peirce, CP 3.69–70).
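Peirce's point that \(\mathfrak{g}\mathit{o}\mathrm{h}\) must be read "giver of a horse to an owner of that horse" — the same individual horse in both relations — comes out cleanly when the conjugative term is modeled as a set of triples. A minimal sketch with hypothetical toy data:

```python
# Sketch: why g'o'h reads "giver of a horse to an owner of *that* horse".
# g is a set of (giver, recipient, gift) triples, o of (owner, owned)
# pairs; all names are hypothetical toy data.
g = {("Gus", "Otto", "H1"), ("Ada", "Eva", "H1")}
o = {("Otto", "H1"), ("Eva", "H2")}
h = {"H1", "H2"}

def goh(g, o, h):
    """The same individual horse is both the thing owned and the gift."""
    return {x for (x, y, z) in g if z in h and (y, z) in o}

assert goh(g, o, h) == {"Gus"}   # Gus gives H1 to Otto, who owns H1;
                                 # Ada's recipient Eva owns H2, not the H1 given
```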

Comment : Sets as Logical Sums

Peirce's way of representing sets as logical sums may seem archaic, but it is quite often used, and is actually the tool of choice in many branches of algebra, combinatorics, computing, and statistics to this very day.

Peirce's application to logic is fairly novel, and the degree of his elaboration of the logic of relative terms is certainly original with him, but this particular genre of representation, commonly going under the handle of generating functions, goes way back, well before anyone thought to stick a flag in set theory as a separate territory or to try to fence off our native possessions of it with expressly decreed axioms. And back in the days when a computer was just a person who computed, before we had the sorts of electronic register machines that we take so much for granted today, mathematicians were constantly using generating functions as a rough and ready type of addressable memory to sort, store, and keep track of their accounts of a wide variety of formal objects of thought.

Let us look at a few simple examples of generating functions, much as I encountered them during my own first adventures in the Fair Land Of Combinatoria.

Suppose that we are given a set of three elements, say, \(\{ a, b, c \}\!\), and we are asked to find all the ways of choosing a subset from this collection.

We can represent this problem setup as the problem of computing the following product:

\((1 + a)(1 + b)(1 + c)\!\).

The factor \((1 + a)\!\) represents the option that we have, in choosing a subset of \(\{ a, b, c \}\!\), to leave the element \(a\!\) out (signified by the "\(1\!\)"), or else to include it (signified by the "\(a\!\)"), and likewise for the other elements \(b\!\) and \(c\!\) in their turns.

Probably on account of all those years I flippered away playing the oldtime pinball machines, I tend to imagine a product like this being displayed in a vertical array:

\(\begin{matrix} (1 ~+~ a) \\ (1 ~+~ b) \\ (1 ~+~ c) \end{matrix}\)

I picture this as a playboard with six bumpers, the ball chuting down the board in such a career that it strikes exactly one of the two bumpers on each and every one of the three levels.

So a trajectory of the ball where it hits the \(a\!\) bumper on the 1st level, hits the \(1\!\) bumper on the 2nd level, hits the \(c\!\) bumper on the 3rd level, and then exits the board, represents a single term in the desired product and corresponds to the subset \(\{ a, c \}.\!\)

Multiplying out the product \((1 + a)(1 + b)(1 + c)\!\), one obtains:

\(\begin{array}{*{15}{c}} 1 & + & a & + & b & + & c & + & ab & + & ac & + & bc & + & abc \end{array}\)

And this informs us that the subsets of choice are:

\(\begin{matrix} \varnothing, & \{ a \}, & \{ b \}, & \{ c \}, & \{ a, b \}, & \{ a, c \}, & \{ b, c \}, & \{ a, b, c \} \end{matrix}\)
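As a quick mechanical check on the expansion, here is a minimal Python sketch (my own illustration, not part of the original text) that mimics the pinball picture: each factor \((1 + x)\!\) offers two bumpers, the empty choice or the element itself, and each trajectory through the three levels yields one term of the product.

```python
from itertools import product

# Each factor (1 + x) contributes either "" (leave the element out)
# or the element itself (put it in).
elements = ["a", "b", "c"]

subsets = [frozenset(x for x in choice if x)
           for choice in product(*[("", x) for x in elements])]

# Expanding (1 + a)(1 + b)(1 + c) yields all 2**3 = 8 subsets,
# from the empty set up to {a, b, c}.
for s in subsets:
    print(sorted(s))
```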

Selection 7

The Signs for Multiplication (cont.)

The associative principle does not hold in this counting of factors. Because it does not hold, these subjacent numbers are frequently inconvenient in practice, and I therefore use also another mode of showing where the correlate of a term is to be found. This is by means of the marks of reference, \(\dagger ~ \ddagger ~ \parallel ~ \S ~ \P\), which are placed subjacent to the relative term and before and above the correlate. Thus, giver of a horse to a lover of a woman may be written:

\(\mathfrak{g}_{\dagger\ddagger} \, ^\dagger\mathit{l}_\parallel \, ^\parallel\mathrm{w} \, ^\ddagger\mathrm{h}\)

The asterisk I use exclusively to refer to the last correlate of the last relative of the algebraic term.

Now, considering the order of multiplication to be: — a term, a correlate of it, a correlate of that correlate, etc. — there is no violation of the associative principle. The only violations of it in this mode of notation are that in thus passing from relative to correlate, we skip about among the factors in an irregular manner, and that we cannot substitute in such an expression as \(\mathfrak{g}\mathit{o}\mathrm{h}\) a single letter for \(\mathit{o}\mathrm{h}.\!\)

I would suggest that such a notation may be found useful in treating other cases of non-associative multiplication. By comparing this with what was said above [in CP 3.55] concerning functional multiplication, it appears that multiplication by a conjugative term is functional, and that the letter denoting such a term is a symbol of operation. I am therefore using two alphabets, the Greek and Kennerly, where only one was necessary. But it is convenient to use both.

(Peirce, CP 3.71–72).

Comment : Proto-Graphical Syntax

It is clear from our last excerpt that Peirce is already on the verge of a graphical syntax for the logic of relatives. Indeed, it seems likely that he had already reached this point in his own thinking.

For instance, it seems quite impossible to read his last variation on the theme of a "giver of a horse to a lover of a woman" without drawing lines of identity to connect up the corresponding marks of reference, like this:

o---------------------------------------o
|                                       |
|            !        #                 |
|           / \      / \                |
|          o   o    o   o               |
|      `g`_!@  !'l'_#   #w  @h          |
|           o               o           |
|            \_____________/            |
|                   @                   |
|                                       |
o---------------------------------------o
Giver of a Horse to a Lover of a Woman

Selection 8

The Signs for Multiplication (cont.)

Thus far, we have considered the multiplication of relative terms only. Since our conception of multiplication is the application of a relation, we can only multiply absolute terms by considering them as relatives.

Now the absolute term "man" is really exactly equivalent to the relative term "man that is ——", and so with any other. I shall write a comma after any absolute term to show that it is so regarded as a relative term.

Then "man that is black" will be written:

\(\mathrm{m},\!\mathrm{b}\!\)

But not only may any absolute term be thus regarded as a relative term, but any relative term may in the same way be regarded as a relative with one correlate more. It is convenient to take this additional correlate as the first one.

Then:

\(\mathit{l},\!\mathit{s}\mathrm{w}\)

will denote a lover of a woman that is a servant of that woman.

The comma here after \(\mathit{l}\!\) should not be considered as altering at all the meaning of \(\mathit{l}\!\), but as only a subjacent sign, serving to alter the arrangement of the correlates.

In point of fact, since a comma may be added in this way to any relative term, it may be added to one of these very relatives formed by a comma, and thus by the addition of two commas an absolute term becomes a relative of two correlates.

So:

\(\mathrm{m},\!,\!\mathrm{b},\!\mathrm{r}\)

interpreted like

\(\mathfrak{g}\mathit{o}\mathrm{h}\)

means a man that is a rich individual and is a black that is that rich individual.

But this has no other meaning than:

\(\mathrm{m},\!\mathrm{b},\!\mathrm{r}\)

or a man that is a black that is rich.

Thus we see that, after one comma is added, the addition of another does not change the meaning at all, so that whatever has one comma after it must be regarded as having an infinite number.

If, therefore, \(\mathit{l},\!,\!\mathit{s}\mathrm{w}\) is not the same as \(\mathit{l},\!\mathit{s}\mathrm{w}\) (as it plainly is not, because the latter means a lover and servant of a woman, and the former a lover of and servant of and same as a woman), this is simply because the writing of the comma alters the arrangement of the correlates.

And if we are to suppose that absolute terms are multipliers at all (as mathematical generality demands that we should), we must regard every term as being a relative requiring an infinite number of correlates to its virtual infinite series "that is —— and is —— and is —— etc."

Now a relative formed by a comma of course receives its subjacent numbers like any relative, but the question is, What are to be the implied subjacent numbers for these implied correlates?

Any term may be regarded as having an infinite number of factors, those at the end being ones, thus:

\(\mathit{l},\!\mathit{s}\mathrm{w} ~=~ \mathit{l},\!\mathit{s}\mathrm{w},\!\mathit{1},\!\mathit{1},\!\mathit{1},\!\mathit{1},\!\mathit{1},\!\mathit{1},\!\mathit{1}, ~\text{etc.}\)

A subjacent number may therefore be as great as we please.

But all these ones denote the same identical individual denoted by \(\mathrm{w}\!\); what then can be the subjacent numbers to be applied to \(\mathit{s}\!\), for instance, on account of its infinite "that is"'s? What numbers can separate it from being identical with \(\mathrm{w}\!\)? There are only two. The first is zero, which plainly neutralizes a comma completely, since

\(\mathit{s},_0\!\mathrm{w} ~=~ \mathit{s}\mathrm{w}\)

and the other is infinity; for as \(1^\infty\) is indeterminate in ordinary algebra, so it will be shown hereafter to be here, so that to remove the correlate by the product of an infinite series of ones is to leave it indeterminate.

Accordingly,

\(\mathrm{m},_\infty\)

should be regarded as expressing some man.

Any term, then, is properly to be regarded as having an infinite number of commas, all or some of which are neutralized by zeros.

"Something" may then be expressed by:

\(\mathit{1}_\infty\!\)

I shall for brevity frequently express this by an antique figure one \((\mathfrak{1}).\)

"Anything" by:

\(\mathit{1}_0\!\)

I shall often also write a straight \(1\!\) for anything.

(Peirce, CP 3.73).

Commentary Note 8.1

To my way of thinking, CP 3.73 is one of the most remarkable passages in the history of logic. In this first pass over its deeper contents I won't be able to accord it much more than a superficial dusting off.

Let us imagine a concrete example that will serve in developing the uses of Peirce's notation. Entertain a discourse whose universe \(X\!\) will remind us a little of the cast of characters in Shakespeare's Othello.

\(X ~=~ \{ \mathrm{Bianca}, \mathrm{Cassio}, \mathrm{Clown}, \mathrm{Desdemona}, \mathrm{Emilia}, \mathrm{Iago}, \mathrm{Othello} \}.\)

The universe \(X\!\) is "that class of individuals about which alone the whole discourse is understood to run" but its marking out for special recognition as a universe of discourse in no way rules out the possibility that "discourse may run upon something which is not a subjective part of the universe; for instance, upon the qualities or collections of the individuals it contains" (CP 3.65).

In order to provide ourselves with the convenience of abbreviated terms, while preserving Peirce's conventions about capitalization, we may use the alternate names \(^{\backprime\backprime}\mathrm{u}^{\prime\prime}\) for the universe \(X\!\) and \(^{\backprime\backprime}\mathrm{Jeste}^{\prime\prime}\) for the character \(\mathrm{Clown}.\!\) This permits the above description of the universe of discourse to be rewritten in the following fashion:

\(\mathrm{u} ~=~ \{ \mathrm{B}, \mathrm{C}, \mathrm{D}, \mathrm{E}, \mathrm{I}, \mathrm{J}, \mathrm{O} \}\)

This specification of the universe of discourse could be summed up in Peirce's notation by the following equation:

\(\begin{array}{*{15}{c}} 1 & = & \mathrm{B} & +\!\!, & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \end{array}\)

Within this discussion, then, the individual terms are as follows:

\(\begin{matrix} ^{\backprime\backprime}\mathrm{B}^{\prime\prime}, & ^{\backprime\backprime}\mathrm{C}^{\prime\prime}, & ^{\backprime\backprime}\mathrm{D}^{\prime\prime}, & ^{\backprime\backprime}\mathrm{E}^{\prime\prime}, & ^{\backprime\backprime}\mathrm{I}^{\prime\prime}, & ^{\backprime\backprime}\mathrm{J}^{\prime\prime}, & ^{\backprime\backprime}\mathrm{O}^{\prime\prime} \end{matrix}\)

Each of these terms denotes in a singular fashion the corresponding individual in \(X.\!\)

By way of general terms in this discussion, we may begin with the following set:

\(\begin{array}{ccl} ^{\backprime\backprime}\mathrm{b}^{\prime\prime} & = & ^{\backprime\backprime}\mathrm{black}^{\prime\prime} \\[6pt] ^{\backprime\backprime}\mathrm{m}^{\prime\prime} & = & ^{\backprime\backprime}\mathrm{man}^{\prime\prime} \\[6pt] ^{\backprime\backprime}\mathrm{w}^{\prime\prime} & = & ^{\backprime\backprime}\mathrm{woman}^{\prime\prime} \end{array}\)

The denotation of a general term may be given by means of an equation between terms:

\(\begin{array}{*{15}{c}} \mathrm{b} & = & \mathrm{O} \\[6pt] \mathrm{m} & = & \mathrm{C} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{w} & = & \mathrm{B} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} \end{array}\)
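For readers who like to verify such equations mechanically, the denotations transcribe directly into Python sets (a sketch of mine, not Peirce's notation): the logical aggregation \(+\!\!,\) becomes set union.

```python
# The universe of discourse and the denotations of the general terms,
# using one-letter strings for the individuals B, C, D, E, I, J, O.
U = {"B", "C", "D", "E", "I", "J", "O"}

b = {"O"}                  # black
m = {"C", "I", "J", "O"}   # man
w = {"B", "D", "E"}        # woman

# Peirce's "1 = B +, C +, D +, E +, I +, J +, O" reads as a union:
assert m | w == U          # in this cast, every individual is a man or a woman
assert b <= m              # the one black individual is among the men
```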

Commentary Note 8.2

I will continue with my commentary on CP 3.73, developing the Othello example as a way of illustrating its concepts.

In the development of the story so far, we have a universe of discourse that can be characterized by means of the following system of equations:

\(\begin{array}{*{15}{c}} 1 & = & \mathrm{B} & +\!\!, & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{b} & = & \mathrm{O} \\[6pt] \mathrm{m} & = & \mathrm{C} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{w} & = & \mathrm{B} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} \end{array}\)

This much provides a basis for the collection of absolute terms that I plan to use in this example. Let us now consider how we might represent a sufficiently exemplary collection of relative terms.

Consider the genesis of relative terms, for example:

\(\begin{array}{l} ^{\backprime\backprime}\, \text{lover of}\, \underline{~~~~}\, ^{\prime\prime} \\[6pt] ^{\backprime\backprime}\, \text{betrayer to}\, \underline{~~~~}\, \text{of}\, \underline{~~~~}\, ^{\prime\prime} \\[6pt] ^{\backprime\backprime}\, \text{winner over of}\, \underline{~~~~}\, \text{to}\, \underline{~~~~}\, \text{from}\, \underline{~~~~}\, ^{\prime\prime} \end{array}\)

We may regard these fill-in-the-blank forms as being derived by a kind of rhematic abstraction from the corresponding instances of absolute terms.

In other words:

The relative term \(^{\backprime\backprime}\, \text{lover of}\, \underline{~~~~}\, ^{\prime\prime}\)

can be reached by removing the absolute term \(^{\backprime\backprime}\, \text{Emilia}\, ^{\prime\prime}\)

from the absolute term \(^{\backprime\backprime}\, \text{lover of Emilia}\, ^{\prime\prime}.\)

\(\operatorname{Iago}\) is a lover of \(\operatorname{Emilia},\) so the relate-correlate pair \(\operatorname{I}:\operatorname{E}\)

lies in the 2-adic relation associated with the relative term \(^{\backprime\backprime}\, \text{lover of}\, \underline{~~~~}\, ^{\prime\prime}.\)

The relative term \(^{\backprime\backprime}\, \text{betrayer to}\, \underline{~~~~}\, \text{of}\, \underline{~~~~}\, ^{\prime\prime}\)

can be reached by removing the absolute terms \(^{\backprime\backprime}\, \text{Othello}\, ^{\prime\prime}\) and \(^{\backprime\backprime}\, \text{Desdemona}\, ^{\prime\prime}\)

from the absolute term \(^{\backprime\backprime}\, \text{betrayer to Othello of Desdemona}\, ^{\prime\prime}.\)

\(\operatorname{Iago}\) is a betrayer to \(\operatorname{Othello}\) of \(\operatorname{Desdemona},\) so the relate-correlate-correlate triple \(\operatorname{I}:\operatorname{O}:\operatorname{D}\)

lies in the 3-adic relation associated with the relative term \(^{\backprime\backprime}\, \text{betrayer to}\, \underline{~~~~}\, \text{of}\, \underline{~~~~}\, ^{\prime\prime}.\)

The relative term \(^{\backprime\backprime}\, \text{winner over of}\, \underline{~~~~}\, \text{to}\, \underline{~~~~}\, \text{from}\, \underline{~~~~}\, ^{\prime\prime}\)

can be reached by removing the absolute terms \(^{\backprime\backprime}\, \text{Othello}\, ^{\prime\prime},\) \(^{\backprime\backprime}\, \text{Iago}\, ^{\prime\prime},\) and \(^{\backprime\backprime}\, \text{Cassio}\, ^{\prime\prime}\)

from the absolute term \(^{\backprime\backprime}\, \text{winner over of Othello to Iago from Cassio}\, ^{\prime\prime}.\)

\(\operatorname{Iago}\) is a winner over of \(\operatorname{Othello}\) to \(\operatorname{Iago}\) from \(\operatorname{Cassio},\) so the elementary relative term \(\operatorname{I}:\operatorname{O}:\operatorname{I}:\operatorname{C}\)

lies in the 4-adic relation associated with the relative term \(^{\backprime\backprime}\, \text{winner over of}\, \underline{~~~~}\, \text{to}\, \underline{~~~~}\, \text{from}\, \underline{~~~~}\, ^{\prime\prime}.\)

Commentary Note 8.3

Speaking very strictly, we need to be careful to distinguish a relation from a relative term.

The relation is an object of thought that may be regarded in extension as a set of ordered tuples that are known as its elementary relations.

The relative term is a sign that denotes certain objects, called its relates, as these are determined in relation to certain other objects, called its correlates. Under most circumstances, one may also regard the relative term as denoting the corresponding relation.

Returning to the Othello example, let us take up the 2-adic relatives \(^{\backprime\backprime}\, \text{lover of}\, \underline{~~~~}\, ^{\prime\prime}\) and \(^{\backprime\backprime}\, \text{servant of}\, \underline{~~~~}\, ^{\prime\prime}.\)

Ignoring the many splendored nuances appurtenant to the idea of love, we may regard the relative term \(\mathit{l}\!\) for \(^{\backprime\backprime}\, \text{lover of}\, \underline{~~~~}\, ^{\prime\prime}\) to be given by the following equation:

\(\begin{array}{*{13}{c}} \mathit{l} & = & \mathrm{B}:\mathrm{C} & +\!\!, & \mathrm{C}:\mathrm{B} & +\!\!, & \mathrm{D}:\mathrm{O} & +\!\!, & \mathrm{E}:\mathrm{I} & +\!\!, & \mathrm{I}:\mathrm{E} & +\!\!, & \mathrm{O}:\mathrm{D} \end{array}\)

If for no better reason than to make the example more interesting, let us put aside all distinctions of rank and fealty, collapsing the motley crews of attendant, servant, subordinate, and so on, under the heading of a single service, denoted by the relative term \(\mathit{s}\!\) for \(^{\backprime\backprime}\, \text{servant of}\, \underline{~~~~}\, ^{\prime\prime}.\) The terms of this service are:

\(\begin{array}{*{11}{c}} \mathit{s} & = & \mathrm{C}:\mathrm{O} & +\!\!, & \mathrm{E}:\mathrm{D} & +\!\!, & \mathrm{I}:\mathrm{O} & +\!\!, & \mathrm{J}:\mathrm{D} & +\!\!, & \mathrm{J}:\mathrm{O} \end{array}\)

The term \(\mathrm{I}:\mathrm{C}\!\) may also be implied, but, since it is so hotly arguable, I will leave it out of the tally.

One more thing that we need to be duly wary about: There are many different conventions in the field as to the ordering of terms in their applications, and it happens that different conventions will be more convenient under different circumstances, so there does not appear to be much of a chance that any one of them can be canonized once and for all.

In the current reading, we are applying relative terms from right to left, and so our conception of relative multiplication, or relational composition, will need to be adjusted accordingly.
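A small Python sketch (my own gloss, with the relations copied from the equations above) makes the right-to-left convention concrete: applying a relative term to a set collects every relate whose correlate falls in that set.

```python
# 2-adic relative terms as sets of (relate, correlate) pairs.
l = {("B","C"), ("C","B"), ("D","O"), ("E","I"), ("I","E"), ("O","D")}
s = {("C","O"), ("E","D"), ("I","O"), ("J","D"), ("J","O")}

w = {"B", "D", "E"}  # woman

def apply_rel(rel, subset):
    """Right-to-left application: 'lover of a woman' keeps each relate x
    such that (x, y) lies in the relation for some y in the subset."""
    return {x for (x, y) in rel if y in subset}

print(sorted(apply_rel(l, w)))  # ['C', 'I', 'O'] : lovers of a woman
print(sorted(apply_rel(s, w)))  # ['E', 'J']      : servants of a woman
```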

Commentary Note 8.4

To familiarize ourselves with the forms of calculation that are available in Peirce's notation, let us compute a few of the simplest products that we find at hand in the Othello case.

Here are the absolute terms:

\(\begin{array}{*{15}{c}} 1 & = & \mathrm{B} & +\!\!, & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{b} & = & \mathrm{O} \\[6pt] \mathrm{m} & = & \mathrm{C} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{w} & = & \mathrm{B} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} \end{array}\)

Here are the 2-adic relative terms:

\(\begin{array}{*{13}{c}} \mathit{l} & = & \mathrm{B}:\mathrm{C} & +\!\!, & \mathrm{C}:\mathrm{B} & +\!\!, & \mathrm{D}:\mathrm{O} & +\!\!, & \mathrm{E}:\mathrm{I} & +\!\!, & \mathrm{I}:\mathrm{E} & +\!\!, & \mathrm{O}:\mathrm{D} \\[6pt] \mathit{s} & = & \mathrm{C}:\mathrm{O} & +\!\!, & \mathrm{E}:\mathrm{D} & +\!\!, & \mathrm{I}:\mathrm{O} & +\!\!, & \mathrm{J}:\mathrm{D} & +\!\!, & \mathrm{J}:\mathrm{O} \end{array}\)

Here are a few of the simplest products among these terms:

\(\begin{array}{lll} \mathit{l}1 & = & \text{lover of anything} \\[6pt] & = & (\mathrm{B}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{D}) \\ & & \times \\ & & (\mathrm{B} ~+\!\!,~ \mathrm{C} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{B} ~+\!\!,~ \mathrm{C} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{O} \\[6pt] & = & \text{anything except}~\mathrm{J} \end{array}\)

\(\begin{array}{lll} \mathit{l}\mathrm{b} & = & \text{lover of a black} \\[6pt] & = & (\mathrm{B}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{D}) \\ & & \times \\ & & \mathrm{O} \\[6pt] & = & \mathrm{D} \end{array}\)

\(\begin{array}{lll} \mathit{l}\mathrm{m} & = & \text{lover of a man} \\[6pt] & = & (\mathrm{B}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{D}) \\ & & \times \\ & & (\mathrm{C} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{B} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E} \end{array}\)

\(\begin{array}{lll} \mathit{l}\mathrm{w} & = & \text{lover of a woman} \\[6pt] & = & (\mathrm{B}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{D}) \\ & & \times \\ & & (\mathrm{B} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E}) \\[6pt] & = & \mathrm{C} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{O} \end{array}\)

\(\begin{array}{lll} \mathit{s}1 & = & \text{servant of anything} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{B} ~+\!\!,~ \mathrm{C} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{C} ~+\!\!,~ \mathrm{E} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} \end{array}\)

\(\begin{array}{lll} \mathit{s}\mathrm{b} & = & \text{servant of a black} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O}) \\ & & \times \\ & & \mathrm{O} \\[6pt] & = & \mathrm{C} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} \end{array}\)

\(\begin{array}{lll} \mathit{s}\mathrm{m} & = & \text{servant of a man} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{C} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{C} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} \end{array}\)

\(\begin{array}{lll} \mathit{s}\mathrm{w} & = & \text{servant of a woman} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{B} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E}) \\[6pt] & = & \mathrm{E} ~+\!\!,~ \mathrm{J} \end{array}\)

\(\begin{array}{lll} \mathit{l}\mathit{s} & = & \text{lover of a servant of}\, \underline{~~~~} \\[6pt] & = & (\mathrm{B}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{D}) \\ & & \times \\ & & (\mathrm{C}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O}) \\[6pt] & = & \mathrm{B}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{D} \end{array}\)

\(\begin{array}{lll} \mathit{s}\mathit{l} & = & \text{servant of a lover of}\, \underline{~~~~} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{B}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{D}) \\[6pt] & = & \mathrm{C}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{O} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{O} \end{array}\)

Among other things, one observes that the relative terms \(\mathit{l}\!\) and \(\mathit{s}\!\) do not commute, that is, \(\mathit{l}\mathit{s}\!\) is not equal to \(\mathit{s}\mathit{l}.\!\)
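The non-commutativity is easy to verify mechanically. The following Python sketch (helper names are mine) composes the two relations in the text's right-to-left order and reproduces the products \(\mathit{l}\mathit{s}\!\) and \(\mathit{s}\mathit{l}\!\) computed above.

```python
l = {("B","C"), ("C","B"), ("D","O"), ("E","I"), ("I","E"), ("O","D")}
s = {("C","O"), ("E","D"), ("I","O"), ("J","D"), ("J","O")}

def compose(f, g):
    """(x, z) belongs to the product f g when some middle term y
    has (x, y) in f and (y, z) in g."""
    return {(x, z) for (x, y) in f for (y2, z) in g if y == y2}

ls = compose(l, s)   # lover of a servant of ____
sl = compose(s, l)   # servant of a lover of ____

print(sorted(ls))    # [('B', 'O'), ('E', 'O'), ('I', 'D')]
print(ls == sl)      # False: l and s do not commute
```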

Commentary Note 8.5

Since multiplication by a 2-adic relative term is a logical analogue of matrix multiplication in linear algebra, all of the products that we computed above can be represented in terms of logical matrices and logical vectors.

Here are the absolute terms again, followed by their representation as coefficient tuples, otherwise thought of as coordinate vectors.

\(\begin{array}{ccrcccccccccccl} \mathbf{1} & = & \mathrm{B} & +\!\!, & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[10pt] & = & (1 & , & 1 & , & 1 & , & 1 & , & 1 & , & 1 & , & 1) \\[20pt] \mathrm{b} & = & & & & & & & & & & & & & \mathrm{O} \\[10pt] & = & (0 & , & 0 & , & 0 & , & 0 & , & 0 & , & 0 & , & 1) \\[20pt] \mathrm{m} & = & & & \mathrm{C} & & & & & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[10pt] & = & (0 & , & 1 & , & 0 & , & 0 & , & 1 & , & 1 & , & 1) \\[20pt] \mathrm{w} & = & \mathrm{B} & & & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & & & & & & \\[10pt] & = & (1 & , & 0 & , & 1 & , & 1 & , & 0 & , & 0 & , & 0) \end{array}\)

Since we are going to be regarding these tuples as column vectors, it is convenient to arrange them into a table of the following form:

\(\begin{array}{c|cccc} \text{ } & \mathbf{1} & \mathrm{b} & \mathrm{m} & \mathrm{w} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 1 & 0 & 0 & 1 \\ \mathrm{C} & 1 & 0 & 1 & 0 \\ \mathrm{D} & 1 & 0 & 0 & 1 \\ \mathrm{E} & 1 & 0 & 0 & 1 \\ \mathrm{I} & 1 & 0 & 1 & 0 \\ \mathrm{J} & 1 & 0 & 1 & 0 \\ \mathrm{O} & 1 & 1 & 1 & 0 \end{array}\)
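Turning the terms into column vectors is mechanical once an ordering of the universe is fixed; this Python sketch (an illustration of mine) rebuilds the table above column by column.

```python
U = ["B", "C", "D", "E", "I", "J", "O"]  # fixed ordering of the universe

def column(subset):
    """Coefficient (column) vector of an absolute term over the ordering U."""
    return [1 if x in subset else 0 for x in U]

one = column(set(U))                 # the term 1, true of everything
b   = column({"O"})                  # black
m   = column({"C", "I", "J", "O"})   # man
w   = column({"B", "D", "E"})        # woman

assert one == [1, 1, 1, 1, 1, 1, 1]
assert m   == [0, 1, 0, 0, 1, 1, 1]  # matches the m column of the table
```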

Here are the 2-adic relative terms again, followed by their representation as coefficient matrices, in this case bordered by row and column labels to remind us what the coefficient values are meant to signify.

\(\begin{array}{*{13}{c}} \mathit{l} & = & \mathrm{B}:\mathrm{C} & +\!\!, & \mathrm{C}:\mathrm{B} & +\!\!, & \mathrm{D}:\mathrm{O} & +\!\!, & \mathrm{E}:\mathrm{I} & +\!\!, & \mathrm{I}:\mathrm{E} & +\!\!, & \mathrm{O}:\mathrm{D} \end{array}\)

\(\begin{array}{c|ccccccc} \mathit{l} & \mathrm{B} & \mathrm{C} & \mathrm{D} & \mathrm{E} & \mathrm{I} & \mathrm{J} & \mathrm{O} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{C} & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{D} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \mathrm{E} & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \mathrm{I} & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ \mathrm{J} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{O} & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{array}\)

\(\begin{array}{*{13}{c}} \mathit{s} & = & \mathrm{C}:\mathrm{O} & +\!\!, & \mathrm{E}:\mathrm{D} & +\!\!, & \mathrm{I}:\mathrm{O} & +\!\!, & \mathrm{J}:\mathrm{D} & +\!\!, & \mathrm{J}:\mathrm{O} \end{array}\)

\(\begin{array}{c|ccccccc} \mathit{s} & \mathrm{B} & \mathrm{C} & \mathrm{D} & \mathrm{E} & \mathrm{I} & \mathrm{J} & \mathrm{O} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{C} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \mathrm{D} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{E} & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \mathrm{I} & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \mathrm{J} & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \mathrm{O} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\)

Here are the matrix representations of the products that we calculated before:

\(\begin{matrix} \mathit{l}\mathbf{1} & = & \text{lover of anything} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 0 \\ 1 \end{bmatrix} \)

\(\begin{matrix} \mathit{l}\mathrm{b} & = & \text{lover of a black} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{l}\mathrm{m} & = & \text{lover of a man} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{l}\mathrm{w} & = & \text{lover of a woman} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \end{bmatrix} \)

\(\begin{matrix} \mathit{s}\mathbf{1} & = & \text{servant of anything} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 1 \\ 1 \\ 1 \\ 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{s}\mathrm{b} & = & \text{servant of a black} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{s}\mathrm{m} & = & \text{servant of a man} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{s}\mathrm{w} & = & \text{servant of a woman} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{l}\mathit{s} & = & \text{lover of a servant of}\, \underline{~~~~} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \)

\(\begin{matrix} \mathit{s}\mathit{l} & = & \text{servant of a lover of}\, \underline{~~~~} & = \end{matrix}\)

\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \)
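All of the matrix computations above can be checked with boolean matrix arithmetic, taking \(+\!\) as logical or and \(\times\!\) as logical and. Here is a self-contained Python sketch (the helper names are my own) that rebuilds the matrices of \(\mathit{l}\!\) and \(\mathit{s}\!\) and confirms the products.

```python
U = ["B", "C", "D", "E", "I", "J", "O"]
idx = {x: i for i, x in enumerate(U)}

def matrix(rel):
    """Coefficient matrix of a 2-adic relative term:
    rows index relates, columns index correlates."""
    M = [[0] * len(U) for _ in U]
    for x, y in rel:
        M[idx[x]][idx[y]] = 1
    return M

def boolmul(A, B):
    """Matrix product over the boolean semiring: or for +, and for x."""
    n = len(U)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

L = matrix({("B","C"), ("C","B"), ("D","O"), ("E","I"), ("I","E"), ("O","D")})
S = matrix({("C","O"), ("E","D"), ("I","O"), ("J","D"), ("J","O")})

# The product l s = B:O +, E:O +, I:D, exactly as computed above.
assert boolmul(L, S) == matrix({("B","O"), ("E","O"), ("I","D")})
```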

Commentary Note 8.6

The foregoing has hopefully filled in enough background that we can begin to make sense of the more mysterious parts of CP 3.73.

Thus far, we have considered the multiplication of relative terms only. Since our conception of multiplication is the application of a relation, we can only multiply absolute terms by considering them as relatives.

Now the absolute term "man" is really exactly equivalent to the relative term "man that is ——", and so with any other. I shall write a comma after any absolute term to show that it is so regarded as a relative term.

Then "man that is black" will be written:

\(\mathrm{m},\!\mathrm{b}\)

(Peirce, CP 3.73).

In any system where elements are organized according to types, there tend to be any number of ways in which elements of one type are naturally associated with elements of another type. If the association is anything like a logical equivalence, but with the first type being lower and the second type being higher in some sense, then one may speak of a semantic ascent from the lower to the higher type.

For example, it is common in mathematics to associate an element \(a\!\) of a set \(A\!\) with the constant function \(f_a : X \to A\) that has \(f_a (x) = a\!\) for all \(x\!\) in \(X,\!\) where \(X\!\) is an arbitrary set. Indeed, the correspondence is so close that one often uses the same name \({}^{\backprime\backprime} a {}^{\prime\prime}\) to denote both the element \(a\!\) in \(A\!\) and the function \(a = f_a : X \to A,\) relying on the context or an explicit type indication to tell them apart.

For another example, we have the tacit extension of a \(k\!\)-place relation \(L \subseteq X_1 \times \ldots \times X_k\!\) to a \((k+1)\!\)-place relation \(L^\prime \subseteq X_1 \times \ldots \times X_{k+1}\!\) that we get by letting \(L^\prime = L \times X_{k+1},\) that is, by maintaining the constraints of \(L\!\) on the first \(k\!\) variables and letting the last variable wander freely.

What we have here, if I understand Peirce correctly, is another such type of natural extension, sometimes called the diagonal extension. This extension associates a \(k\!\)-adic relative or a \(k\!\)-adic relation (counting an absolute term, and the set whose elements it denotes, as the case \(k = 0\)) with a series of relatives and relations of higher adicities.

A few examples will suffice to anchor these ideas.

Absolute terms:

\(\begin{array}{*{11}{c}} \mathrm{m} & = & \text{man} & = & \mathrm{C} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{n} & = & \text{noble} & = & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{w} & = & \text{woman} & = & \mathrm{B} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} \end{array}\)

Diagonal extensions:

\(\begin{array}{*{11}{c}} \mathrm{m,} & = & \text{man that is}\, \underline{~~~~} & = & \mathrm{C}:\mathrm{C} & +\!\!, & \mathrm{I}:\mathrm{I} & +\!\!, & \mathrm{J}:\mathrm{J} & +\!\!, & \mathrm{O}:\mathrm{O} \\[6pt] \mathrm{n,} & = & \text{noble that is}\, \underline{~~~~} & = & \mathrm{C}:\mathrm{C} & +\!\!, & \mathrm{D}:\mathrm{D} & +\!\!, & \mathrm{O}:\mathrm{O} \\[6pt] \mathrm{w,} & = & \text{woman that is}\, \underline{~~~~} & = & \mathrm{B}:\mathrm{B} & +\!\!, & \mathrm{D}:\mathrm{D} & +\!\!, & \mathrm{E}:\mathrm{E} \end{array}\)

Sample products:

\(\begin{array}{lll} \mathrm{m},\!\mathrm{n} & = & \text{man that is noble} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{J} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{C} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{C} ~+\!\!,~ \mathrm{O} \end{array}\)

\(\begin{array}{lll} \mathrm{n},\!\mathrm{m} & = & \text{noble that is a man} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{C} ~+\!\!,~ \mathrm{I} ~+\!\!,~ \mathrm{J} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{C} ~+\!\!,~ \mathrm{O} \end{array}\)

\(\begin{array}{lll} \mathrm{w},\!\mathrm{n} & = & \text{woman that is noble} \\[6pt] & = & (\mathrm{B}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{E}) \\ & & \times \\ & & (\mathrm{C} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{O}) \\[6pt] & = & \mathrm{D} \end{array}\)

\(\begin{array}{lll} \mathrm{n},\!\mathrm{w} & = & \text{noble that is a woman} \\[6pt] & = & (\mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O}) \\ & & \times \\ & & (\mathrm{B} ~+\!\!,~ \mathrm{D} ~+\!\!,~ \mathrm{E}) \\[6pt] & = & \mathrm{D} \end{array}\)
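Set-theoretically, applying the diagonal extension of one absolute term to another amounts to intersecting their denotations, which is why all four products above come out symmetric. A minimal Python sketch, with the set names taken from the text and the helper function my own:

```python
# Denotations of the absolute terms, as given in the text.
m = {"C", "I", "J", "O"}   # man
n = {"C", "D", "O"}        # noble
w = {"B", "D", "E"}        # woman

def comma_apply(s, t):
    """Apply the diagonal extension of s to the set t.
    Composing the relation {(x, x) : x in s} with t keeps just the
    members of t that also lie in s, i.e. the intersection of s and t."""
    return {x for x in t if x in s}

assert comma_apply(m, n) == comma_apply(n, m) == {"C", "O"}
assert comma_apply(w, n) == comma_apply(n, w) == {"D"}
```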

Selection 9

The Signs for Multiplication (cont.)

It is obvious that multiplication into a multiplicand indicated by a comma is commutative¹, that is,

\(\mathit{s},\!\mathit{l} ~=~ \mathit{l},\!\mathit{s}\)

This multiplication is effectively the same as that of Boole in his logical calculus. Boole's unity is my \(\mathbf{1},\) that is, it denotes whatever is.

  1. It will often be convenient to speak of the whole operation of affixing a comma and then multiplying as a commutative multiplication, the sign for which is the comma. But though this is allowable, we shall fall into confusion at once if we ever forget that in point of fact it is not a different multiplication, only it is multiplication by a relative whose meaning — or rather whose syntax — has been slightly altered; and that the comma is really the sign of this modification of the foregoing term.

(Peirce, CP 3.74).

Commentary Note 9.1

Let us backtrack a few years, and consider how George Boole explained his twin conceptions of selective operations and selective symbols.

Let us then suppose that the universe of our discourse is the actual universe, so that words are to be used in the full extent of their meaning, and let us consider the two mental operations implied by the words "white" and "men". The word "men" implies the operation of selecting in thought from its subject, the universe, all men; and the resulting conception, men, becomes the subject of the next operation. The operation implied by the word "white" is that of selecting from its subject, "men", all of that class which are white. The final resulting conception is that of "white men".

Now it is perfectly apparent that if the operations above described had been performed in a converse order, the result would have been the same. Whether we begin by forming the conception of "men", and then by a second intellectual act limit that conception to "white men", or whether we begin by forming the conception of "white objects", and then limit it to such of that class as are "men", is perfectly indifferent so far as the result is concerned. It is obvious that the order of the mental processes would be equally indifferent if for the words "white" and "men" we substituted any other descriptive or appellative terms whatever, provided only that their meaning was fixed and absolute. And thus the indifference of the order of two successive acts of the faculty of Conception, the one of which furnishes the subject upon which the other is supposed to operate, is a general condition of the exercise of that faculty. It is a law of the mind, and it is the real origin of that law of the literal symbols of Logic which constitutes its formal expression (1) Chap. II, [ namely, \(xy = yx\!\) ].

It is equally clear that the mental operation above described is of such a nature that its effect is not altered by repetition. Suppose that by a definite act of conception the attention has been fixed upon men, and that by another exercise of the same faculty we limit it to those of the race who are white. Then any further repetition of the latter mental act, by which the attention is limited to white objects, does not in any way modify the conception arrived at, viz., that of white men. This is also an example of a general law of the mind, and it has its formal expression in the law ((2) Chap. II) of the literal symbols [ namely, \(x^2 = x\!\) ].

(Boole, Laws of Thought, 44–45).

Commentary Note 9.2

In setting up his discussion of selective operations and their corresponding selective symbols, Boole writes this:

The operation which we really perform is one of selection according to a prescribed principle or idea. To what faculties of the mind such an operation would be referred, according to the received classification of its powers, it is not important to inquire, but I suppose that it would be considered as dependent upon the two faculties of Conception or Imagination, and Attention. To the one of these faculties might be referred the formation of the general conception; to the other the fixing of the mental regard upon those individuals within the prescribed universe of discourse which answer to the conception. If, however, as seems not improbable, the power of Attention is nothing more than the power of continuing the exercise of any other faculty of the mind, we might properly regard the whole of the mental process above described as referrible to the mental faculty of Imagination or Conception, the first step of the process being the conception of the Universe itself, and each succeeding step limiting in a definite manner the conception thus formed. Adopting this view, I shall describe each such step, or any definite combination of such steps, as a definite act of conception.

(Boole, Laws of Thought, 43).

Commentary Note 9.3

In algebra, an idempotent element \(x\!\) is one that obeys the idempotent law, that is, it satisfies the equation \(xx = x.\!\) Under most circumstances, it is usual to write this as \(x^2 = x.\!\)

If the algebraic system in question falls under the additional laws that are necessary to carry out the requisite transformations, then \(x^2 = x\!\) is convertible into \(x - x^2 = 0,\!\) and this into \(x(1 - x) = 0.\!\)

If the algebraic system in question happens to be a boolean algebra, then the equation \(x(1 - x) = 0\!\) says that \(x \land \lnot x\) is identically false, in effect, a statement of the classical principle of non-contradiction.
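Over the boolean coefficient domain \(\{ 0, 1 \},\) both laws can be checked exhaustively; a two-line Python sanity check:

```python
# Exhaustive check of the idempotent law and its consequence
# over the boolean coefficients 0 and 1.
for x in (0, 1):
    assert x * x == x          # x^2 = x
    assert x * (1 - x) == 0    # x(1 - x) = 0, non-contradiction
```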

We have already seen how Boole found rationales for the commutative law and the idempotent law by contemplating the properties of selective operations.

It is time to bring these threads together, which we can do by considering the so-called idempotent representation of sets. This will give us one of the best ways to understand the significance that Boole attached to selective operations. It will also link up with the statements that Peirce makes about his adicity-augmenting comma operation.

Commentary Note 9.4

Boole rationalized the properties of what we now call boolean multiplication, roughly equivalent to logical conjunction, in terms of the laws that apply to selective operations. Peirce, in his turn, takes a very significant step of analysis, one that has seldom been recognized for what it would lead to: he does not treat this multiplication as a fundamental operation, but derives it as a by-product of relative multiplication by a comma relative. Thus, Peirce makes logical conjunction a special case of relative composition.

This opens up a very wide field of investigation, the operational significance of logical terms, one might say, but it will be best to advance bit by bit, and to lean on simple examples.

Back to Venice, and the close-knit party of absolutes and relatives that we were entertaining when last we were there.

Here is the list of absolute terms that we were considering before, to which I have thrown in \(\mathbf{1},\) the universe of anything, just for good measure:

\(\begin{array}{*{17}{l}} \mathbf{1} & = & \text{anything} & = & \mathrm{B} & +\!\!, & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{m} & = & \text{man} & = & \mathrm{C} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{n} & = & \text{noble} & = & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{w} & = & \text{woman} & = & \mathrm{B} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} \end{array}\)

Here is the list of comma inflexions or diagonal extensions of these terms:

\(\begin{array}{lll} \mathbf{1,} & = & \text{anything that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{B}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{J} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O} \\[9pt] \mathrm{m,} & = & \text{man that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{J} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O} \\[9pt] \mathrm{n,} & = & \text{noble that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O} \\[9pt] \mathrm{w,} & = & \text{woman that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{B}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{E} \end{array}\)

One observes that the diagonal extension of \(\mathbf{1}\) is the same thing as the identity relation \(\mathit{1}.\!\)

Working within our smaller sample of absolute terms, we have already computed the sorts of products that apply the diagonal extension of an absolute term to another absolute term, for instance, these products:

\(\begin{array}{lllll} \mathrm{m},\!\mathrm{n} & = & \text{man that is noble} & = & \mathrm{C} ~+\!\!,~ \mathrm{O} \\[6pt] \mathrm{n},\!\mathrm{m} & = & \text{noble that is a man} & = & \mathrm{C} ~+\!\!,~ \mathrm{O} \\[6pt] \mathrm{w},\!\mathrm{n} & = & \text{woman that is noble} & = & \mathrm{D} \\[6pt] \mathrm{n},\!\mathrm{w} & = & \text{noble that is a woman} & = & \mathrm{D} \end{array}\)

This exercise gave us a bit of practical insight into why the commutative law holds for logical conjunction.

Further insight into the laws that govern this realm of logic, and the underlying reasons why they apply, might be gained by systematically working through the whole variety of different products that are generated by the operational means in sight, namely, the products indicated by \(\{\mathbf{1}, \mathrm{m}, \mathrm{n}, \mathrm{w} \} , \{\mathbf{1}, \mathrm{m}, \mathrm{n}, \mathrm{w} \}.\)

But before we try to explore this territory more systematically, let us equip our intuitions with the forms of graphical and matrical representation that served us so well in our previous adventures.

Commentary Note 9.5

Peirce's comma operation, in its application to an absolute term, is tantamount to the representation of that term's denotation as an idempotent transformation, which is commonly represented as a diagonal matrix. This is why I call it the diagonal extension.

An idempotent element \(x\!\) is given by the abstract condition that \(xx = x,\!\) but we commonly encounter such elements in more concrete circumstances, acting as operators or transformations on other sets or spaces, and in that action they will often be represented as matrices of coefficients.

Let's see how all of this looks from the graphical and matrical perspectives.

Absolute terms:

\(\begin{array}{*{17}{l}} \mathbf{1} & = & \text{anything} & = & \mathrm{B} & +\!\!, & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{m} & = & \text{man} & = & \mathrm{C} & +\!\!, & \mathrm{I} & +\!\!, & \mathrm{J} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{n} & = & \text{noble} & = & \mathrm{C} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{O} \\[6pt] \mathrm{w} & = & \text{woman} & = & \mathrm{B} & +\!\!, & \mathrm{D} & +\!\!, & \mathrm{E} \end{array}\)

Previously, we represented absolute terms as column vectors. The above four terms are given by the columns of the following table:

\(\begin{array}{c|cccc} \text{ } & \mathbf{1} & \mathrm{m} & \mathrm{n} & \mathrm{w} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 1 & 0 & 0 & 1 \\ \mathrm{C} & 1 & 1 & 1 & 0 \\ \mathrm{D} & 1 & 0 & 1 & 1 \\ \mathrm{E} & 1 & 0 & 0 & 1 \\ \mathrm{I} & 1 & 1 & 0 & 0 \\ \mathrm{J} & 1 & 1 & 0 & 0 \\ \mathrm{O} & 1 & 1 & 1 & 0 \end{array}\)
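The table can be generated directly from the set denotations. In the following sketch (the `column` helper is my own device), each term maps to its 0-1 column vector in the row order \(\mathrm{B}, \mathrm{C}, \mathrm{D}, \mathrm{E}, \mathrm{I}, \mathrm{J}, \mathrm{O}\):

```python
X = ["B", "C", "D", "E", "I", "J", "O"]   # row order of the table

def column(term):
    """0-1 column vector coding a subset of the universe X."""
    return [int(x in term) for x in X]

one = set(X)                 # anything
m   = {"C", "I", "J", "O"}   # man
n   = {"C", "D", "O"}        # noble
w   = {"B", "D", "E"}        # woman

# The four assertions reproduce the four columns of the table.
assert column(one) == [1, 1, 1, 1, 1, 1, 1]
assert column(m)   == [0, 1, 0, 0, 1, 1, 1]
assert column(n)   == [0, 1, 1, 0, 0, 0, 1]
assert column(w)   == [1, 0, 1, 1, 0, 0, 0]
```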

One way to represent sets in the bigraph picture is simply to mark the nodes in some way, like so:

    B   C   D   E   I   J   O
1   +   +   +   +   +   +   +

    B   C   D   E   I   J   O
m   o   +   o   o   +   +   +

    B   C   D   E   I   J   O
n   o   +   +   o   o   o   +

    B   C   D   E   I   J   O
w   +   o   +   +   o   o   o

Diagonal extensions of the absolute terms:

\(\begin{array}{lll} \mathbf{1,} & = & \text{anything that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{B}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{E} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{J} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O} \\[9pt] \mathrm{m,} & = & \text{man that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{I}\!:\!\mathrm{I} ~+\!\!,~ \mathrm{J}\!:\!\mathrm{J} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O} \\[9pt] \mathrm{n,} & = & \text{noble that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{C}\!:\!\mathrm{C} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{O}\!:\!\mathrm{O} \\[9pt] \mathrm{w,} & = & \text{woman that is}\, \underline{~~~~} \\[6pt] & = & \mathrm{B}\!:\!\mathrm{B} ~+\!\!,~ \mathrm{D}\!:\!\mathrm{D} ~+\!\!,~ \mathrm{E}\!:\!\mathrm{E} \end{array}\)

Naturally enough, the diagonal extensions are represented by diagonal matrices:


\(\begin{array}{c|ccccccc} \mathbf{1,} & \mathrm{B} & \mathrm{C} & \mathrm{D} & \mathrm{E} & \mathrm{I} & \mathrm{J} & \mathrm{O} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 1 & & & & & & \\ \mathrm{C} & & 1 & & & & & \\ \mathrm{D} & & & 1 & & & & \\ \mathrm{E} & & & & 1 & & & \\ \mathrm{I} & & & & & 1 & & \\ \mathrm{J} & & & & & & 1 & \\ \mathrm{O} & & & & & & & 1 \end{array}\)


\(\begin{array}{c|ccccccc} \mathrm{m,} & \mathrm{B} & \mathrm{C} & \mathrm{D} & \mathrm{E} & \mathrm{I} & \mathrm{J} & \mathrm{O} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 0 & & & & & & \\ \mathrm{C} & & 1 & & & & & \\ \mathrm{D} & & & 0 & & & & \\ \mathrm{E} & & & & 0 & & & \\ \mathrm{I} & & & & & 1 & & \\ \mathrm{J} & & & & & & 1 & \\ \mathrm{O} & & & & & & & 1 \end{array}\)

\(\begin{array}{c|ccccccc} \mathrm{n,} & \mathrm{B} & \mathrm{C} & \mathrm{D} & \mathrm{E} & \mathrm{I} & \mathrm{J} & \mathrm{O} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 0 & & & & & & \\ \mathrm{C} & & 1 & & & & & \\ \mathrm{D} & & & 1 & & & & \\ \mathrm{E} & & & & 0 & & & \\ \mathrm{I} & & & & & 0 & & \\ \mathrm{J} & & & & & & 0 & \\ \mathrm{O} & & & & & & & 1 \end{array}\)

\(\begin{array}{c|ccccccc} \mathrm{w,} & \mathrm{B} & \mathrm{C} & \mathrm{D} & \mathrm{E} & \mathrm{I} & \mathrm{J} & \mathrm{O} \\ \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} & \text{---} \\ \mathrm{B} & 1 & & & & & & \\ \mathrm{C} & & 0 & & & & & \\ \mathrm{D} & & & 1 & & & & \\ \mathrm{E} & & & & 1 & & & \\ \mathrm{I} & & & & & 0 & & \\ \mathrm{J} & & & & & & 0 & \\ \mathrm{O} & & & & & & & 0 \end{array}\)
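Each of these diagonal matrices can be generated from the corresponding column vector, and each is idempotent under the boolean matrix product, in line with the earlier remarks on idempotents. A Python sketch, where `diag` and `bool_mult` are my own helpers:

```python
X = ["B", "C", "D", "E", "I", "J", "O"]

def diag(term):
    """Diagonal 0-1 matrix of a set: 1 at (i, i) iff X[i] is in the set."""
    n = len(X)
    return [[int(i == j and X[i] in term) for j in range(n)] for i in range(n)]

def bool_mult(A, B):
    """Boolean matrix product: OR of ANDs."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

# anything, man, noble, woman: each diagonal extension satisfies D.D = D.
for term in [set(X), {"C","I","J","O"}, {"C","D","O"}, {"B","D","E"}]:
    D = diag(term)
    assert bool_mult(D, D) == D   # idempotence of the comma relative
```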

Cast into the bigraph picture of 2-adic relations, the diagonal extension of an absolute term takes on a very distinctive sort of "straight-laced" character:

    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
    |   |   |   |   |   |   |
1,  |   |   |   |   |   |   |
    |   |   |   |   |   |   |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O
    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
        |           |   |   |
m,      |           |   |   |
        |           |   |   |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O
    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
        |   |               |
n,      |   |               |
        |   |               |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O
    B   C   D   E   I   J   O
u   o   o   o   o   o   o   o
    |       |   |
w,  |       |   |
    |       |   |
u   o   o   o   o   o   o   o
    B   C   D   E   I   J   O

Commentary Note 9.6

Just to be doggedly persistent about it all, here is what ought to be a sufficient sample of products involving the multiplication of a comma relative onto an absolute term, presented in both graphical and matrical representations.

Example 1. Anything That Is Anything

\(\mathbf{1},\!\mathbf{1} ~=~ \mathbf{1}\)
"anything that is anything" = "anything"
B   C   D   E   I   J   O
+   +   +   +   +   +   +  1
|   |   |   |   |   |   |
|   |   |   |   |   |   |  1,
|   |   |   |   |   |   |
o   o   o   o   o   o   o  =

+   +   +   +   +   +   +  1
B   C   D   E   I   J   O
\( \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)

Example 2. Anything That Is Man

\(\mathbf{1},\!\mathrm{m} ~=~ \mathrm{m}\)
"anything that is man" = "man"
B   C   D   E   I   J   O
o   +   o   o   +   +   +  m
|   |   |   |   |   |   |
|   |   |   |   |   |   |  1,
|   |   |   |   |   |   |
o   o   o   o   o   o   o  =

o   +   o   o   +   +   +  m
B   C   D   E   I   J   O
\( \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)

Example 3. Man That Is Anything

\(\mathrm{m},\!\mathbf{1} ~=~ \mathrm{m}\)
"man that is anything" = "man"
B   C   D   E   I   J   O
+   +   +   +   +   +   +  1
    |           |   |   |
    |           |   |   |  m,
    |           |   |   |
o   o   o   o   o   o   o  =

o   +   o   o   +   +   +  m
B   C   D   E   I   J   O
\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)

Example 4. Man That Is Noble

\(\mathrm{m},\!\mathrm{n} ~=~ \text{man that is noble}\)
B   C   D   E   I   J   O
o   +   +   o   o   o   +  n
    |           |   |   |
    |           |   |   |  m,
    |           |   |   |
o   o   o   o   o   o   o  =

o   +   o   o   o   o   +  m,n
B   C   D   E   I   J   O
\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \)

Example 5. Noble That Is Man

\(\mathrm{n},\!\mathrm{m} ~=~ \text{noble that is man}\)
B   C   D   E   I   J   O
o   +   o   o   +   +   +  m
    |   |               |
    |   |               |  n,
    |   |               |
o   o   o   o   o   o   o  =

o   +   o   o   o   o   +  n,m
B   C   D   E   I   J   O
\( \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \)
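Examples 4 and 5 exhibit the same mask-and-select pattern: multiplying by a diagonal matrix simply masks the column vector of the multiplicand, so the two orders of multiplication agree. A compact Python check, where the helper function is my own:

```python
X = ["B", "C", "D", "E", "I", "J", "O"]
m = {"C", "I", "J", "O"}   # man
n = {"C", "D", "O"}        # noble

def apply_comma(s, t):
    """Multiply diag(s) into the column vector of t: componentwise AND."""
    return [int(x in s and x in t) for x in X]

# Both orders yield the vector for C +, O, matching Examples 4 and 5.
assert apply_comma(m, n) == apply_comma(n, m) == [0, 1, 0, 0, 0, 0, 1]
```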

Commentary Note 9.7

From this point forward we may think of idempotents, selectives, and zero-one diagonal matrices as being roughly equivalent notions. The only reason that I say "roughly" is that we are comparing ideas at different levels of abstraction when we propose these connections.

We have covered the way that Peirce uses his invention of the comma modifier to assimilate boolean multiplication, logical conjunction, or what we may think of as "serial selection", to his more general account of relative multiplication.

The comma functor, however, applies to relative terms of any arity, not just the zeroth arity of absolute terms, so there will be a lot more to explore on this point. For now I must return to the anchorage of Peirce's text, hoping to get a chance to revisit this topic later.

Selection 10

The Signs for Multiplication (cont.)

The sum \(\mathit{x} + \mathit{x}\) generally denotes no logical term. But \(\mathit{x}, + \mathit{x},\) may be considered as denoting some two \(\mathit{x}\)'s.

It is natural to write:

\(\mathit{x} + \mathit{x} ~=~ \mathbf{2}.\mathit{x}\)

and

\(\mathit{x}, + \mathit{x}, ~=~ \mathbf{2}.\mathit{x},\)

where the dot shows that this multiplication is invertible.

We may also use the antique figures so that:

\(\mathbf{2}.\mathit{x}, ~=~ \mathfrak{2}\mathit{x}\)

just as

\(\mathbf{1} ~=~ \mathfrak{1}.\)

Then \(\mathfrak{2}\) alone will denote some two things.

But this multiplication is not in general commutative, and only becomes so when it affects a relative which imparts a relation such that a thing only bears it to one thing, and one thing alone bears it to a thing.

For instance, the lovers of two women are not the same as two lovers of women, that is:

\(\mathit{l}\mathfrak{2}.\mathrm{w}\)

and

\(\mathfrak{2}.\mathit{l}\mathrm{w}\)

are unequal;

but the husbands of two women are the same as two husbands of women, that is:

\(\mathit{h}\mathfrak{2}.\mathrm{w} ~=~ \mathfrak{2}.\mathit{h}\mathrm{w}\)

and in general:

\(\mathit{x},\!\mathfrak{2}.\mathit{y} ~=~ \mathfrak{2}.\mathit{x},\!\mathit{y}.\)

(Peirce, CP 3.75).

Commentary Note 10.1

What Peirce is attempting to do in CP 3.75 is absolutely amazing, and I personally did not see anything on a par with it again until I began to study the application of mathematical category theory to computation and logic, back in the mid-1980s. To evaluate the success of this attempt completely, we would have to return to Peirce's earlier paper "Upon the Logic of Mathematics" (1867) to pick up some of the ideas about arithmetic that he set out there.

Another branch of the investigation would require that we examine more carefully the entire syntactic mechanics of "subjacent signs" that Peirce uses to establish linkages among relational domains. It is important to note that these types of indices constitute a diacritical, interpretive, syntactic category under which Peirce also places the comma functor.

The way that I would currently approach both of these branches of the investigation would be to open up a wider context for the study of relational compositions, attempting to get at the essence of what is going on when we relate relations, possibly complex, to other relations, possibly simple.

But that will take another cup of java \((\mathit{c}\mathrm{j})\) — or maybe two, \(\mathfrak{2}\mathit{c}\mathrm{j} = (\mathbf{2}.\mathit{c},)\mathrm{j}\) …

Commentary Note 10.2

To say that a relative term "imparts a relation" is to say that it conveys information about the space of tuples in a cartesian product, that is, it determines a particular subset of that space.

When we study the combinations of relative terms, from the most elementary forms of composition to the most complex patterns of correlation, we are considering the ways that these constraints, determinations, and informations, as imparted by relative terms, can be compounded in the formation of syntax.

Let us go back and look more carefully at just how it happens that Peirce's jacent terms and subjacent indices manage to impart their respective measures of information about relations.

I will begin with the two examples illustrated in Figures 1 and 2, where I have drawn in the corresponding lines of identity between the subjacent marks of reference #, $, %.

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l'__#       #'s'__$   $w                |
|             o       o     o   o                 |
|              \     /       \ /                  |
|               \   /         o                   |
|                \ /          $                   |
|                 o                               |
|                 #                               |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 1.  Lover of a Servant of a Woman
o-------------------------------------------------o
|                                                 |
|                                                 |
|        `g`__#__$    #'l'__%   %w   $h           |
|             o  o    o     o   o    o            |
|              \  \  /       \ /    /             |
|               \  \/         o    /              |
|                \ /\         %   /               |
|                 o  ------o------                |
|                 #        $                      |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 2.  Giver of a Horse to a Lover of a Woman

One way to approach the problem of "information fusion" in Peirce's syntax is to soften the distinction between jacent terms and subjacent signs, and to treat the types of constraints that they separately signify more on a par with each other.

To that purpose, I will set forth a way of thinking about relational composition that emphasizes the set-theoretic constraints involved in the construction of a composite.

For example, suppose that we are given the relations \(L \subseteq X \times Y\) and \(M \subseteq Y \times Z.\) Table 3 and Figure 4 present a couple of ways of picturing the constraints that are involved in constructing the relational composition \(L \circ M \subseteq X \times Z.\)

Table 3.  Relational Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    L    #    X    |    Y    |         |
o---------o---------o---------o---------o
|    M    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  L o M  #    X    |         |    Z    |
o---------o---------o---------o---------o

The way to read Table 3 is to imagine that you are playing a game that involves placing tokens on the squares of a board that is marked in just this way. The rules are that you have to place a single token on each marked square in the middle of the board in such a way that all of the indicated constraints are satisfied. That is to say, you have to place a token whose denomination is a value in the set X on each of the squares marked "X", and similarly for the squares marked "Y" and "Z", meanwhile leaving all of the blank squares empty.

Furthermore, the tokens placed in each row and column have to obey the relational constraints that are indicated at the heads of the corresponding row and column. Thus, the two tokens from X have to denominate the very same value from X, and likewise for Y and Z, while the pairs of tokens on the rows marked "L" and "M" are required to denote elements that are in the relations L and M, respectively.

The upshot is that when just this much is done, that is, when the L, M, and !1! relations are satisfied, then the row marked "L o M" will automatically bear the tokens of a pair of elements in the composite relation L o M.
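The token game can be stated directly as a set comprehension: a pair (x, z) enters L o M exactly when some middle value y can legally occupy both the L row and the M row at once. A Python sketch, with toy relations of my own rather than data from the text:

```python
def compose(L, M):
    """Relational composition: the pairs (x, z) such that some y
    satisfies (x, y) in L and (y, z) in M simultaneously."""
    return {(x, z) for (x, y1) in L for (y2, z) in M if y1 == y2}

# Toy relations, not drawn from the text: the middle value "a"
# is the only token that can sit consistently on both rows.
L = {(1, "a"), (2, "b")}
M = {("a", "left"), ("c", "right")}

assert compose(L, M) == {(1, "left")}
```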

Figure 4 shows a different way of viewing the same situation.

o-------------------------------------------------o
|                                                 |
|                L     L o M     M                |
|                @       @       @                |
|               / \     / \     / \               |
|              o   o   o   o   o   o              |
|              X   Y   X   Z   Y   Z              |
|              o   o   o   o   o   o              |
|               \   \ /     \ /   /               |
|                \   /       \   /                |
|                 \ / \__ __/ \ /                 |
|                  @     @     @                  |
|                 !1!   !1!   !1!                 |
|                                                 |
o-------------------------------------------------o
Figure 4.  Relational Composition

Commentary Note 10.3

I will devote some time to drawing out the relationships that exist among the different pictures of relations and relative terms that were shown above, or as redrawn here:

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l'__$       $'s'__%   %w                |
|             o       o     o   o                 |
|              \     /       \ /                  |
|               \   /         o                   |
|                \ /          %                   |
|                 o                               |
|                 $                               |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 1.  Lover of a Servant of a Woman
o-------------------------------------------------o
|                                                 |
|                                                 |
|        `g`__$__%    $'l'__*   *w   %h           |
|             o  o    o     o   o    o            |
|              \  \  /       \ /    /             |
|               \  \/         o    /              |
|                \ /\         *   /               |
|                 o  ------o------                |
|                 $        %                      |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 2.  Giver of a Horse to a Lover of a Woman
Table 3.  Relational Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    L    #    X    |    Y    |         |
o---------o---------o---------o---------o
|    S    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  L o S  #    X    |         |    Z    |
o---------o---------o---------o---------o
o-------------------------------------------------o
|                                                 |
|                L     L o S     S                |
|                @       @       @                |
|               / \     / \     / \               |
|              o   o   o   o   o   o              |
|              X   Y   X   Z   Y   Z              |
|              o   o   o   o   o   o              |
|               \   \ /     \ /   /               |
|                \   /       \   /                |
|                 \ / \__ __/ \ /                 |
|                  @     @     @                  |
|                 !1!   !1!   !1!                 |
|                                                 |
o-------------------------------------------------o
Figure 4.  Relational Composition

Figures 1 and 2 exhibit examples of relative multiplication in one of Peirce's styles of syntax, to which I subtended lines of identity to mark the anaphora of the correlates. These pictures are adapted to showing the anatomy of the relative terms, while the forms of analysis illustrated in Table 3 and Figure 4 are designed to highlight the structures of the objective relations themselves.

There are many ways that Peirce might have gotten from his 1870 Notation for the Logic of Relatives to his more evolved systems of Logical Graphs. For my part, I find it interesting to speculate on how the metamorphosis might have been accomplished by way of transformations that act on these nascent forms of syntax and that take place not too far from the pale of its means, that is, as nearly as possible according to the rules and the permissions of the initial system itself.

In Existential Graphs, a relation is represented by a node whose degree is the adicity of that relation, and which is adjacent via lines of identity to the nodes that represent its correlative relations, including as a special case any of its terminal individual arguments.

In the 1870 Logic of Relatives, implicit lines of identity are invoked by the subjacent numbers and marks of reference only when a correlate of some relation is the relate of some relation. Thus, the principal relate, which is not a correlate of any explicit relation, is not singled out in this way.

Remarkably enough, the comma modifier itself provides us with a mechanism to abstract the logic of relations from the logic of relatives, and thus to forge a possible link between the syntax of relative terms and the more graphical depiction of the objective relations themselves.

Figure 5 demonstrates this possibility, posing a transitional case between the style of syntax in Figure 1 and the picture of composition in Figure 4.

o-----------------------------------------------------------o
|                                                           |
|                           L o S                           |
|                 ____________@____________                 |
|                /                         \                |
|               /      L             S      \               |
|              /       @             @       \              |
|             /       / \           / \       \             |
|            /       /   \         /   \       \            |
|           o       o     o       o     o       o           |
|           X       X     Y       Y     Z       Z           |
|       1,__#       #'l'__$       $'s'__%       %1          |
|           o       o     o       o     o       o           |
|            \     /       \     /       \     /            |
|             \   /         \   /         \   /             |
|              \ /           \ /           \ /              |
|               @             @             @               |
|              !1!           !1!           !1!              |
|                                                           |
o-----------------------------------------------------------o
Figure 5.  Anything that is a Lover of a Servant of Anything

In this composite sketch, the diagonal extension of the universe 1 is invoked up front to anchor an explicit line of identity for the leading relate of the composition, while the terminal argument w has been generalized to the whole universe 1, in effect, executing an act of abstraction. This type of universal bracketing isolates the composing of the relations L and S to form the composite L o S. The three relational domains X, Y, Z may be distinguished from one another, or else rolled up into a single universe of discourse, as one prefers.
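Part of what Figure 5 asserts can be checked in set terms: prefixing the diagonal extension 1, as an anchor, and running the terminal correlate over the whole universe, leaves exactly the composite L o S. A minimal sketch with invented extensions for L and S follows; the particular elements are assumptions for illustration.

```python
# A small made-up universe and two made-up 2-adic relations.
universe = {"a", "b", "c"}
L = {("a", "b")}   # lover pairs
S = {("b", "c")}   # servant pairs

# The diagonal extension 1, of the universe: x is identical to x.
ones = {(x, x) for x in universe}

# Chain 1, through L through S, matching on the lines of identity.
chain = {(w, z)
         for (w, x) in ones
         for (xx, y) in L if x == xx
         for (yy, z) in S if y == yy}

# Plain composition L o S for comparison.
compose = {(x, z) for (x, y) in L for (yy, z) in S if y == yy}
print(chain == compose)  # True
```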

Commentary Note 10.4

From now on I will use the forms of analysis exemplified in the last set of Figures and Tables as a routine bridge between the logic of relative terms and the logic of their extended relations. For future reference, we may think of Table 3 as illustrating the "solitaire" or "spreadsheet" model of relational composition, while Figure 4 may be thought of as making a start toward the "hyper(di)graph" model of generalized compositions. I will explain the hypergraph model in some detail at a later point. The transitional form of analysis represented by Figure 5 may be called the "universal bracketing" of relatives as relations.

Commentary Note 10.5

We have sufficiently covered the application of the comma functor, or the diagonal extension, to absolute terms, so let us return to where we were in working our way through CP 3.73, and see whether we can validate Peirce's statements about the "commifications" of 2-adic relative terms that yield their 3-adic diagonal extensions.

But not only may any absolute term be thus regarded as a relative term, but any relative term may in the same way be regarded as a relative with one correlate more. It is convenient to take this additional correlate as the first one.

Then:

'l','s'w

will denote a lover of a woman that is a servant of that woman.

The comma here after 'l' should not be considered as altering at all the meaning of 'l', but as only a subjacent sign, serving to alter the arrangement of the correlates. (Peirce, CP 3.73).

Just to plant our feet on a more solid stage, let's apply this idea to the Othello example.

For this performance only, just to make the example more interesting, let us assume that Jeste (J) is secretly in love with Desdemona (D).

Then we begin with the modified data set:

w = "woman" = B +, D +, E
'l' = "lover of ---" = B:C +, C:B +, D:O +, E:I +, I:E +, J:D +, O:D
's' = "servant of ---" = C:O +, E:D +, I:O +, J:D +, J:O

And next we derive the following results:

'l', = "lover that is --- of ---"
  = B:B:C +, C:C:B +, D:D:O +, E:E:I +, I:I:E +, J:J:D +, O:O:D
'l','s'w = (B:B:C +, C:C:B +, D:D:O +, E:E:I +, I:I:E +, J:J:D +, O:O:D)
    × (C:O +, E:D +, I:O +, J:D +, J:O)
    × (B +, D +, E)

Now what are we to make of that?

If we operate in accordance with Peirce's example of `g`'o'h as the "giver of a horse to an owner of that horse", then we may assume that the associative law and the distributive law are by default in force, allowing us to derive this equation:

'l','s'w = 'l','s'(B +, D +, E)
  = 'l','s'B +, 'l','s'D +, 'l','s'E

Evidently what Peirce means by the associative principle, as it applies to this type of product, is that a product of elementary relatives having the form (R:S:T)(S:T)(T) is equal to R but that no other form of product yields a non-null result. Scanning the implied terms of the triple product tells us that only the following case is non-null: J = (J:J:D)(J:D)(D). It follows that:

'l','s'w = "lover and servant of a woman"
  = "lover that is a servant of a woman"
  = "lover of a woman that is a servant of that woman"
  = J

And so what Peirce says makes sense in this case.
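The same computation can be run mechanically on the modified data set above, encoding the relative terms as sets of tuples (the set encoding is my own gloss on Peirce's aggregates of elementary relatives). By the triple-product rule just stated, a term (x:x:y)(x:y)(y) survives only when x loves y, x serves y, and y is a woman.

```python
# The modified Othello data, as sets of pairs and a set of individuals.
lovers   = {("B","C"), ("C","B"), ("D","O"), ("E","I"),
            ("I","E"), ("J","D"), ("O","D")}
servants = {("C","O"), ("E","D"), ("I","O"), ("J","D"), ("J","O")}
women    = {"B", "D", "E"}

# 'l','s'w: the comma makes the first two correlates of 'l', coincide,
# so x must love y, serve y, and y must be a woman.
lover_servant_of_woman = {x for (x, y) in lovers
                          if (x, y) in servants and y in women}
print(lover_servant_of_woman)  # {'J'}
```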

Commentary Note 10.6

As Peirce observes, it is not possible to work with relations in general without eventually abandoning all of one's algebraic principles, in due time the associative and maybe even the distributive, just as we have already left behind the commutative. It cannot be helped, since we cannot reflect on a law except from a perspective outside it, that is to say, at least virtually so.

One way to do this would be from the standpoint of the combinator calculus, and there are places where Peirce verges on systems that are very similar, but I am making a deliberate effort to remain here as close as possible within the syntactoplastic chronism of his 1870 Logic of Relatives. So let us make use of the smoother transitions that are afforded by the paradigmatic Figures and Tables that I drew up earlier.

For the next few episodes, then, I will examine the examples that Peirce gives at the next level of complication in the multiplication of relative terms, for instance, the three that I have redrawn below.

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'l'__*   *w   %h          |
|              o  o    o     o   o    o           |
|               \  \  /       \ /    /            |
|                \  \/         @    /             |
|                 \ /\______ ______/              |
|                  @        @                     |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 6.  Giver of a Horse to a Lover of a Woman
o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'o'__*   *%h              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 7.  Giver of a Horse to an Owner of It
o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l',__$__%    $'s'__*   *%w              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 8.  Lover that is a Servant of a Woman

Commentary Note 10.7

Here is what I get when I try to analyze Peirce's "giver of a horse to a lover of a woman" example along the same lines as the 2-adic compositions.

We may begin with the mark-up shown in Figure 6.

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'l'__*   *w   %h          |
|              o  o    o     o   o    o           |
|               \  \  /       \ /    /            |
|                \  \/         @    /             |
|                 \ /\______ ______/              |
|                  @        @                     |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 6.  Giver of a Horse to a Lover of a Woman

If we analyze this in accord with the "spreadsheet" model of relational composition, the core of it is a particular way of composing a 3-adic "giving" relation G ⊆ T × U × V with a 2-adic "loving" relation L ⊆ U × W so as to obtain a specialized sort of 3-adic relation (G o L) ⊆ T × W × V. The applicable constraints on tuples are shown in Table 9.

Table 9.  Composite of Triadic and Dyadic Relations
o---------o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o=========o
|    G    #    T    |    U    |         |    V    |
o---------o---------o---------o---------o---------o
|    L    #         |    U    |    W    |         |
o---------o---------o---------o---------o---------o
|  G o L  #    T    |         |    W    |    V    |
o---------o---------o---------o---------o---------o
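The Table 9 pattern can be sketched in a few lines: a triadic G ⊆ T × U × V composed with a dyadic L ⊆ U × W by matching the middle correlate, leaving a triadic (G o L) ⊆ T × W × V. The sample triples are invented for illustration.

```python
# Made-up extensions for the triadic "giving" and dyadic "loving" relations.
G = {("t1", "u1", "v1"), ("t2", "u2", "v1")}   # triples (t, u, v)
L = {("u1", "w1"), ("u1", "w2")}               # pairs (u, w)

# Compose on the shared middle place u, keeping t, w, v.
G_o_L = {(t, w, v) for (t, u, v) in G for (uu, w) in L if u == uu}
print(sorted(G_o_L))  # [('t1', 'w1', 'v1'), ('t1', 'w2', 'v1')]
```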

The hypergraph picture of the abstract composition is given in Figure 10.

o---------------------------------------------------------------------o
|                                                                     |
|                                G o L                                |
|                       ___________@___________                       |
|                      /                  \    \                      |
|                     /  G              L  \    \                     |
|                    /   @              @   \    \                    |
|                   /   /|\            / \   \    \                   |
|                  /   / | \          /   \   \    \                  |
|                 /   /  |  \        /     \   \    \                 |
|                /   /   |   \      /       \   \    \                |
|               o   o    o    o    o         o   o    o               |
|               T   T    U    V    U         W   W    V               |
|            1,_#   #`g`_$____%    $'l'______*   *1   %1              |
|               o   o    o    o    o         o   o    o               |
|                \ /      \    \  /           \ /    /                |
|                 @        \    \/             @    /                 |
|                !1!        \   /\            !1!  /                  |
|                            \ /  \_______ _______/                   |
|                             @           @                           |
|                            !1!         !1!                          |
|                                                                     |
o---------------------------------------------------------------------o
Figure 10.  Anything that is a Giver of Anything to a Lover of Anything

Commentary Note 10.8

In taking up the next example of relational composition, let's exchange the relation 't' = "trainer of ---" for Peirce's relation 'o' = "owner of ---", simply for the sake of avoiding conflicts in the symbols that we use. In this way, Figure 7 is transformed into Figure 11.

o-------------------------------------------------o
|                                                 |
|                                                 |
|         `g`__$__%    $'t'__*   *%h              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 11.  Giver of a Horse to a Trainer of It

Now here's an interesting point, in fact, a critical transition point, that we see resting in potential but a stone's throw removed from the chronism, the secular neighborhood, the temporal vicinity of Peirce's 1870 LOR, and it's a vertex that turns on the teridentity relation.

The hypergraph picture of the abstract composition is given in Figure 12.

o---------------------------------------------------------------------o
|                                                                     |
|                                G o T                                |
|                 _________________@_________________                 |
|                /                                   \                |
|               /        G              T             \               |
|              /         @              @              \              |
|             /         /|\            / \              \             |
|            /         / | \          /   \              \            |
|           /         /  |  \        /     \              \           |
|          /         /   |   \      /       \              \          |
|         o         o    o    o    o         o              o         |
|         X         X    Y    Z    Y         Z              Z         |
|      1,_#         #`g`_$____%    $'t'______%              %1        |
|         o         o    o    o    o         o              o         |
|          \       /      \    \  /          |             /          |
|           \     /        \    \/           |            /           |
|            \   /          \   /\           |           /            |
|             \ /            \ /  \__________|__________/             |
|              @              @              @                        |
|             !1!            !1!            !1!                       |
|                                                                     |
o---------------------------------------------------------------------o
Figure 12.  Anything that is a Giver of Anything to a Trainer of It

If we analyze this in accord with the "spreadsheet" model of relational composition, the core of it is a particular way of composing a 3-adic "giving" relation G ⊆ X × Y × Z with a 2-adic "training" relation T ⊆ Y × Z in such a way as to determine a certain 2-adic relation (G o T) ⊆ X × Z. Table 13 schematizes the associated constraints on tuples.

Table 13.  Another Brand of Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    G    #    X    |    Y    |    Z    |
o---------o---------o---------o---------o
|    T    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  G o T  #    X    |         |    Z    |
o---------o---------o---------o---------o

So we see that the notorious teridentity relation, which I have left equivocally denoted by the same symbol as the identity relation !1!, is already implicit in Peirce's discussion at this point.
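In set terms, what makes this composition different from the last one is that the z column is shared three ways, between G's third place, T's second place, and the output, which is exactly where the teridentity enters. A minimal sketch with invented extensions:

```python
# Made-up extensions: G is triadic on X x Y x Z, T is dyadic on Y x Z.
G = {("x1", "y1", "z1"), ("x2", "y1", "z2")}
T = {("y1", "z1")}

# The composite keeps only (x, z) such that (x, y, z) is in G and
# (y, z) is in T for the SAME y and the SAME z: z occurs in G, in T,
# and in the result, which is the three-way identity at work.
G_o_T = {(x, z) for (x, y, z) in G if (y, z) in T}
print(G_o_T)  # {('x1', 'z1')}
```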

Commentary Note 10.9

The use of the concepts of identity and teridentity is not to identify a thing in itself with itself, much less twice or thrice over, since there is no need and thus no utility in that. I can imagine Peirce asking, on Kantian principles if not entirely on Kantian premisses, "Where is the manifold to be unified?" The manifold that demands unification does not reside in the object but in the phenomena, that is, in the appearances that might have been appearances of different objects but that happen to be constrained by these identities to being just so many aspects, facets, parts, roles, or signs of one and the same object.

For example, notice how the various identity concepts actually functioned in the last example, where they had the opportunity to show their behavior in something like their natural habitat.

The use of the teridentity concept in the case of the "giver of a horse to a trainer of it" is to stipulate that the thing appearing with respect to its quality under the aspect of an absolute term, a horse, and the thing appearing with respect to its recalcitrance in the role of the correlate of a 2-adic relative, a brute to be trained, and the thing appearing with respect to its synthesis in the role of a correlate of a 3-adic relative, a gift, are one and the same thing.

Commentary Note 10.10

Figure 8 depicts the last of the three examples involving the composition of 3-adic relatives with 2-adic relatives:

o-------------------------------------------------o
|                                                 |
|                                                 |
|        'l',__$__%    $'s'__*   *%w              |
|              o  o    o     o   oo               |
|               \  \  /       \ //                |
|                \  \/         @/                 |
|                 \ /\____ ____/                  |
|                  @      @                       |
|                                                 |
|                                                 |
o-------------------------------------------------o
Figure 8.  Lover that is a Servant of a Woman

The hypergraph picture of the abstract composition is given in Figure 14.

o---------------------------------------------------------------------o
|                                                                     |
|                                L , S                                |
|                __________________@__________________                |
|               /                                     \               |
|              /        L,              S              \              |
|             /         @               @               \             |
|            /         /|\             / \               \            |
|           /         / | \           /   \               \           |
|          /         /  |  \         /     \               \          |
|         /         /   |   \       /       \               \         |
|        /         /    |    \     /         \               \        |
|       o         o     o     o   o           o               o       |
|       X         X     X     Y   X           Y               Y       |
|    1,_#         #'l',_$_____%   $'s'________%               %1      |
|       o         o     o     o   o           o               o       |
|        \       /       \     \ /            |              /        |
|         \     /         \     \             |             /         |
|          \   /           \   / \            |            /          |
|           \ /             \ /   \___________|___________/           |
|            @               @                @                       |
|           !1!             !1!              !1!                      |
|                                                                     |
o---------------------------------------------------------------------o
Figure 14.  Anything that's a Lover of Anything and that's a Servant of It

This example illustrates the way that Peirce analyzes the logical conjunction, we might even say the "parallel conjunction", of a couple of 2-adic relatives in terms of the comma extension and the same style of composition that we saw in the last example, that is, according to a pattern of anaphora that invokes the teridentity relation.

If we lay out this analysis of conjunction on the spreadsheet model of relational composition, the gist of it is the diagonal extension of a 2-adic "loving" relation L ⊆ X × Y to the corresponding 3-adic "loving and being" relation L, ⊆ X × X × Y, which is then composed in a specific way with a 2-adic "serving" relation S ⊆ X × Y, so as to determine the 2-adic relation L,S ⊆ X × Y. Table 15 schematizes the associated constraints on tuples.

Table 15.  Conjunction Via Composition
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    L,   #    X    |    X    |    Y    |
o---------o---------o---------o---------o
|    S    #         |    X    |    Y    |
o---------o---------o---------o---------o
|  L , S  #    X    |         |    Y    |
o---------o---------o---------o---------o
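The Table 15 pattern can also be checked directly: the diagonal extension L, = {(x, x, y)}, composed with S on the shared places, recovers plain conjunction, so L,S comes out as the intersection of L and S. The sample pairs are invented for illustration.

```python
# Made-up extensions for the "loving" and "serving" relations.
L = {("x1", "y1"), ("x2", "y1")}
S = {("x1", "y1"), ("x2", "y2")}

# Diagonal extension: "lover that is --- of ---" doubles the relate.
L_comma = {(x, x, y) for (x, y) in L}

# Compose L, with S, matching the middle place and the correlate.
L_comma_S = {(x, y) for (x, xx, y) in L_comma if (xx, y) in S}
print(L_comma_S == L & S)  # True
```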

Commentary Note 10.11

I return to where we were in unpacking the contents of CP 3.73. Peirce remarks that the comma operator can be iterated at will:

In point of fact, since a comma may be added in this way to any relative term, it may be added to one of these very relatives formed by a comma, and thus by the addition of two commas an absolute term becomes a relative of two correlates.

So:

m,,b,r

interpreted like

`g`'o'h

means a man that is a rich individual and is a black that is that rich individual.

But this has no other meaning than:

m,b,r

or a man that is a black that is rich.

Thus we see that, after one comma is added, the addition of another does not change the meaning at all, so that whatever has one comma after it must be regarded as having an infinite number. (Peirce, CP 3.73).

Again, let us check whether this makes sense on the stage of our small but dramatic model.

Let's say that Desdemona and Othello are rich, and, among the persons of the play, only they.

With this premiss we obtain a sample of absolute terms that is sufficiently ample to work through our example:

1 = B +, C +, D +, E +, I +, J +, O
b = O
m = C +, I +, J +, O
r = D +, O

One application of the comma operator yields the following 2-adic relatives:

1, = B:B +, C:C +, D:D +, E:E +, I:I +, J:J +, O:O
b, = O:O
m, = C:C +, I:I +, J:J +, O:O
r, = D:D +, O:O

Another application of the comma operator generates the following 3-adic relatives:

1,, = B:B:B +, C:C:C +, D:D:D +, E:E:E +, I:I:I +, J:J:J +, O:O:O
b,, = O:O:O
m,, = C:C:C +, I:I:I +, J:J:J +, O:O:O
r,, = D:D:D +, O:O:O

Assuming the associativity of multiplication among 2-adic relatives, we may compute the product m,b,r by a brute force method as follows:

m,b,r = (C:C +, I:I +, J:J +, O:O)(O:O)(D +, O)
  = (C:C +, I:I +, J:J +, O:O)(O)
  = O

This avers that a man that is a black that is rich is Othello, which is true on the premisses of our universe of discourse.

The stock associations of `g`'o'h lead us to multiply out the product m,,b,r along the following lines, where trinomials of the form (X:Y:Z)(Y:Z)(Z) are the only ones that produce a non-null result, namely, (X:Y:Z)(Y:Z)(Z) = X.

m,,b,r = (C:C:C +, I:I:I +, J:J:J +, O:O:O)(O:O)(D +, O)
  = (O:O:O)(O:O)(O)
  = O

So we have that m,,b,r = m,b,r.
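Since a comma'd absolute term acts as a diagonal relative, applying it to a class just intersects that class with the term's own extension, so m,b,r reduces to a chain of intersections, and the idempotence of intersection is why the second comma adds nothing. A check on the sample data above (the set encoding is my own):

```python
# Absolute terms from the sample universe, as sets of initials.
m = {"C", "I", "J", "O"}   # men
b = {"O"}                  # blacks
r = {"D", "O"}             # the rich

# m,b,r: each comma'd term filters the class to its right, so the
# whole product is the intersection of the three extensions.
m_b_r = m & b & r
print(m_b_r)  # {'O'}
```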

In closing, observe that the teridentity relation has turned up again in this context, as the second comma-ing of the universal term itself:

1,, = B:B:B +, C:C:C +, D:D:D +, E:E:E +, I:I:I +, J:J:J +, O:O:O.

Selection 11

The Signs for Multiplication (concl.)

The conception of multiplication we have adopted is that of the application of one relation to another. So, a quaternion being the relation of one vector to another, the multiplication of quaternions is the application of one such relation to a second.

Even ordinary numerical multiplication involves the same idea, for 2 × 3 is a pair of triplets, and 3 × 2 is a triplet of pairs, where "triplet of" and "pair of" are evidently relatives.

If we have an equation of the form:

xy = z

and there are just as many x's per y as there are, per things, things of the universe, then we have also the arithmetical equation:

[x][y] = [z].

For instance, if our universe is perfect men, and there are as many teeth to a Frenchman (perfect understood) as there are to any one of the universe, then:

['t'][f] = ['t'f]

holds arithmetically.

So if men are just as apt to be black as things in general:

[m,][b] = [m,b]

where the difference between [m] and [m,] must not be overlooked.

It is to be observed that:

[!1!] = `1`.

Boole was the first to show this connection between logic and probabilities. He was restricted, however, to absolute terms. I do not remember having seen any extension of probability to relatives, except the ordinary theory of expectation.

Our logical multiplication, then, satisfies the essential conditions of multiplication, has a unity, has a conception similar to that of admitted multiplications, and contains numerical multiplication as a case under it. (Peirce, CP 3.76).

Commentary Note 11.1

We have reached a suitable place in our reading of Peirce's text to pause, or, more accurately, to run as fast as we can along a parallel track, where I can make due quietus of a few IOU's that I've used to pave my way.

The more pressing debts that come to mind are concerned with the matter of Peirce's "number of" function, that maps a term t into a number [t], and with my justification for calling a certain style of illustration by the name of the "hypergraph" picture of relational composition. As it happens, there is a thematic relation between these topics, and so I can make my way forward by addressing them together.

At this point we have two good pictures of how to compute the relational compositions of arbitrary 2-adic relations, namely, the bigraph and the matrix representations, each of which has its differential advantages in different types of situations.
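The matrix picture mentioned here can be sketched in a few lines: encode each 2-adic relation as a 0-1 matrix and compose by matrix multiplication with Boolean "or" in place of addition and "and" in place of multiplication. The two small matrices are invented for illustration.

```python
# Two made-up 2-adic relations on indices 0..2, as 0-1 matrices.
A = [[1, 0, 0],
     [0, 1, 1],
     [0, 0, 0]]
B = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

n = 3
# Boolean matrix product: AB[i][j] = 1 iff some k has A[i][k] and B[k][j].
AB = [[int(any(A[i][k] and B[k][j] for k in range(n)))
       for j in range(n)]
      for i in range(n)]
print(AB)  # [[0, 1, 0], [1, 0, 1], [0, 0, 0]]
```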

But we do not have a comparable picture of how to compute the richer variety of relational compositions that involve 3-adic or any higher adicity relations. As a matter of fact, we run into a non-trivial classification problem simply to enumerate the different types of compositions that arise in these cases.

Therefore, let us inaugurate a systematic study of relational composition, general enough to explicate the "generative potency" of Peirce's 1870 LOR.

Commentary Note 11.2

Let's bring together the various things that Peirce has said about the "number of function" up to this point in the paper.

NOF 1

I propose to assign to all logical terms, numbers; to an absolute term, the number of individuals it denotes; to a relative term, the average number of things so related to one individual.

Thus in a universe of perfect men (men), the number of "tooth of" would be 32.

The number of a relative with two correlates would be the average number of things so related to a pair of individuals; and so on for relatives of higher numbers of correlates.

I propose to denote the number of a logical term by enclosing the term in square brackets, thus [t]. (Peirce, CP 3.65).

NOF 2

But not only do the significations of '=' and '<' here adopted fulfill all absolute requirements, but they have the supererogatory virtue of being very nearly the same as the common significations. Equality is, in fact, nothing but the identity of two numbers; numbers that are equal are those which are predicable of the same collections, just as terms that are identical are those which are predicable of the same classes. So, to write 5 < 7 is to say that 5 is part of 7, just as to write f < m is to say that Frenchmen are part of men. Indeed, if f < m, then the number of Frenchmen is less than the number of men, and if v = p, then the number of Vice-Presidents is equal to the number of Presidents of the Senate; so that the numbers may always be substituted for the terms themselves, in case no signs of operation occur in the equations or inequalities. (Peirce, CP 3.66).

NOF 3

It is plain that both the regular non-invertible addition and the invertible addition satisfy the absolute conditions. But the notation has other recommendations. The conception of taking together involved in these processes is strongly analogous to that of summation, the sum of 2 and 5, for example, being the number of a collection which consists of a collection of two and a collection of five. Any logical equation or inequality in which no operation but addition is involved may be converted into a numerical equation or inequality by substituting the numbers of the several terms for the terms themselves — provided all the terms summed are mutually exclusive.

Addition being taken in this sense, nothing is to be denoted by 'zero', for then:

x +, 0 = x

whatever is denoted by x; and this is the definition of zero. This interpretation is given by Boole, and is very neat, on account of the resemblance between the ordinary conception of zero and that of nothing, and because we shall thus have

[0] = 0.

(Peirce, CP 3.67).

NOF 4

The conception of multiplication we have adopted is that of the application of one relation to another. …

Even ordinary numerical multiplication involves the same idea, for 2 x 3 is a pair of triplets, and 3 x 2 is a triplet of pairs, where "triplet of" and "pair of" are evidently relatives.

If we have an equation of the form:

xy = z

and there are just as many x’s per y as there are, per things, things of the universe, then we have also the arithmetical equation:

[x][y] = [z].

For instance, if our universe is perfect men, and there are as many teeth to a Frenchman (perfect understood) as there are to any one of the universe, then:

[t][f] = [tf]

holds arithmetically.

So if men are just as apt to be black as things in general:

[m,][b] = [m,b]

where the difference between [m] and [m,] must not be overlooked.

It is to be observed that:

[!1!] = `1`.

Boole was the first to show this connection between logic and probabilities. He was restricted, however, to absolute terms. I do not remember having seen any extension of probability to relatives, except the ordinary theory of expectation.

Our logical multiplication, then, satisfies the essential conditions of multiplication, has a unity, has a conception similar to that of admitted multiplications, and contains numerical multiplication as a case under it. (Peirce, CP 3.76).

Before I can discuss Peirce's "number of" function in greater detail I will need to deal with an expositional difficulty that I have been very carefully dancing around all this time, but that will no longer abide its assigned place under the rug.

Functions have long been understood, from well before Peirce's time to ours, as special cases of 2-adic relations, so the "number of" function itself is already to be numbered among the types of 2-adic relatives that we've been explicitly mentioning and implicitly using all this time. But Peirce's way of talking about a 2-adic relative term is to list the "relate" first and the "correlate" second. That convention goes over into functional terms as writing the functional value first and the functional antecedent second, whereas almost anyone brought up in our present time frame has difficulty thinking of a function any other way than as a set of ordered pairs where the order in each pair lists the functional argument, or domain element, first and the functional value, or codomain element, second.

It is possible to work all this out in a very nice way within a very general context of flexible conventions, but not without introducing an order of anachronisms into Peirce's presentation that I am presently trying to avoid as much as possible. Thus, I will need to experiment with various sorts of compromise formations.

Commentary Note 11.3

Having spent a fair amount of time in earnest reflection on the issue, I cannot see a way to continue my interpretation of Peirce's 1870 LOR, to master the distance between his conventions of presentation and my present personal perspectives on relations, without introducing a few interpretive anachronisms and other artifacts in the process, and the only excuse that I can make for myself is that at least these will be novel sorts of anachronisms and artifacts in comparison with the ones that the reeder may alreedy have seen. A poor excuse, but all I have. The least that I can do, then, and I'm something of an expert on that, is to exposit my personal interpretive apparatus on a separate thread, where it will not distract too much from the intellectual canon, that is to opine, the "thinking panpipe" that we find in Peirce's 1870 LOR.

Ripped from the pages of my dissertation, then, I will lay out some samples of background material on "Relations In General", as spied from a combinatorial point of view, that I hope will serve in reeding Peirce's text, if we draw on it judiciously.

Commentary Note 11.4

The task before us now is to get very clear about the relationships among relative terms, relations, and the special cases of relations that are constituted by equivalence relations, functions, and so on.

I am optimistic that some of the tethering material that I spun along the "Relations In General" (RIG) thread will help us to track the equivalential and functional properties of special relations in a way that will not weigh too heavily on the rather capricious lineal embedding of syntax in 1-dimensional strings on 2-dimensional pages. But I cannot see far enough ahead to foresee all the consequences of trying this tack, and so I cannot help but be a bit experimental.

The first obstacle to get past is the order convention that Peirce's orientation to relative terms causes him to use for functions. By way of making our discussion concrete, and directing our attentions to an immediate object example, let us say that we desire to represent the "number of" function, that Peirce denotes by means of square brackets, by means of a 2-adic relative term, say 'v', where 'v'(t) = [t] = the number of the term t.

To set the 2-adic relative term 'v' within a suitable context of interpretation, let us suppose that 'v' corresponds to a relation V ⊆ R × S, where R is the set of real numbers and S is a suitable syntactic domain, here described as "terms". Then the 2-adic relation V is evidently a function from S to R. We might think to use the plain letter "v" to denote this function, as v : S → R, but I worry this may be a chaos waiting to happen. Also, I think that we should anticipate the very great likelihood that we cannot always assign numbers to every term in whatever syntactic domain S that we choose, so it is probably better to account the 2-adic relation V as a partial function from S to R. All things considered, then, let me try out the following impedimentaria of strategies and compromises.

First, I will adapt the functional arrow notation so that it allows us to detach the functional orientation from the order in which the names of domains are written on the page. Second, I will need to change the notation for "pre-functions", or "partial functions", from one likely confound to a slightly less likely confound. This gives the scheme:

q : X → Y means that q is functional at X.
q : X ← Y means that q is functional at Y.
q : X ~> Y means that q is pre-functional at X.
q : X <~ Y means that q is pre-functional at Y.

For now, I will pretend that v is a function in R of S, v : R ← S, amounting to the functional alias of the 2-adic relation V ⊆ R × S, and associated with the 2-adic relative term v whose relate lies in the set R of real numbers and whose correlate lies in the set S of syntactic terms.
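By way of a toy illustration, and in frankly anachronistic dress, the following Python sketch treats v as a partial function from terms to numbers, listing the relate (a number) first in each pair, per Peirce's convention. Apart from Peirce's [t] = 32, the universe and the particular values here are my own inventions for the example.

```python
# A hypothetical miniature of the "number of" map v : R <- S.
# Pairs of V list the relate (a number) first and the
# correlate (a syntactic term) second.
V = {(32.0, "t"), (100.0, "m"), (0.0, "0")}   # values for "m" and "0" assumed

def v(term):
    """Partial-function alias of V: return [term] if defined, else None."""
    hits = [r for (r, s) in V if s == term]
    return hits[0] if hits else None

assert v("t") == 32.0    # [t] = 32 in the universe of perfect men
assert v("q") is None    # v need not be defined on every term in S
```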

Commentary Note 11.5

It always helps me to draw lots of pictures of stuff, so let's extract the somewhat overly compressed bits of the "Relations In General" thread that we'll need right away for the applications to Peirce's 1870 LOR, and draw what icons we can within the frame of ASCII.

For the immediate present, we may start with 2-adic relations and describe the customary species of relations and functions in terms of their local and numerical incidence properties.

Let P ⊆ X × Y be an arbitrary 2-adic relation. The following properties of P can be defined:

P is "total" at X iff P is (≥1)-regular at X.
P is "total" at Y iff P is (≥1)-regular at Y.
P is "tubular" at X iff P is (≤1)-regular at X.
P is "tubular" at Y iff P is (≤1)-regular at Y.

To illustrate these properties, let us fashion a "generic enough" example of a 2-adic relation, E ⊆ X × Y, where X = Y = {0, 1, …, 8, 9}, and where the bigraph picture of E looks like this:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |\ /|\   \   \  |   |\
      \ | / | \   \   \ |   | \         E
       \|/ \|  \   \   \|   |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

If we scan along the X dimension we see that the "Y incidence degrees" of the X nodes 0 through 9 are 0, 1, 2, 3, 1, 1, 1, 2, 0, 0, in order.

If we scan along the Y dimension we see that the "X incidence degrees" of the Y nodes 0 through 9 are 0, 0, 3, 2, 1, 1, 2, 1, 1, 0, in order.

Thus, E is not total at either X or Y, since there are nodes in both X and Y having incidence degrees that equal 0.

Also, E is not tubular at either X or Y, since there exist nodes in both X and Y having incidence degrees greater than 1.

Clearly, then, E cannot qualify as a pre-function or a function on either of its relational domains.
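For readers who like to check such scans mechanically, here is a Python sketch. The edge set below is the one I read off the bigraph of E above; the computed degree sequences agree with the two scans just given.

```python
# 2-adic relation E ⊆ X × Y, with X = Y = {0, ..., 9},
# transcribed from the bigraph picture of E above.
X = Y = range(10)
E = {(1, 2), (2, 2), (2, 3), (3, 2), (3, 3), (3, 4),
     (4, 5), (5, 6), (6, 6), (7, 7), (7, 8)}

# Y-incidence degree of each X node, and X-incidence degree of each Y node.
deg_at_X = [sum(1 for (x, y) in E if x == i) for i in X]
deg_at_Y = [sum(1 for (x, y) in E if y == j) for j in Y]

assert deg_at_X == [0, 1, 2, 3, 1, 1, 1, 2, 0, 0]
assert deg_at_Y == [0, 0, 3, 2, 1, 1, 2, 1, 1, 0]

# E is neither total nor tubular at either domain.
assert not all(d >= 1 for d in deg_at_X)   # not total at X
assert not all(d <= 1 for d in deg_at_X)   # not tubular at X
assert not all(d >= 1 for d in deg_at_Y)   # not total at Y
assert not all(d <= 1 for d in deg_at_Y)   # not tubular at Y
```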

Commentary Note 11.6

Let's continue to work our way through the rest of the first set of definitions, making up appropriate examples as we go.

Let P ⊆ X × Y be an arbitrary 2-adic relation. The following properties of P can be defined:

P is "total" at X iff P is (≥1)-regular at X.
P is "total" at Y iff P is (≥1)-regular at Y.
P is "tubular" at X iff P is (≤1)-regular at X.
P is "tubular" at Y iff P is (≤1)-regular at Y.

E1 exemplifies the quality of "totality at X".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
 \   \  |\ /|\   \   \  |   |\   \  |
  \   \ | / | \   \   \ |   | \   \ |   E_1
   \   \|/ \|  \   \   \|   |  \   \|
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

E2 exemplifies the quality of "totality at Y".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|\   \  |\ /|\   \   \  |   |\   \
| \   \ | / | \   \   \ |   | \   \     E_2
|  \   \|/ \|  \   \   \|   |  \   \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

E3 exemplifies the quality of "tubularity at X".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |  /     \   \  |   |
      \ | /       \   \ |   |           E_3
       \|/         \   \|   |
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

E4 exemplifies the quality of "tubularity at Y".

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
           /|\   \   \      |\
          / | \   \   \     | \         E_4
         /  |  \   \   \    |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

If P ⊆ X × Y is tubular at X, then P is known as a "partial function" or a "pre-function" from X to Y, frequently signalized by renaming P with an alternative lower case name, say "p", and writing p : X ~> Y.

Just by way of formalizing the definition:

P is a "pre-function" P : X ~> Y iff P is tubular at X.

P is a "pre-function" P : X <~ Y iff P is tubular at Y.

So, E3 is a pre-function e3 : X ~> Y, and E4 is a pre-function e4 : X <~ Y.
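A Python sketch of the tubularity tests, applied to the edge sets I read off the bigraphs of E3 and E4, confirms the classification just stated.

```python
def tubular_at_X(P):
    """(≤1)-regular at X: no X node is incident with more than one pair."""
    xs = [x for (x, y) in P]
    return all(xs.count(x) <= 1 for x in xs)

def tubular_at_Y(P):
    """(≤1)-regular at Y: no Y node is incident with more than one pair."""
    ys = [y for (x, y) in P]
    return all(ys.count(y) <= 1 for y in ys)

# Edge sets as read off the bigraphs of E_3 and E_4 above.
E3 = {(1, 2), (2, 2), (3, 2), (4, 5), (5, 6), (6, 6), (7, 7)}
E4 = {(3, 2), (3, 3), (3, 4), (4, 5), (5, 6), (7, 7), (7, 8)}

assert tubular_at_X(E3) and not tubular_at_Y(E3)   # e3 : X ~> Y
assert tubular_at_Y(E4) and not tubular_at_X(E4)   # e4 : X <~ Y
```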

Commentary Note 11.7

We come now to the very special cases of 2-adic relations that are known as functions. It will serve a dual purpose on behalf of the exposition if we take the class of functions as a source of object examples to clarify the more abstruse concepts in the RIG material.

To begin, let's recall the definition of a local flag:

Lx.j = { (x1, …, xj, …, xk) ∈ L : xj = x }.

In the case of a 2-adic relation L ⊆ X1 × X2 = X × Y, we can reap the benefits of a radical simplification in the definitions of the local flags. Also in this case, we tend to denote Lu.1 by "Lu.X" and Lv.2 by "Lv.Y".

In the light of these considerations, the local flags of a 2-adic relation L ⊆ X × Y may be formulated as follows:

Lu.X = {(x, y) ∈ L : x = u}
  = the set of all ordered pairs in L incident with u in X.
Lv.Y = {(x, y) ∈ L : y = v}
  = the set of all ordered pairs in L incident with v in Y.

A sufficient illustration is supplied by the earlier example E.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |\ /|\   \   \  |   |\
      \ | / | \   \   \ |   | \         E
       \|/ \|  \   \   \|   |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

The local flag E3.X is displayed here:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
           /|\
          / | \                         E_3.X
         /  |  \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

The local flag E2.Y is displayed here:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
     \  |  /
      \ | /                             E_2.Y
       \|/
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9
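The two local flags just pictured can be computed directly from the edge set of E, as in this Python sketch.

```python
# Edge set of E, transcribed from the bigraph of E above.
E = {(1, 2), (2, 2), (2, 3), (3, 2), (3, 3), (3, 4),
     (4, 5), (5, 6), (6, 6), (7, 7), (7, 8)}

def flag_X(L, u):
    """L_{u.X}: the set of ordered pairs in L incident with u in X."""
    return {(x, y) for (x, y) in L if x == u}

def flag_Y(L, v):
    """L_{v.Y}: the set of ordered pairs in L incident with v in Y."""
    return {(x, y) for (x, y) in L if y == v}

assert flag_X(E, 3) == {(3, 2), (3, 3), (3, 4)}   # E_3.X, three edges at X node 3
assert flag_Y(E, 2) == {(1, 2), (2, 2), (3, 2)}   # E_2.Y, three edges at Y node 2
```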

Commentary Note 11.8

Now let's re-examine the numerical incidence properties of relations, concentrating on the definitions of the assorted regularity conditions.

For instance, L is said to be "c-regular at j" if and only if the cardinality of the local flag Lx.j is c for all x in Xj, or, coded in symbols, if and only if |Lx.j| = c for all x in Xj.

In a similar fashion, one can define the NIPs "(<c)-regular at j", "(>c)-regular at j", and so on. For ease of reference, I record a few of these definitions here:

L is c-regular at j iff |Lx.j| = c for all x in Xj.
L is (<c)-regular at j iff |Lx.j| < c for all x in Xj.
L is (>c)-regular at j iff |Lx.j| > c for all x in Xj.
L is (≤c)-regular at j iff |Lx.j| ≤ c for all x in Xj.
L is (≥c)-regular at j iff |Lx.j| ≥ c for all x in Xj.

Clearly, if any relation is (≤c)-regular on one of its domains Xj and also (≥c)-regular on the same domain, then it must be (=c)-regular on the affected domain Xj, in effect, c-regular at j.

For example, let G = {r, s, t} and H = {1, …, 9}, and consider the 2-adic relation F ⊆ G × H that is bigraphed here:

    r           s           t
    o           o           o       G
   /|\         /|\         /|\
  / | \       / | \       / | \     F
 /  |  \     /  |  \     /  |  \
o   o   o   o   o   o   o   o   o   H
1   2   3   4   5   6   7   8   9

We observe that F is 3-regular at G and 1-regular at H.
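A quick mechanical check of these two regularities, in Python:

```python
# F ⊆ G × H, transcribed from the bigraph above.
F = {("r", 1), ("r", 2), ("r", 3),
     ("s", 4), ("s", 5), ("s", 6),
     ("t", 7), ("t", 8), ("t", 9)}

def regular_at_G(L, G, c):
    """c-regular at the first domain: every x in G lies on exactly c pairs."""
    return all(sum(1 for (g, h) in L if g == x) == c for x in G)

def regular_at_H(L, H, c):
    """c-regular at the second domain: every y in H lies on exactly c pairs."""
    return all(sum(1 for (g, h) in L if h == y) == c for y in H)

assert regular_at_G(F, {"r", "s", "t"}, 3)    # F is 3-regular at G
assert regular_at_H(F, set(range(1, 10)), 1)  # F is 1-regular at H
```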

Commentary Note 11.9

Among the vast variety of conceivable regularities affecting 2-adic relations, we pay special attention to the c-regularity conditions where c is equal to 1.

Let P ⊆ X × Y be an arbitrary 2-adic relation. The following properties of P can be defined:

P is "total" at X iff P is (≥1)-regular at X.
P is "total" at Y iff P is (≥1)-regular at Y.
P is "tubular" at X iff P is (≤1)-regular at X.
P is "tubular" at Y iff P is (≤1)-regular at Y.

We have already looked at 2-adic relations that separately exemplify each of these regularities.

Also, we introduced a few bits of additional terminology and special-purpose notations for working with tubular relations:

P is a "pre-function" P : X ~> Y iff P is tubular at X.
P is a "pre-function" P : X <~ Y iff P is tubular at Y.

Thus, we arrive by way of this winding stair at the very special stamps of 2-adic relations P ⊆ X × Y that are "total prefunctions" at X (or Y), "total and tubular" at X (or Y), or "1-regular" at X (or Y), more often celebrated as "functions" at X (or Y).

If P is a pre-function P : X ~> Y that happens to be total at X, then P is known as a "function" from X to Y, typically indicated as P : X → Y.

To say that a relation P ⊆ X × Y is totally tubular at X is to say that it is 1-regular at X. Thus, we may formalize the following definitions:

P is a "function" p : X → Y iff P is 1-regular at X.
P is a "function" p : X ← Y iff P is 1-regular at Y.

For example, let X = Y = {0, …, 9} and let F ⊆ X × Y be the 2-adic relation that is depicted in the bigraph below:

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
 \ /       /|\   \      |   |\   \
  \       / | \   \     |   | \   \     F
 / \     /  |  \   \    |   |  \   \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

We observe that F is a function at Y, and we record this fact in either of the manners F : X ← Y or F : Y → X.
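Using the edge set I read off the bigraph of F, a Python sketch verifies that F is 1-regular at Y, hence a function F : X ← Y, while failing to be a function at X.

```python
# F ⊆ X × Y, transcribed from the bigraph of F above.
F = {(0, 1), (1, 0), (3, 2), (3, 3), (3, 4),
     (4, 5), (6, 6), (7, 7), (7, 8), (8, 9)}

# F : X <- Y iff every Y node is incident with exactly one pair of F.
deg_at_Y = [sum(1 for (x, y) in F if y == j) for j in range(10)]
assert all(d == 1 for d in deg_at_Y)       # 1-regular at Y

# F is not a function at X: node 3 has three images, nodes 2, 5, 9 have none.
deg_at_X = [sum(1 for (x, y) in F if x == i) for i in range(10)]
assert not all(d == 1 for d in deg_at_X)
```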

Commentary Note 11.10

In the case of a 2-adic relation F ⊆ X × Y that has the qualifications of a function f : X → Y, there are a number of further differentia that arise:

f is "surjective" iff f is total at Y.
f is "injective" iff f is tubular at Y.
f is "bijective" iff f is 1-regular at Y.

For example, or more precisely, by way of a contra-example, the function f : X → Y that is depicted below is neither total at Y nor tubular at Y, and so it cannot enjoy any of the properties of being sur-, or in-, or bi-jective.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|    \  |  /     \   \  |   |    \ /
|     \ | /       \   \ |   |     \     f
|      \|/         \   \|   |    / \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

A cheap way of getting a surjective function out of any function is to reset its codomain to its range. For example, the range of the function f above is Y′ = {0, 2, 5, 6, 7, 8, 9}. Thus, if we form a new function g : X → Y′ that looks just like f on the domain X but is assigned the codomain Y′, then g is surjective, and is described as mapping "onto" Y′.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|    \  |  /     \   \  |   |    \ /
|     \ | /       \   \ |   |     \     g
|      \|/         \   \|   |    / \
o       o           o   o   o   o   o   Y'
0       2           5   6   7   8   9

The function h : Y′ → Y is injective.

0       2           5   6   7   8   9
o       o           o   o   o   o   o   Y'
|       |            \ /    |    \ /
|       |             \     |     \     h
|       |            / \    |    / \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9

The function m : X → Y is bijective.

0   1   2   3   4   5   6   7   8   9
o   o   o   o   o   o   o   o   o   o   X
|   |   |    \ /     \ /    |    \ /
|   |   |     \       \     |     \     m
|   |   |    / \     / \    |    / \
o   o   o   o   o   o   o   o   o   o   Y
0   1   2   3   4   5   6   7   8   9
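The function f above, transcribed as a Python dict from my reading of its bigraph, fails both the totality and tubularity tests at Y, while resetting its codomain to its range yields the surjection g, just as described.

```python
# f : X -> Y, read off the bigraph of f above.
f = {0: 0, 1: 2, 2: 2, 3: 2, 4: 5, 5: 6, 6: 6, 7: 7, 8: 9, 9: 8}
Y = set(range(10))

values = list(f.values())
surjective = set(values) == Y                  # total at Y?
injective = len(set(values)) == len(values)    # tubular at Y?
assert not surjective and not injective        # f is neither

# Resetting the codomain to the range: g : X -> Y' maps onto Y'.
Y_prime = set(f.values())
assert Y_prime == {0, 2, 5, 6, 7, 8, 9}
```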

Commentary Note 11.11

The preceding exercises were intended to beef up our "functional" literacy skills to the point where we can read our functional alphabets backwards and forwards and to ferret out the local functionalities that may be immanent in relative terms no matter where they locate themselves within the domains of relations. I am hopeful that these skills will serve us in good stead as we work to build a catwalk from Peirce's platform to contemporary scenes on the logic of relatives, and back again.

By way of extending a few very tentative planks, let us experiment with the following definitions:

  1. A relative term "p" and the corresponding relation P ⊆ X × Y are both called "functional on relates" if and only if P is a function at X, in symbols, P : X → Y.
  2. A relative term "p" and the corresponding relation P ⊆ X × Y are both called "functional on correlates" if and only if P is a function at Y, in symbols, P : X ← Y.

When a relation happens to be a function, it may be excusable to use the same name for it in both applications, writing out explicit type markers like P ⊆ X × Y, P : X → Y, P : X ← Y, as the case may be, when and if it serves to clarify matters.

From this current, perhaps transient, perspective, it appears that our next task is to examine how the known properties of relations are modified when an aspect of functionality is spied in the mix.

Let us then return to our various ways of looking at relational composition, and see what changes and what stays the same when the relations in question happen to be functions of various different kinds at some of their domains.

Here is one generic picture of relational composition, cast in a style that hews pretty close to the line of potentials inherent in Peirce's syntax of this period.

o-----------------------------------------------------------o
|                                                           |
|                           P o Q                           |
|                 ____________^____________                 |
|                /                         \                |
|               /      P             Q      \               |
|              /       @             @       \              |
|             /       / \           / \       \             |
|            /       /   \         /   \       \            |
|           o       o     o       o     o       o           |
|           X       X     Y       Y     Z       Z           |
|       1,__#       #'p'__$       $'q'__%       %1          |
|           o       o     o       o     o       o           |
|            \     /       \     /       \     /            |
|             \   /         \   /         \   /             |
|              \ /           \ /           \ /              |
|               @             @             @               |
|              !1!           !1!           !1!              |
|                                                           |
o-----------------------------------------------------------o
Figure 16.  Anything that is a 'p' of a 'q' of Anything

From this we extract the "hypergraph picture" of relational composition:

o-----------------------------------------------------------o
|                                                           |
|                 P         P o Q         Q                 |
|                 @           @           @                 |
|                / \         / \         / \                |
|               /   \       /   \       /   \               |
|              o     o     o     o     o     o              |
|              X     Y     X     Z     Y     Z              |
|              o     o     o     o     o     o              |
|               \     \   /       \   /     /               |
|                \     \ /         \ /     /                |
|                 \     /           \     /                 |
|                  \   / \         / \   /                  |
|                   \ /   \___ ___/   \ /                   |
|                    @        @        @                    |
|                   !1!      !1!      !1!                   |
|                                                           |
o-----------------------------------------------------------o
Figure 17.  Relational Composition P o Q

All of the relevant information of these Figures can be compressed into the form of a "spreadsheet", or constraint satisfaction table:

Table 18.  Relational Composition P o Q
o---------o---------o---------o---------o
|         #   !1!   |   !1!   |   !1!   |
o=========o=========o=========o=========o
|    P    #    X    |    Y    |         |
o---------o---------o---------o---------o
|    Q    #         |    Y    |    Z    |
o---------o---------o---------o---------o
|  P o Q  #    X    |         |    Z    |
o---------o---------o---------o---------o

So the following presents itself as a reasonable plan of study: Let's see how much easy mileage we can get in our exploration of functions by adopting the above templates as a paradigm.

Commentary Note 11.12

Since functions are special cases of 2-adic relations, and since the space of 2-adic relations is closed under relational composition, in other words, the composition of a couple of 2-adic relations is again a 2-adic relation, we know that the relational composition of a couple of functions has to be a 2-adic relation. If it is also necessarily a function, then we would be justified in speaking of "functional composition", and also of saying that the space of functions is closed under this functional form of composition.

Just for novelty's sake, let's try to prove this for relations that are functional on correlates.

So our task is this: Given a couple of 2-adic relations, P ⊆ X × Y and Q ⊆ Y × Z, that are functional on correlates, P : X ← Y and Q : Y ← Z, we need to determine whether the relational composition P o Q ⊆ X × Z is also P o Q : X ← Z, or not.

It always helps to begin by recalling the pertinent definitions.

For a 2-adic relation L ⊆ X × Y, we have:

L is a "function" L : X ← Y if and only if L is 1-regular at Y.

As for the definition of relational composition, it is enough to consider the coefficient of the composite on an arbitrary ordered pair like i:j.

(P o Q)ij = ∑k (Pik Qkj).

So let us begin.

P : X ← Y, or P being 1-regular at Y, means that there is exactly one ordered pair i:k in P for each k in Y.
Q : Y ← Z, or Q being 1-regular at Z, means that there is exactly one ordered pair k:j in Q for each j in Z.

Thus, there is exactly one ordered pair i:j in P o Q for each j in Z, which means that P o Q is 1-regular at Z, and so we have the function P o Q : X ← Z.

And we are done.

But proofs after midnight must be checked the next day.
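And here, in Python, is a next-day check: a pair of relations functional on correlates whose composite is again functional on correlates. The particular edge sets are arbitrary choices of mine that satisfy the hypotheses.

```python
def compose(P, Q):
    """Relational composition: (i, j) in P o Q iff (i, k) in P and (k, j) in Q."""
    return {(i, j) for (i, k1) in P for (k2, j) in Q if k1 == k2}

def functional_on_correlates(L, codomain):
    """L : X <- Y, i.e. L is 1-regular at its second domain."""
    return all(sum(1 for (i, j) in L if j == y) == 1 for y in codomain)

X = Y = Z = range(5)
P = {(0, 0), (1, 1), (0, 2), (2, 3), (1, 4)}   # P : X <- Y
Q = {(3, 0), (0, 1), (0, 2), (4, 3), (2, 4)}   # Q : Y <- Z

assert functional_on_correlates(P, Y)
assert functional_on_correlates(Q, Z)
assert functional_on_correlates(compose(P, Q), Z)   # P o Q : X <- Z
```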

Commentary Note 11.13

As we make our way toward the foothills of Peirce's 1870 LOR, there is one piece of equipment that we dare not leave the plains without — for there is little hope that "l'or dans les montagnes là" will lie among our prospects without the ready use of its leverage and lifts — and that is a facility with the utilities that are variously called "arrows", "morphisms", "homomorphisms", "structure-preserving maps", and several other names, in accord with the altitude of abstraction at which one happens to be working, at the given moment in question.

As a middle but not too beaten track, I will lay out the definition of a morphism in the forms that we will need right off, in a slight excess of formality at first, but quickly bringing the bird home to roost on more familiar perches.

Let's say that we have three functions J, K, L that have the following types and that satisfy the equation that follows:

J : X ← Y
K : X ← X × X
L : Y ← Y × Y
J(L(u, v)) = K(Ju, Jv)

Our sagittarian leitmotif can be rubricized in the following slogan:

The image of the ligature is the compound of the images.

Where J is the "image", K is the "compound", and L is the "ligature".

Figure 19 presents us with a picture of the situation in question.

o-----------------------------------------------------------o
|                                                           |
|                       K           L                       |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 19.  Structure Preserving Transformation J : K <- L

Here, I have used arrowheads to indicate the relational domains at which each of the relations J, K, L happens to be functional.

Table 20 gives the constraint matrix version of the same thing.

Table 20.  Arrow:  J(L(u, v)) = K(Ju, Jv)
o---------o---------o---------o---------o
|         #    J    |    J    |    J    |
o=========o=========o=========o=========o
|    K    #    X    |    X    |    X    |
o---------o---------o---------o---------o
|    L    #    Y    |    Y    |    Y    |
o---------o---------o---------o---------o

One way to read this Table is in terms of the informational redundancies that it schematizes. In particular, it can be read to say that when one satisfies the constraint in the L row, along with all of the constraints in the J columns, then the constraint in the K row is automatically true. That is one way of understanding the equation: J(L(u, v)) = K(Ju, Jv).

Commentary Note 11.14

First, a correction. Ignore for now the gloss that I gave in regard to Figure 19:

Here, I have used arrowheads to indicate the relational domains at which each of the relations J, K, L happens to be functional.

It is more like the feathers of the arrows that serve to mark the relational domains at which the relations J, K, L are functional, but it would take yet another construction to make this precise, as the feathers are not uniquely appointed but many splintered.

Now, as promised, let's look at a more homely example of a morphism, say, any one of the mappings J : R → R (roughly speaking) that are commonly known as logarithm functions, where you get to pick your favorite base. In this case, K(r, s) = r + s and L(u, v) = u \(\cdot\) v, and the defining formula J(L(u, v)) = K(Ju, Jv) comes out looking like J(u \(\cdot\) v) = J(u) + J(v), writing a dot (\(\cdot\)) and a plus sign (+) for the ordinary 2-ary operations of arithmetical multiplication and arithmetical summation, respectively.

o-----------------------------------------------------------o
|                                                           |
|                      {+}         {.}                      |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 21.  Logarithm Arrow J : {+} <- {.}

Thus, where the "image" J is the logarithm map, the "compound" K is the numerical sum, and the "ligature" L is the numerical product, one obtains the immemorial mnemonic motto:

The image of the product is the sum of the images.
J(u \(\cdot\) v) = J(u) + J(v)
J(L(u, v)) = K(Ju, Jv)
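A quick numerical check of the motto, in Python, taking the natural logarithm as the image J:

```python
import math

J = math.log                  # the image: logarithm (base e, say)
K = lambda r, s: r + s        # the compound: numerical sum
L = lambda u, v: u * v        # the ligature: numerical product

u, v = 2.0, 5.0
assert math.isclose(J(L(u, v)), K(J(u), J(v)))   # log(u*v) = log(u) + log(v)
```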

Commentary Note 11.15

I'm going to elaborate a little further on the subject of arrows, morphisms, or structure-preserving maps, as a modest amount of extra work at this point will repay ample dividends when it comes time to revisit Peirce's "number of" function on logical terms.

The "structure" that is being preserved by a structure-preserving map is just the structure that we all know and love as a 3-adic relation. Very typically, it will be the type of 3-adic relation that defines the type of 2-ary operation that obeys the rules of a mathematical structure that is known as a "group", that is, a structure that satisfies the axioms for closure, associativity, identities, and inverses.

For example, in the previous case of the logarithm map J, we have the data:

J : RR (properly restricted)
K : RR × R, where K(r, s) = r + s
L : RR × R, where L(u, v) = u \(\cdot\) v

Real number addition and real number multiplication (suitably restricted) are examples of group operations. If we write the sign of each operation in braces as a name for the 3-adic relation that constitutes or defines the corresponding group, then we have the following set-up:

J : { + } ← { \(\cdot\) }
{ + } ⊆ R × R × R
{ \(\cdot\) } ⊆ R × R × R

In many cases, one finds that both groups are written with the same sign of operation, typically "\(\cdot\)", "+", "*", or simple concatenation, but they remain in general distinct whether considered as operations or as relations, no matter what signs of operation are used. In such a setting, our chiasmatic theme may run a bit like these two variants:

The image of the sum is the sum of the images.
The image of the product is the product of the images.

Figure 22 presents a generic picture for groups G and H.

o-----------------------------------------------------------o
|                                                           |
|                       G           H                       |
|                       @           @                       |
|                      /|\         /|\                      |
|                     / | \       / | \                     |
|                    v  |  \     v  |  \                    |
|                   o   o   o   o   o   o                   |
|                   X   X   X   Y   Y   Y                   |
|                   o   o   o   o   o   o                   |
|                    ^   ^   ^ /   /   /                    |
|                     \   \   \   /   /                     |
|                      \   \ / \ /   /                      |
|                       \   \   \   /                       |
|                        \ / \ / \ /                        |
|                         @   @   @                         |
|                         J   J   J                         |
|                                                           |
o-----------------------------------------------------------o
Figure 22.  Group Homomorphism J : G <- H

In a setting where both groups are written with a plus sign, perhaps even constituting the very same group, the defining formula of a morphism, J(L(u, v)) = K(Ju, Jv), takes on the shape J(u + v) = Ju + Jv, which looks very analogous to the distributive multiplication of a sum (u + v) by a factor J. Hence another popular name for a morphism: a "linear" map.

Commentary Note 11.16

I think that we have enough material on morphisms now to go back and cast a more studied eye on what Peirce is doing with that "number of" function, the one that we apply to a logical term t, absolute or relative of any number of correlates, by writing it in square brackets, as [t]. It is frequently convenient to have a prefix notation for this function, and since Peirce reserves n to signify not, I will try to use v, personally thinking of it as a Greek ν, which stands for frequency in physics, and which makes a kind of sense if we think of frequency in its statistical sense. End of mnemonics.

My plan will be nothing less plodding than to work through all of the principal statements that Peirce has made about the "number of" function up to our present stopping place in the paper, namely, those that I collected once before and placed at this location:

I propose to assign to all logical terms, numbers; to an absolute term, the number of individuals it denotes; to a relative term, the average number of things so related to one individual.

Thus in a universe of perfect men (men), the number of "tooth of" would be 32.

The number of a relative with two correlates would be the average number of things so related to a pair of individuals; and so on for relatives of higher numbers of correlates.

I propose to denote the number of a logical term by enclosing the term in square brackets, thus [t]. (Peirce, CP 3.65).

We may formalize the role of the "number of" function by assigning it a local habitation and a name v : S → R, where S is a suitable set of signs, called the syntactic domain, that is ample enough to hold all of the terms that we might wish to number in a given discussion, and where R is the real number domain.

Transcribing Peirce's example, we may let m = "man" and t = "tooth of ---". Then v(t) = [t] = [tm]÷[m], that is to say, in a universe of perfect human dentition, the number of the relative term "tooth of ---" is equal to the number of teeth of humans divided by the number of humans, that is, 32.

The 2-adic relative term t determines a 2-adic relation T ⊆ U × V, where U and V are two universes of discourse, possibly the same one, that hold among other things all of the teeth and all of the people that happen to be under discussion, respectively.

A rough indication of the bigraph for T might be drawn as follows, where I have tried to sketch in just the toothy part of U and the peoply part of V.

t_1     t_32  t_33    t_64  t_65    t_96  ...     ...
 o  ...  o     o  ...  o     o  ...  o     o  ...  o     U
  \  |  /       \  |  /       \  |  /       \  |  /
   \ | /         \ | /         \ | /         \ | /       T
    \|/           \|/           \|/           \|/
     o             o             o             o         V
    m_1           m_2           m_3           ...

Notice that the "number of" function v : S → R needs the data that is represented by this entire bigraph for T in order to compute the value [t].

Finally, one observes that this component of T is a function in the direction T : U → V, since we are counting only those teeth that ideally occupy one and only one mouth of a creature.
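The computation v(t) = [tm]/[m] can be sketched directly from the bigraph data. The model below represents T as a set of (tooth, person) pairs; the identifiers are illustrative, and the four-person, 32-teeth-apiece population is the idealized data of Peirce's example.

```python
# The 2-adic relation T ⊆ U × V as a set of (tooth, person) pairs,
# with four people in V and 32 teeth per person in U.
people = [f"m_{j}" for j in range(1, 5)]                       # the universe V
T = {(f"t_{i}_{p}", p) for p in people for i in range(1, 33)}  # 32 teeth each

# [t] = the average number of things so related to one individual,
# i.e. the number of teeth divided by the number of people.
number_of_t = len(T) / len(people)
assert number_of_t == 32.0
```

Note that, just as the text says, the whole bigraph for T is needed as input: the count of pairs and the count of individuals both enter the ratio.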

Commentary Note 11.17

I think that the reader is beginning to get an inkling of the crucial importance of the "number of" map in Peirce's way of looking at logic, for it's one of the planks in the bridge from logic to the theories of probability, statistics, and information, in which logic forms but a limiting case at one scenic turnout on the expanding vista. It is, as a matter of necessity and a matter of fact, practically speaking, at any rate, one way that Peirce forges a link between the "eternal", logical, or rational realm and the "secular", empirical, or real domain.

With that little bit of encouragement and exhortation, let us return to the nitty gritty details of the text.

But not only do the significations of "=" and "<" here adopted fulfill all absolute requirements, but they have the supererogatory virtue of being very nearly the same as the common significations. Equality is, in fact, nothing but the identity of two numbers; numbers that are equal are those which are predicable of the same collections, just as terms that are identical are those which are predicable of the same classes. So, to write 5 < 7 is to say that 5 is part of 7, just as to write f < m is to say that Frenchmen are part of men. Indeed, if f < m, then the number of Frenchmen is less than the number of men, and if v = p, then the number of Vice-Presidents is equal to the number of Presidents of the Senate; so that the numbers may always be substituted for the terms themselves, in case no signs of operation occur in the equations or inequalities. (Peirce, CP 3.66).

Peirce is here remarking on the principle that the measure v on terms "preserves" or "respects" the prevailing implication, inclusion, or subsumption relations that impose an ordering on those terms.

In these initiatory passages of the text, Peirce is using a single symbol "<" to denote the usual linear ordering on numbers, but also what amounts to the implication ordering on logical terms and the inclusion ordering on classes. Later, of course, he will introduce distinctive symbols for logical orders.

Now, the links among terms, sets, and numbers can be pursued in all directions, and Peirce has already indicated in an earlier paper how he would "construct" the integers from sets, that is, from the aggregate denotations of terms.

We will get back to that at another time.

In the immediate example, we have this sort of statement:

"if f < m, then the number of Frenchmen is less than the number of men"

In symbolic form, this would be written:

f < m ⇒ [f] < [m]

Here, the "<" on the left is a logical ordering on syntactic terms while the "<" on the right is an arithmetic ordering on real numbers.

The type of principle that comes up here is usually discussed under the question of whether a map between two ordered sets is "order-preserving" or not. The general type of question may be formalized in the following way.

Let X1 be a set with an ordering denoted by "<1".
Let X2 be a set with an ordering denoted by "<2".

What makes an ordering what it is will commonly be a set of axioms that defines the properties of the order relation in question. Since one frequently has occasion to view the same set in the light of several different order relations, one will often resort to explicit forms like (X, <1), (X, <2), and so on, to invoke a set with a given ordering.

A map F : (X1, <1) → (X2, <2) is order-preserving if and only if a statement of a particular form holds for all x and y in (X1, <1), specifically, this:

x <1 y ⇒ Fx <2 Fy

The action of the "number of" map v : (S, <1) → (R, <2) has just this character, as exemplified by its application to the case where x = f = "frenchman" and y = m = "man", like so:

f < m ⇒ [f] < [m]
f < m ⇒ vf < vm

Here, to be more exacting, we may interpret the "<" on the left as "proper subsumption", that is, excluding the equality case, while we read the "<" on the right as the usual "less than".
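The order-preserving property can be modeled in a few lines, representing terms by their extensions (finite sets), proper subsumption by proper inclusion, and the measure v by cardinality. The particular extensions below are invented for illustration.

```python
# Terms as extensions; "<" on the left is proper inclusion,
# "<" on the right is the usual order on numbers.
f = {"Pierre", "Marie"}                 # extension of "frenchman"
m = f | {"Othello", "Hamlet"}           # extension of "man", properly larger

def v(term):
    """The 'number of' measure on an absolute term: count its extension."""
    return len(term)

# x <1 y  =>  vx <2 vy
assert f < m and v(f) < v(m)
```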

Commentary Note 11.18

There is a comment that I ought to make on the concept of a structure preserving map, including as a special case the idea of an order-preserving map. It seems to be a peculiarity of mathematical usage in general — at least, I don't think it's just me — that "preserving structure" always means "preserving some, not of necessity all of the structure in question". People sometimes express this by speaking of structure preservation in measure, the implication being that any property that is amenable to being qualified in manner is potentially amenable to being quantified in degree, perhaps in such a way as to answer questions like "How structure-preserving is it?".

Let's see how this remark applies to the order-preserving property of the "number of" mapping v : S → R. For any pair of absolute terms x and y in the syntactic domain S, we have the following implications, where "–<" denotes the logical subsumption relation on terms and "=<" is the "less than or equal to" relation on the real number domain R.

x –< y ⇒ vx =< vy

Equivalently:

x –< y ⇒ [x] =< [y]

It is easy to see that nowhere near all of the distinctions that make up the structure of the ordering on the left hand side will be preserved as one passes to the right hand side of these implication statements, but that is not required in order to call the map v "order-preserving", or what is also known as an "order morphism".

Commentary Note 11.19

Up to this point in the LOR of 1870, Peirce has introduced the "number of" measure on logical terms and discussed the extent to which this measure, v : S → R such that v : s ~> [s], exhibits a couple of important measure-theoretic principles:

  1. The "number of" map exhibits a certain type of "uniformity property", whereby the value of the measure on a uniformly qualified population is in fact actualized by each member of the population.
  2. The "number of" map satisfies an "order morphism principle", whereby the illative partial ordering of logical terms is reflected up to a partial extent by the arithmetical linear ordering of their measures.

Peirce next takes up the action of the "number of" map on the two types of, loosely speaking, "additive" operations that we normally consider in logic.

It is plain that both the regular non-invertible addition and the invertible addition satisfy the absolute conditions. (CP 3.67).

The "regular non-invertible addition" is signified by "+,", corresponding to what we'd call the inclusive disjunction of logical terms or the union of their extensions as sets.

The "invertible addition" is signified in algebra by "+", corresponding to what we'd call the exclusive disjunction of logical terms or the symmetric difference of their sets, ignoring many details and nuances that are often important, of course.

But the notation has other recommendations. The conception of taking together involved in these processes is strongly analogous to that of summation, the sum of 2 and 5, for example, being the number of a collection which consists of a collection of two and a collection of five. (CP 3.67).

A full interpretation of this remark will require us to pick up the precise technical sense in which Peirce is using the word "collection", and that will take us back to his logical reconstruction of certain aspects of number theory, all of which I am putting off to another time, but it is still possible to get a rough sense of what he's saying relative to the present frame of discussion.

The "number of" map v : S → R evidently induces some sort of morphism with respect to logical sums. If this were straightforwardly true, we could write:

(?) v(x +, y) = vx + vy

Equivalently:

(?) [x +, y] = [x] + [y]

Of course, things are just not that simple in the case of inclusive disjunction and set-theoretic unions, so we'd "probably" invent a word like "sub-additive" to describe the principle that does hold here, namely:

v(x +, y) =< vx + vy

Equivalently:

[x +, y] =< [x] + [y]
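Modeled with set unions, the sub-additive principle is immediate, and the equality case is exactly Peirce's proviso that the terms summed be mutually exclusive. The sets here are illustrative.

```python
# Sub-additivity of the "number of" map on logical sums:
# [x +, y] =< [x] + [y], with equality exactly when x, y are disjoint.
x = {"B", "C", "D"}
y = {"C", "E"}          # overlaps x in "C"
z = {"F", "G"}          # disjoint from x

assert len(x | y) <= len(x) + len(y)   # sub-additive in general (4 <= 5)
assert len(x | z) == len(x) + len(z)   # additive when mutually exclusive
```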

This is why Peirce trims his discussion of this point with the following hedge:

Any logical equation or inequality in which no operation but addition is involved may be converted into a numerical equation or inequality by substituting the numbers of the several terms for the terms themselves — provided all the terms summed are mutually exclusive. (CP 3.67).

Finally, a morphism with respect to addition, even a contingently qualified one, must do the right stuff on behalf of the additive identity:

Addition being taken in this sense, nothing is to be denoted by zero, for then:

x +, 0 = x

whatever is denoted by x; and this is the definition of zero. This interpretation is given by Boole, and is very neat, on account of the resemblance between the ordinary conception of zero and that of nothing, and because we shall thus have

[0] = 0.

(Peirce, CP 3.67).

With respect to the nullity 0 in S and the number 0 in R, we have:

v0 = [0] = 0.

In sum, therefore, it also serves that only preserves a due respect for the function of a vacuum in nature.

Commentary Note 11.20

We arrive at the last, for the time being, of Peirce's statements about the "number of" map.

The conception of multiplication we have adopted is that of the application of one relation to another. …

Even ordinary numerical multiplication involves the same idea, for 2 × 3 is a pair of triplets, and 3 × 2 is a triplet of pairs, where "triplet of" and "pair of" are evidently relatives.

If we have an equation of the form:

xy = z

and there are just as many x's per y as there are per things, things of the universe, then we have also the arithmetical equation:

[x][y] = [z].

(Peirce, CP 3.76).

Peirce is here observing what we might dub a "contingent morphism" or a "skeptraphotic arrow", if you will. Provided that a certain condition, to be named and, what is more hopeful, to be clarified in short order, happens to be satisfied, we would find it holding that the "number of" map v : S → R such that vs = [s] serves to preserve the multiplication of relative terms, that is as much to say, the composition of relations, in the form: [xy] = [x][y].

So let us try to uncross Peirce's manifestly chiasmatic encryption of the condition that is called on in support of this preservation.

Proviso for [xy] = [x][y] —

there are just as many x’s per y as there are per things[,] things of the universe …

I have placed square brackets around a comma that CP shows but CE omits, not that it helps much either way. So let us resort to the example:

For instance, if our universe is perfect men, and there are as many teeth to a Frenchman (perfect understood) as there are to any one of the universe, then:

[t][f] = [tf]

holds arithmetically. (CP 3.76).

Now that is something that we can sink our teeth into, and trace the bigraph representation of the situation. In order to do this, it will help to recall our first examination of the "tooth of" relation, and to adjust the picture that we sketched of it on that occasion.

Transcribing Peirce's example, we may let m = "man" and t = "tooth of ---". Then v(t) = [t] = [tm]/[m], that is to say, in a universe of perfect human dentition, the number of the relative term "tooth of ---" is equal to the number of teeth of humans divided by the number of humans, that is, 32.

The 2-adic relative term t determines a 2-adic relation T ⊆ U × V, where U and V are two universes of discourse, possibly the same one, that hold among other things all of the teeth and all of the people that happen to be under discussion, respectively. To make the case as simple as we can and still cover the point, let's say that there are just four people in our initial universe of discourse, and that just two of them are French. The bigraphic composition below shows all of the pertinent facts of the case.

T_1     T_32  T_33    T_64  T_65    T_96  T_97    T_128
 o  ...  o     o  ...  o     o  ...  o     o  ...  o      U
  \  |  /       \  |  /       \  |  /       \  |  /
   \ | /         \ | /         \ | /         \ | /       't'
    \|/           \|/           \|/           \|/
     o             o             o             o          V = m = 1
                   |                           |
                   |                           |         'f'
                   |                           |
     o             o             o             o          V = m = 1
     J             K             L             M

Here, the order of relational composition flows up the page. For convenience, the absolute term f = "frenchman" has been converted by using the comma functor to give the idempotent representation ‘f’ = f, = "frenchman that is ---", and thus it can be taken as a selective from the universe of mankind.

By way of a legend for the figure, we have the following data:

m = J +, K +, L +, M = 1
f = K +, M    
‘f’ = K:K +, M:M    
t = (T001 +, … +, T032):J   +,
    (T033 +, … +, T064):K   +,
    (T065 +, … +, T096):L   +,
    (T097 +, … +, T128):M    

Now let's see if we can use this picture to make sense of the following statement:

For instance, if our universe is perfect men, and there are as many teeth to a Frenchman (perfect understood) as there are to any one of the universe, then:

[t][f] = [tf]

holds arithmetically. (CP 3.76).

In the lingua franca of statistics, Peirce is saying this: That if the population of Frenchmen is a "fair sample" of the general population with regard to dentition, then the morphic equation [tf] = [t][f], whose transpose gives [t] = [tf]/[f], is every bite as true as the defining equation in this circumstance, namely, [t] = [tm]/[m].
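The "fair sample" equation [t][f] = [tf] can be checked directly against the four-man universe of the figure. In the sketch below, men J, K, L, M each have 32 teeth and K, M are the Frenchmen; the identifiers are illustrative.

```python
# Checking [t][f] = [tf] in the four-man universe with uniform dentition.
men = ["J", "K", "L", "M"]
french = {"K", "M"}
T = {(f"T_{i}_{p}", p) for p in men for i in range(32)}  # tooth-of relation

t_number = len(T) / len(men)                          # [t]  = 32 teeth per man
f_number = len(french)                                # [f]  = 2
tf_number = len({x for (x, p) in T if p in french})   # [tf] = teeth of Frenchmen

# Since the Frenchmen are a fair sample with respect to dentition:
assert t_number * f_number == tf_number               # 32 * 2 == 64
```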

Commentary Note 11.21

One more example and one more general observation, and then we will be all caught up with our homework on Peirce's "number of" function.

So if men are just as apt to be black as things in general:

[m,][b] = [m,b]

where the difference between [m] and [m,] must not be overlooked.

(Peirce, CP 3.76).

The protasis, "men are just as apt to be black as things in general", is elliptic in structure, and presents us with a potential ambiguity. If we had no further clue to its meaning, it might be read as either of the following:

Men are just as apt to be black as things in general are apt to be black.
Men are just as apt to be black as men are apt to be things in general.

The second interpretation, if grammatical, is pointless to state, since it equates a proper contingency with an absolute certainty.

So I think it is safe to assume this paraphrase of what Peirce intends:

Men are just as likely to be black as things in general are likely to be black.

Stated in terms of the conditional probability:

P(b|m) = P(b)

From the definition of conditional probability:

P(b|m) = P(b & m)/P(m)

Equivalently:

P(b & m) = P(b|m)P(m)

Thus we may derive the equivalent statement:

P(b & m) = P(b|m)P(m) = P(b)P(m)

And this, of course, is the definition of independent events, as applied to the event of being Black and the event of being a Man.

It seems like a likely guess, then, that this is the content of Peirce's statement about frequencies, [m,b] = [m,][b], in this case normalized to produce the equivalent statement about probabilities: P(m & b) = P(m)P(b).

Let's see if this checks out.

Let n be the number of things in general, in Peirce's lingo, n = [1]. On the assumption that m and b are associated with independent events, we get [m,b] = P(m & b)n = P(m)P(b)n = P(m)[b] = [m,][b], so we have to interpret [m,] = "the average number of men per things in general" as P(m) = the probability of a thing in general being a man. Seems okay.
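That chain of identities can be rehearsed numerically. The population size and probabilities below are invented purely for illustration; the point is only that [m,b] = [m,][b] falls out of independence when [m,] is read as P(m).

```python
import math

# [m,b] = P(m & b)·n = P(m)·P(b)·n = P(m)·[b] = [m,][b],
# under the assumption that m and b are independent events.
n = 100                     # number of things in general, n = [1]
P_m, P_b = 0.4, 0.1         # assumed independent

m_comma = P_m               # [m,] = average number of men per thing = P(m)
b_count = P_b * n           # [b]
mb_count = P_m * P_b * n    # [m,b] under independence

assert math.isclose(mb_count, m_comma * b_count)
```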

Commentary Note 11.22

Let's look at that last example from a different angle.

So if men are just as apt to be black as things in general:

[m,][b] = [m,b]

where the difference between [m] and [m,] must not be overlooked.

(Peirce, CP 3.76).

In different lights the formula [m,b] = [m,][b] presents itself as an "aimed arrow", "fair sample", or "independence" condition.

The example apparently assumes a universe of "things in general", encompassing among other things the denotations of the absolute terms m = "man" and b = "black". That suggests to me that we might well illustrate this case in relief, by returning to our earlier staging of 'Othello' and seeing how well that universe of dramatic discourse observes the premiss that "men are just as apt to be black as things in general".

Here are the relevant data:

1 = B +, C +, D +, E +, I +, J +, O
b = O
m = C +, I +, J +, O
1, = B:B +, C:C +, D:D +, E:E +, I:I +, J:J +, O:O
b, = O:O
m, = C:C +, I:I +, J:J +, O:O

The "fair sampling" or "episkeptral arrow" condition is tantamount to this: "Men are just as apt to be black as things in general are apt to be black". In other words, men are a fair sample of things in general with respect to the factor of being black.

Should this hold, the consequence would be:

[m,b] = [m,][b].

When [b] is not zero, we obtain the result:

[m,] = [m,b]/[b].

Once again, the absolute term b = "black" is most felicitously depicted by way of its idempotent representation ‘b’ = b, = "black that is ---", and thus it can be taken as a selective from the universe of discourse.

Here is the bigraph for the composition:

m,b = "man that is black",

here represented in the equivalent form:

m,b, = "man that is black that is ---".

B   C   D   E   I   J   O
o   o   o   o   o   o   o   1
    |           |   |   |
    |           |   |   |   m,
    |           |   |   |
o   o   o   o   o   o   o   1
                        |
                        |   b,
                        |
o   o   o   o   o   o   o   1
B   C   D   E   I   J   O

Thus we observe one of the more factitious facts that hold in this universe of discourse, namely:

m,b = b.

Another way of saying that is:

b –< m.

That in itself is enough to puncture any notion that b and m are statistically independent, but let us continue to develop the plot a bit more.

Putting all of the general formulas and particular facts together, we arrive at the following summation of the situation in the Othello case:

If the fair sampling condition holds:

[m,] = [m,b]/[b] = [b]/[b] = `1`.

In fact, however, it is the case that:

[m,] = [m,1]/[1] = [m]/[1] = 4/7.

In sum, it is not the case in the Othello example that "men are just as apt to be black as things in general".

Expressed in terms of probabilities: P(m) = 4/7 and P(b) = 1/7.

If these were independent we'd have: P(mb) = 4/49.

On the contrary, P(mb) = P(b) = 1/7.

Another way to see it is as follows: P(b|m) = 1/4 while P(b) = 1/7.
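All of these Othello figures can be verified in a few lines; exact fractions keep the probabilities honest. The universe and the terms m and b are as given in the text.

```python
from fractions import Fraction

# The Othello universe of discourse and the terms m = "man", b = "black".
universe = {"B", "C", "D", "E", "I", "J", "O"}
m = {"C", "I", "J", "O"}
b = {"O"}

P = lambda s: Fraction(len(s), len(universe))

assert P(m) == Fraction(4, 7) and P(b) == Fraction(1, 7)
assert P(m & b) == Fraction(1, 7)                      # since b –< m
assert P(m & b) != P(m) * P(b)                         # 1/7 != 4/49: dependent
assert Fraction(len(m & b), len(m)) == Fraction(1, 4)  # P(b|m) = 1/4 != P(b)
```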

Commentary Note 11.23

Let me try to sum up as succinctly as possible the lesson that we ought to take away from Peirce's last "number of" example, since I know that the account that I have given of it so far may appear to have wandered rather widely.

So if men are just as apt to be black as things in general:

[m,][b] = [m,b]

where the difference between [m] and [m,] must not be overlooked.

C.S. Peirce, CP 3.76

In different lights the formula [m,b] = [m,][b] presents itself as an "aimed arrow", "fair sample", or "independence" condition. I had taken the tack of illustrating this polymorphous theme in bas relief, that is, via detour through a universe of discourse where it fails. Here's a brief reminder of the Othello example:

B   C   D   E   I   J   O
o   o   o   o   o   o   o   1
    |           |   |   |
    |           |   |   |   m,
    |           |   |   |
o   o   o   o   o   o   o   1
                        |
                        |   b,
                        |
o   o   o   o   o   o   o   1
B   C   D   E   I   J   O

The condition, "men are just as apt to be black as things in general", is expressible in terms of conditional probabilities as P(b|m) = P(b), written out, the probability of the event Black given the event Male is exactly equal to the unconditional probability of the event Black.

Thus, for example, it is sufficient to observe in the Othello setting that P(b|m) = 1/4 while P(b) = 1/7 in order to cognize the dependency, and thereby to tell that the ostensible arrow is anaclinically biased.

This reduction of a conditional probability to an absolute probability, in the form P(A|Z) = P(A), is a familiar disguise, and yet in practice one of the ways that we most commonly come to recognize the condition of independence P(AZ) = P(A)P(Z), via the definition of a conditional probability according to the rule P(A|Z) = P(AZ)/P(Z). To recall the familiar consequences, the definition of conditional probability plus the independence condition yields P(A|Z) = P(AZ)/P(Z) = P(A)P(Z)/P(Z), to wit, P(A|Z) = P(A).

As Hamlet discovered, there's a lot to be learned from turning a crank.

Commentary Note 11.24

And so we come to the end of the "number of" examples that we found on our agenda at this point in the text:

It is to be observed that:

[!1!] = `1`.

Boole was the first to show this connection between logic and probabilities. He was restricted, however, to absolute terms. I do not remember having seen any extension of probability to relatives, except the ordinary theory of expectation.

Our logical multiplication, then, satisfies the essential conditions of multiplication, has a unity, has a conception similar to that of admitted multiplications, and contains numerical multiplication as a case under it.

C.S. Peirce, CP 3.76

There appears to be a problem with the printing of the text at this point. Let us first recall the conventions that I am using in this transcription: `1` for the "antique 1" that Peirce defines as !1! = "something", and !1! for the "bold 1" that signifies the ordinary 2-identity relation.

CP 3 gives [!1!] = `1`, which I cannot make any sense of. CE 2 gives [!1!] = 1 , which makes sense on the reading of "1" as denoting the natural number 1, and not as the absolute term "1" that denotes the universe of discourse. On this reading, [!1!] is the average number of things related by the identity relation !1! to one individual, and so it makes sense that [!1!] = 1 : N, where N is the set or the type of the natural numbers {0, 1, 2, …}.

With respect to the 2-identity !1! in the syntactic domain S and the number 1 in the non-negative integers N ⊂ R, we have:

v!1! = [!1!] = 1.

And so the "number of" mapping v : S → R has another one of the properties that would be required of an arrow S → R.

The manner in which these arrows and qualified arrows help us to construct a suspension bridge that unifies logic, semiotics, statistics, stochastics, and information theory will be one of the main themes that I aim to elaborate throughout the rest of this inquiry.

Selection 12

The Sign of Involution

I shall take involution in such a sense that x^y will denote everything which is an x for every individual of y.

Thus

'l'^w

will be a lover of every woman.

Then

('s''l')^w

will denote whatever stands to every woman in the relation of servant of every lover of hers;

and

's'('l'^w)

will denote whatever is a servant of everything that is lover of a woman.

So that

('s''l')^w = 's'('l'^w).

(C.S. Peirce, CP 3.77).

Commentary Note 12

Let us make a few preliminary observations about the "logical sign of involution", as Peirce uses it here:

The Sign of Involution

I shall take involution in such a sense that x^y will denote everything which is an x for every individual of y.

Thus

'l'^w

will be a lover of every woman.

(C.S. Peirce, CP 3.77).

In arithmetic, the "involution" x^y, or the "exponentiation" of x to the power of y, is the iterated multiplication of the factor x, repeated as many times as there are ones making up the exponent y.

In analogous fashion, 'l'^w is the iterated multiplication of 'l', repeated as many times as there are individuals under the term w.

For example, suppose that the universe of discourse has, among other things, just the three women, W1, W2, W3. This could be expressed in Peirce's notation by writing:

w = W1 +, W2 +, W3.

In this setting, we would have:

'l'^w = 'l'^(W1 +, W2 +, W3) = ('l'^W1),('l'^W2),('l'^W3).

That is, a lover of every woman in the universe of discourse would be a lover of W1 and a lover of W2 and a lover of W3.
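Set-theoretically, involution picks out the terms standing in the given relation to every individual under the exponent. A minimal sketch, with an invented lover relation over the three women of the example:

```python
# 'l'^w: everything that is a lover of *every* individual under w.
W = {"W1", "W2", "W3"}                       # the women in the universe
L = {("A", "W1"), ("A", "W2"), ("A", "W3"),  # A loves every woman
     ("B", "W1")}                            # B loves only W1

lovers = {x for (x, y) in L}
l_to_the_w = {x for x in lovers if all((x, w) in L for w in W)}

assert l_to_the_w == {"A"}   # only A is a lover of every woman
```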

References

  • Boole, George (1854), An Investigation of the Laws of Thought, On Which are Founded the Mathematical Theories of Logic and Probabilities, Macmillan, 1854. Reprinted, Dover Publications, New York, NY, 1958.

Bibliography

  • Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. Cited as (CP volume.paragraph).
  • Peirce, C.S., Writings of Charles S. Peirce : A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1981–. Cited as (CE volume, page).

See also

Aficionados


