First, <math>L\!</math> can be associated with a logical predicate or a proposition that says something about the space of effects, being true of certain effects and false of all others.  In this way, <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> can be interpreted as naming a function from <math>\textstyle\prod_i X_i</math> to the domain of truth values <math>\mathbb{B} = \{ 0, 1 \}.</math>  With the appropriate understanding, it is permissible to let the notation <math>{}^{\backprime\backprime} L : X_1 \times \ldots \times X_k \to \mathbb{B} {}^{\prime\prime}</math> indicate this interpretation.

Second, <math>L\!</math> can be associated with a piece of information that allows one to complete various sorts of partial data sets in the space of effects.  In particular, if one is given a partial effect or an incomplete <math>k\!</math>-tuple, say, one that is missing a value in the <math>j^\text{th}\!</math> place, as indicated by the notation <math>{}^{\backprime\backprime} (x_1, \ldots, \hat{j}, \ldots, x_k) {}^{\prime\prime},</math> then <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> can be interpreted as naming a function from the cartesian product of the domains at the filled places to the power set of the domain at the missing place.  With this in mind, it is permissible to let <math>{}^{\backprime\backprime} L : X_1 \times \ldots \times \hat{j} \times \ldots \times X_k \to \mathrm{Pow}(X_j) {}^{\prime\prime}</math> indicate this use of <math>{}^{\backprime\backprime} L {}^{\prime\prime}.</math>  If the sets in the range of this function are all singletons, then it is permissible to let <math>{}^{\backprime\backprime} L : X_1 \times \ldots \times \hat{j} \times \ldots \times X_k \to X_j {}^{\prime\prime}</math> specify the corresponding use of <math>{}^{\backprime\backprime} L {}^{\prime\prime}.</math>

In general, the indicated degrees of freedom in the interpretation of relation symbols can be exploited properly only if one understands the consequences of this interpretive liberality and is prepared to deal with the problems that arise from its &ldquo;polymorphic&rdquo; practices &mdash; from using the same sign in different contexts to refer to different types of objects.  For example, one should consider what happens, and what sort of IF is demanded to deal with it, when the name <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> is used equivocally in a statement like <math>L = L^{-1}(1),\!</math> where a sensible reading requires it to denote the relational set <math>L \subseteq \textstyle\prod_i X_i</math> on the first appearance and the propositional function <math>L : \textstyle\prod_i X_i \to \mathbb{B}</math> on the second appearance.

A '''triadic relation''' is a relation on an ordered triple of nonempty sets.  Thus, <math>L\!</math> is a triadic relation on <math>(X, Y, Z)\!</math> if and only if <math>L \subseteq X \times Y \times Z.\!</math>  Exercising a proper degree of flexibility with notation, one can use the name of a triadic relation <math>L \subseteq X \times Y \times Z\!</math> to refer to a logical predicate or a propositional function, of the type <math>X \times Y \times Z \to \mathbb{B},\!</math> or any one of the derived binary operations, of the three types <math>X \times Y \to \mathrm{Pow}(Z),\!</math> <math>X \times Z \to \mathrm{Pow}(Y),\!</math> and <math>Y \times Z \to \mathrm{Pow}(X).\!</math>
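
The following sketch in Python is an illustrative assumption rather than part of the text: the sets and triples are made up for the occasion.  It shows one and the same triadic relation <math>L \subseteq X \times Y \times Z\!</math> read as a propositional function of type <math>X \times Y \times Z \to \mathbb{B}\!</math> and as a derived operation of type <math>X \times Y \to \mathrm{Pow}(Z),\!</math> and it recovers <math>L\!</math> as the fiber <math>L^{-1}(1)\!</math> of that proposition.

<syntaxhighlight lang="python">
# Hypothetical triadic relation L on X x Y x Z, given as a set of triples.
X, Y, Z = {1, 2}, {1, 2}, {1, 2, 3}
L = {(1, 1, 2), (1, 2, 3), (2, 1, 3), (2, 2, 1)}

def proposition(x, y, z):
    """Read L as a map X x Y x Z -> B = {0, 1}."""
    return int((x, y, z) in L)

def complete_third(x, y):
    """Read L as a map X x Y -> Pow(Z), filling in the missing third place."""
    return {z for z in Z if (x, y, z) in L}

# The relational set L is recovered as the fiber of 1 under the proposition,
# which is the sense in which one writes L = L^{-1}(1).
fiber = {(x, y, z) for x in X for y in Y for z in Z if proposition(x, y, z) == 1}
assert fiber == L

print(proposition(1, 1, 2))   # 1
print(complete_third(1, 1))   # {2}
</syntaxhighlight>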
   −
A '''binary operation''' or '''law of composition''' (LOC) on a nonempty set <math>X\!</math> is a triadic relation <math>* \subseteq X \times X \times X\!</math> that is also a function <math>* : X \times X \to X.\!</math>  The notation <math>{}^{\backprime\backprime} x * y {}^{\prime\prime}\!</math> is used to indicate the functional value <math>*(x, y) \in X,~\!</math> which is also referred to as the '''product''' of <math>x\!</math> and <math>y\!</math> under <math>*.\!</math>

A binary operation or LOC <math>*\!</math> on <math>X\!</math> is '''associative''' if and only if <math>(x*y)*z = x*(y*z)\!</math> for every <math>x, y, z \in X.\!</math>

A '''monoid''' is a semigroup with a unit element.  Formally, a monoid <math>\underline{X}\!</math> is an ordered triple <math>(X, *, e),\!</math> where <math>X\!</math> is a set, <math>*\!</math> is an associative LOC on the set <math>X,\!</math> and <math>e\!</math> is the unit element in the semigroup <math>(X, *).\!</math>
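
As a computational restatement of the last few definitions, the following sketch tests a LOC presented by its table for associativity and searches for a unit element; the particular operation, addition modulo 4 on a four-element set, is only an assumed example.

<syntaxhighlight lang="python">
from itertools import product

# Assumed example of a LOC on X = {0, 1, 2, 3}: addition modulo 4,
# presented extensionally as a table of products.
X = [0, 1, 2, 3]
op = {(x, y): (x + y) % 4 for x, y in product(X, X)}

def is_associative(op, X):
    """Check that (x*y)*z == x*(y*z) for every x, y, z in X."""
    return all(op[op[x, y], z] == op[x, op[y, z]] for x, y, z in product(X, X, X))

def unit_elements(op, X):
    """Return the elements e with e*x == x == x*e for every x in X."""
    return [e for e in X if all(op[e, x] == x == op[x, e] for x in X)]

print(is_associative(op, X))   # True, so (X, op) is a semigroup
print(unit_elements(op, X))    # [0],  so (X, op, 0) is a monoid
</syntaxhighlight>
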
An '''inverse''' of an element <math>x\!</math> in a monoid <math>\underline{X} = (X, *, e)\!</math> is an element <math>y \in X\!</math> such that <math>x*y = e = y*x.\!</math>  An element that has an inverse in <math>\underline{X}\!</math> is said to be '''invertible''' (relative to <math>*\!</math> and <math>e\!</math>).  If <math>x\!</math> has an inverse in <math>{\underline{X}},\!</math> then it is unique to <math>x.\!</math>  To see this, suppose that <math>y'\!</math> is also an inverse of <math>x.\!</math>  Then it follows that:

{| align="center" cellspacing="8" width="90%"
| <math>y' ~=~ y' * e ~=~ y' * (x * y) ~=~ (y' * x) * y ~=~ e * y ~=~ y.\!</math>
|}

It is customary to use a number of abbreviations and conventions in discussing semigroups, monoids, and groups.  A system <math>\underline{X} = (X, *)\!</math> is given the adjective ''commutative'' if and only if <math>*\!</math> is commutative.  Commutative groups, however, are traditionally called ''abelian groups''.  By way of making comparisons with familiar systems and operations, the following usages are also common.

One says that <math>\underline{X}\!</math> is '''written multiplicatively''' to mean that a raised dot <math>{(\cdot)}\!</math> or concatenation is used instead of a star for the LOC.  In this case, the unit element is commonly written as an ordinary algebraic one, <math>1,\!</math> while the inverse of an element <math>x\!</math> is written as <math>x^{-1}.\!</math>  The multiplicative manner of presentation is the one that is usually taken by default in the most general types of situations.  In the multiplicative idiom, the following definitions of ''powers'', ''cyclic groups'', and ''generators'' are also common.
    
: In a semigroup, the <math>n^\text{th}\!</math> '''power''' of an element <math>x\!</math> is notated as <math>x^n\!</math> and defined for every positive integer <math>n\!</math> in the following manner.  Proceeding recursively, let <math>x^1 = x\!</math> and let <math>x^n = x^{n-1} \cdot x\!</math> for all <math>n > 1.\!</math>

To sum up the development so far in a general way:  A ''homomorphism'' is a mapping from a system to a system that preserves an aspect of systematic structure, usually one that is relevant to an understood purpose or context.  When the pertinent aspect of structure for both the source and the target system is a binary operation or a LOC, then the condition that the LOCs be preserved in passing from the pre-image to the image of the mapping is frequently expressed by stating that ''the image of the product is the product of the images''.  That is, if <math>h : X_1 \to X_2\!</math> is a homomorphism from <math>{\underline{X}_1 = (X_1, *_1)}\!</math> to <math>{\underline{X}_2 = (X_2, *_2)},\!</math> then for every <math>x, y \in X_1\!</math> the following condition holds:
    
{| align="center" cellspacing="8" width="90%"
| <math>h(x *_1 y) ~=~ h(x) *_2 h(y).\!</math>
|}

Next, the concept of a homomorphism or ''structure-preserving map'' is specialized to the different kinds of structure of interest here.

A '''semigroup homomorphism''' from a semigroup <math>{\underline{X}_1 = (X_1, *_1)}\!</math> to a semigroup <math>{\underline{X}_2 = (X_2, *_2)}\!</math> is a mapping between the underlying sets that preserves the structure appropriate to semigroups, namely, the LOCs.  This makes it a map <math>h : X_1 \to X_2\!</math> whose induced action on the LOCs is such that it takes every element of <math>*_1\!</math> to an element of <math>*_2.\!</math>  That is:

{| align="center" cellspacing="8" width="90%"
| <math>(x, y, z) \in *_1 ~\Rightarrow~ (h(x), h(y), h(z)) \in *_2.\!</math>
|}

Finally, to introduce two pieces of language that are often useful:  an '''endomorphism''' is a homomorphism from a system into itself, while an '''automorphism''' is an isomorphism from a system onto itself.
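
Read computationally, the homomorphism condition is something that can be tested exhaustively on finite operation tables.  The sketch below is only an assumed illustration, reducing the integers modulo 4 onto the integers modulo 2, both under addition; it checks that the image of every product is the product of the images.

<syntaxhighlight lang="python">
from itertools import product

# Assumed example: h carries Z modulo 4 under addition onto Z modulo 2 under addition,
# by reducing each element modulo 2.
X1 = [0, 1, 2, 3]
X2 = [0, 1]
op1 = {(x, y): (x + y) % 4 for x, y in product(X1, X1)}
op2 = {(x, y): (x + y) % 2 for x, y in product(X2, X2)}

def h(x):
    return x % 2

def is_homomorphism(h, op1, op2, X1):
    """Check that the image of the product is the product of the images."""
    return all(h(op1[x, y]) == op2[h(x), h(y)] for x, y in product(X1, X1))

print(is_homomorphism(h, op1, op2, X1))   # True
</syntaxhighlight>
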
If nothing more succinct is available, a group can be specified by means of its ''operation table'', usually styled either as a ''multiplication table'' or an ''addition table''.  Table&nbsp;32.1 illustrates the general scheme of a group operation table.  In this case the group operation, treated as a &ldquo;multiplication&rdquo;, is formally symbolized by a star <math>(*),\!</math> as in <math>x * y = z.\!</math>  In contexts where only algebraic operations are formalized it is common practice to omit the star, but when logical conjunctions (symbolized by a raised dot <math>{(\cdot)}\!</math> or by concatenation) appear in the same context, then the star is retained for the group operation.

Another way of approaching the study or presenting the structure of a group is by means of a ''group representation'', in particular, one that represents the group in the special form of a ''transformation group''.  This is a set of transformations acting on a concrete space of &ldquo;points&rdquo; or a designated set of &ldquo;objects&rdquo;.  In providing an abstractly given group with a representation as a transformation group, one is seeking to know the group by its effects, that is, in terms of the action it induces, through the representation, on a concrete domain of objects.  In the type of representation known as a ''regular representation'', one is seeking to know the group by its effects on itself.

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 32.1} ~~ \text{Scheme of a Group Operation Table}\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>*\!</math>
|}

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 32.2} ~~ \text{Scheme of the Regular Ante-Representation}\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
|}

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:80%"
|+ style="height:30px" |
<math>\text{Table 32.3} ~~ \text{Scheme of the Regular Post-Representation}\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
|}

For the sake of comparison, I give a discussion of both these groups.

The next series of Tables presents the group operations and regular representations for the groups <math>V_4\!</math> and <math>Z_4.\!</math>  If a group is abelian, as both of these groups are, then its <math>h_1\!</math> and <math>h_2\!</math> representations are indistinguishable, and a single form of regular representation <math>{h : G \to (G \to G)}\!</math> will do for both.
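
The regular representation can be generated mechanically from an operation table.  The following Python sketch is a hypothetical transcription, not part of the original text: it encodes the multiplication of <math>V_4\!</math> exactly as it appears in Table&nbsp;33.1 below and prints, for each element, the set of ordered pairs that Table&nbsp;33.2 lists as its functional representative.

<syntaxhighlight lang="python">
# A sketch of the regular representation, using the multiplication of V_4
# as tabulated in Table 33.1 below (e is the unit; f, g, h are the other elements).
G = ['e', 'f', 'g', 'h']
mult = {
    ('e', 'e'): 'e', ('e', 'f'): 'f', ('e', 'g'): 'g', ('e', 'h'): 'h',
    ('f', 'e'): 'f', ('f', 'f'): 'e', ('f', 'g'): 'h', ('f', 'h'): 'g',
    ('g', 'e'): 'g', ('g', 'f'): 'h', ('g', 'g'): 'e', ('g', 'h'): 'f',
    ('h', 'e'): 'h', ('h', 'f'): 'g', ('h', 'g'): 'f', ('h', 'h'): 'e',
}

def regular_representation(g):
    """Represent the element g by its action on the whole group,
    given as a set of ordered pairs (x, g*x)."""
    return {(x, mult[g, x]) for x in G}

for g in G:
    print(g, sorted(regular_representation(g)))
# The line for 'f' reads [('e', 'f'), ('f', 'e'), ('g', 'h'), ('h', 'g')],
# matching the row for f in Table 33.2.
</syntaxhighlight>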
    
Table&nbsp;33.1 shows the multiplication table of the group <math>V_4,\!</math> while Tables&nbsp;33.2 and 33.3 present two versions of its regular representation.  The first version, somewhat hastily, gives the functional representation of each group element as a set of ordered pairs of group elements.  The second version, more circumspectly, gives the functional representative of each group element as a set of ordered pairs of element names, also referred to as ''objects'', ''points'', ''letters'', or ''symbols''.

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 33.1} ~~ \text{Multiplication Operation of the Group} ~ V_4\!</math>
|- style="height:50px"
| width="20%" style="border-bottom:1px solid black; border-right:1px solid black" | <math>\cdot\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{e}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{f}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{g}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{h}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{e}\!</math>
| <math>\mathrm{e}\!</math>
| <math>\mathrm{f}\!</math>
| <math>\mathrm{g}\!</math>
| <math>\mathrm{h}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{f}\!</math>
| <math>\mathrm{f}\!</math>
| <math>\mathrm{e}\!</math>
| <math>\mathrm{h}\!</math>
| <math>\mathrm{g}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{g}\!</math>
| <math>\mathrm{g}\!</math>
| <math>\mathrm{h}\!</math>
| <math>\mathrm{e}\!</math>
| <math>\mathrm{f}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{h}\!</math>
| <math>\mathrm{h}\!</math>
| <math>\mathrm{g}\!</math>
| <math>\mathrm{f}\!</math>
| <math>\mathrm{e}\!</math>
|}
   Line 1,082: Line 1,086:     
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 33.2} ~~ \text{Regular Representation of the Group} ~ V_4\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Elements}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{e}\!</math>
| width="4%"  | <math>\{\!</math>
| width="16%" | <math>(\mathrm{e}, \mathrm{e}),\!</math>
| width="20%" | <math>(\mathrm{f}, \mathrm{f}),\!</math>
| width="20%" | <math>(\mathrm{g}, \mathrm{g}),\!</math>
| width="16%" | <math>(\mathrm{h}, \mathrm{h})\!</math>
| width="4%"  | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{f}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{e}, \mathrm{f}),\!</math>
| <math>(\mathrm{f}, \mathrm{e}),\!</math>
| <math>(\mathrm{g}, \mathrm{h}),\!</math>
| <math>(\mathrm{h}, \mathrm{g})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{g}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{e}, \mathrm{g}),\!</math>
| <math>(\mathrm{f}, \mathrm{h}),\!</math>
| <math>(\mathrm{g}, \mathrm{e}),\!</math>
| <math>(\mathrm{h}, \mathrm{f})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{h}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{e}, \mathrm{h}),\!</math>
| <math>(\mathrm{f}, \mathrm{g}),\!</math>
| <math>(\mathrm{g}, \mathrm{f}),\!</math>
| <math>(\mathrm{h}, \mathrm{e})\!</math>
| <math>\}\!</math>
|}

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 33.3} ~~ \text{Regular Representation of the Group} ~ V_4\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Symbols}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{e}\!</math>
| width="4%"  | <math>\{\!</math>
| width="16%" | <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!</math>
| width="20%" | <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!</math>
| width="20%" | <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!</math>
| width="16%" | <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime})\!</math>
| width="4%"  | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{f}\!</math>
| <math>\{\!</math>
| <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{g}\!</math>
| <math>\{\!</math>
| <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{h}\!</math>
| <math>\{\!</math>
| <math>({}^{\backprime\backprime}\text{e}{}^{\prime\prime}, {}^{\backprime\backprime}\text{h}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{f}{}^{\prime\prime}, {}^{\backprime\backprime}\text{g}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{g}{}^{\prime\prime}, {}^{\backprime\backprime}\text{f}{}^{\prime\prime}),\!</math>
| <math>({}^{\backprime\backprime}\text{h}{}^{\prime\prime}, {}^{\backprime\backprime}\text{e}{}^{\prime\prime})\!</math>
| <math>\}\!</math>
|}

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 34.1} ~~ \text{Multiplicative Presentation of the Group} ~ Z_4(\cdot)~\!</math>
|- style="height:50px"
| width="20%" style="border-bottom:1px solid black; border-right:1px solid black" | <math>\cdot\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{1}</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{a}</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{b}</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{c}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{1}</math>
| <math>\mathrm{1}</math>
| <math>\mathrm{a}</math>
| <math>\mathrm{b}</math>
| <math>\mathrm{c}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{a}</math>
| <math>\mathrm{a}</math>
| <math>\mathrm{b}</math>
| <math>\mathrm{c}</math>
| <math>\mathrm{1}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{b}</math>
| <math>\mathrm{b}</math>
| <math>\mathrm{c}</math>
| <math>\mathrm{1}</math>
| <math>\mathrm{a}</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{c}</math>
| <math>\mathrm{c}</math>
| <math>\mathrm{1}</math>
| <math>\mathrm{a}</math>
| <math>\mathrm{b}</math>
|}
   Line 1,232: Line 1,239:     
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 34.2} ~~ \text{Regular Representation of the Group} ~ Z_4(\cdot)\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Elements}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{1}\!</math>
| width="4%"  | <math>\{\!</math>
| width="16%" | <math>(\mathrm{1}, \mathrm{1}),\!</math>
| width="20%" | <math>(\mathrm{a}, \mathrm{a}),\!</math>
| width="20%" | <math>(\mathrm{b}, \mathrm{b}),\!</math>
| width="16%" | <math>(\mathrm{c}, \mathrm{c})\!</math>
| width="4%"  | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{a}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{1}, \mathrm{a}),\!</math>
| <math>(\mathrm{a}, \mathrm{b}),\!</math>
| <math>(\mathrm{b}, \mathrm{c}),\!</math>
| <math>(\mathrm{c}, \mathrm{1})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{b}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{1}, \mathrm{b}),\!</math>
| <math>(\mathrm{a}, \mathrm{c}),\!</math>
| <math>(\mathrm{b}, \mathrm{1}),\!</math>
| <math>(\mathrm{c}, \mathrm{a})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{c}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{1}, \mathrm{c}),\!</math>
| <math>(\mathrm{a}, \mathrm{1}),\!</math>
| <math>(\mathrm{b}, \mathrm{a}),\!</math>
| <math>(\mathrm{c}, \mathrm{b})\!</math>
| <math>\}\!</math>
|}

{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 35.1} ~~ \text{Additive Presentation of the Group} ~ Z_4(+)\!</math>
|- style="height:50px"
| width="20%" style="border-bottom:1px solid black; border-right:1px solid black" | <math>+\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{0}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{1}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{2}\!</math>
| width="20%" style="border-bottom:1px solid black" | <math>\mathrm{3}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{0}\!</math>
| <math>\mathrm{0}\!</math>
| <math>\mathrm{1}\!</math>
| <math>\mathrm{2}\!</math>
| <math>\mathrm{3}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{1}\!</math>
| <math>\mathrm{1}\!</math>
| <math>\mathrm{2}\!</math>
| <math>\mathrm{3}\!</math>
| <math>\mathrm{0}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{2}\!</math>
| <math>\mathrm{2}\!</math>
| <math>\mathrm{3}\!</math>
| <math>\mathrm{0}\!</math>
| <math>\mathrm{1}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{3}\!</math>
| <math>\mathrm{3}\!</math>
| <math>\mathrm{0}\!</math>
| <math>\mathrm{1}\!</math>
| <math>\mathrm{2}\!</math>
|}
   Line 1,309: Line 1,318:     
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center; width:60%"
|+ style="height:30px" |
<math>\text{Table 35.2} ~~ \text{Regular Representation of the Group} ~ Z_4(+)\!</math>
|- style="height:50px"
| style="border-bottom:1px solid black; border-right:1px solid black" | <math>\text{Element}\!</math>
| colspan="6" style="border-bottom:1px solid black" | <math>\text{Function as Set of Ordered Pairs of Elements}\!</math>
|- style="height:50px"
| width="20%" style="border-right:1px solid black" | <math>\mathrm{0}\!</math>
| width="4%"  | <math>\{\!</math>
| width="16%" | <math>(\mathrm{0}, \mathrm{0}),\!</math>
| width="20%" | <math>(\mathrm{1}, \mathrm{1}),\!</math>
| width="20%" | <math>(\mathrm{2}, \mathrm{2}),\!</math>
| width="16%" | <math>(\mathrm{3}, \mathrm{3})~\!</math>
| width="4%"  | <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{1}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{0}, \mathrm{1}),\!</math>
| <math>(\mathrm{1}, \mathrm{2}),\!</math>
| <math>(\mathrm{2}, \mathrm{3}),\!</math>
| <math>(\mathrm{3}, \mathrm{0})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{2}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{0}, \mathrm{2}),\!</math>
| <math>(\mathrm{1}, \mathrm{3}),\!</math>
| <math>(\mathrm{2}, \mathrm{0}),\!</math>
| <math>(\mathrm{3}, \mathrm{1})\!</math>
| <math>\}\!</math>
|- style="height:50px"
| style="border-right:1px solid black" | <math>\mathrm{3}\!</math>
| <math>\{\!</math>
| <math>(\mathrm{0}, \mathrm{3}),\!</math>
| <math>(\mathrm{1}, \mathrm{0}),\!</math>
| <math>(\mathrm{2}, \mathrm{1}),\!</math>
| <math>(\mathrm{3}, \mathrm{2})\!</math>
| <math>\}\!</math>
|}
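
The multiplicative and additive presentations describe one and the same abstract group, and that can be checked mechanically.  The following sketch is a hypothetical verification: it assumes the natural correspondence <math>\mathrm{1} \mapsto 0,\!</math> <math>\mathrm{a} \mapsto 1,\!</math> <math>\mathrm{b} \mapsto 2,\!</math> <math>\mathrm{c} \mapsto 3\!</math> and confirms that this map carries the multiplication of Table&nbsp;34.1 onto the addition of Table&nbsp;35.1.

<syntaxhighlight lang="python">
from itertools import product

# Hypothetical check that the multiplicative presentation (Table 34.1) and the
# additive presentation (Table 35.1) of Z_4 describe the same abstract group,
# under the assumed correspondence 1 -> 0, a -> 1, b -> 2, c -> 3.
mult = {
    ('1', '1'): '1', ('1', 'a'): 'a', ('1', 'b'): 'b', ('1', 'c'): 'c',
    ('a', '1'): 'a', ('a', 'a'): 'b', ('a', 'b'): 'c', ('a', 'c'): '1',
    ('b', '1'): 'b', ('b', 'a'): 'c', ('b', 'b'): '1', ('b', 'c'): 'a',
    ('c', '1'): 'c', ('c', 'a'): '1', ('c', 'b'): 'a', ('c', 'c'): 'b',
}
phi = {'1': 0, 'a': 1, 'b': 2, 'c': 3}

# The image of every product equals the sum modulo 4 of the images.
print(all(phi[mult[x, y]] == (phi[x] + phi[y]) % 4 for x, y in product(phi, phi)))  # True
</syntaxhighlight>
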
By convention for the case where <math>k = 0,\!</math> this gives <math>\underline{\underline{X}}^0 = \{ () \},</math> that is, the singleton set consisting of the empty sequence.  Depending on the setting, the empty sequence is referred to as the ''empty word'' or the ''empty sentence'', and is commonly denoted by an epsilon <math>{}^{\backprime\backprime} \varepsilon {}^{\prime\prime}</math> or a lambda <math>{}^{\backprime\backprime} \lambda {}^{\prime\prime}.</math>  In this text a variant epsilon symbol will be used for the empty sequence, <math>{\varepsilon = ()}.\!</math>  In addition, a singly underlined epsilon will be used for the language that consists of a single empty sequence, <math>\underline\varepsilon = \{ \varepsilon \} = \{ () \}.</math>
    
It is probably worth remarking at this point that all empty sequences are indistinguishable (in a one-level formal language, that is), and thus all sets that consist of a single empty sequence are identical.  Consequently, <math>\underline{\underline{X}}^0 = \{ () \} = \underline{\varepsilon} = \underline{\underline{Y}}^0,</math> for all resources <math>\underline{\underline{X}}</math> and <math>\underline{\underline{Y}}.</math>  However, the empty language <math>\varnothing = \{ \}</math> and the language that consists of a single empty sequence <math>\underline\varepsilon = \{ \varepsilon \} = \{ () \}</math> need to be distinguished from each other.
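
A small Python sketch, with tuples standing in for sequences as an illustrative assumption, makes the same two points: the zeroth concatenation power of any resource is the singleton language containing the empty sequence, and that language is distinct from the empty language.

<syntaxhighlight lang="python">
from itertools import product

X = ['a', 'b']
Y = ['0', '1', '2']

def power(resource, k):
    """The k-th concatenation power of a resource, modeled as a set of tuples."""
    return set(product(resource, repeat=k))

print(power(X, 0))                  # {()}   the language containing only the empty sequence
print(power(X, 0) == power(Y, 0))   # True   all such languages are identical
print(power(X, 0) == set())         # False  and none of them is the empty language
</syntaxhighlight>
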
By way of definition, a sign <math>q\!</math> in a sign relation <math>L \subseteq O \times S \times I\!</math> is said to be, to constitute, or to make a '''plural indefinite reference''' ('''PIR''') to (every element in) a set of objects, <math>X \subseteq O,\!</math> if and only if <math>q\!</math> denotes every element of <math>X.\!</math>  This relationship can be expressed in a succinct formula by making use of one additional definition.

The '''denotation''' of <math>q\!</math> in <math>L,\!</math> written <math>\mathrm{De}(q, L),\!</math> is defined as follows:
    
{| align="center" cellspacing="8" width="90%"
| <math>\mathrm{De}(q, L) ~=~ \mathrm{Den}(L) \cdot q ~=~ L_{OS} \cdot q ~=~ \{ o \in O : (o, q, i) \in L, ~\text{for some}~ i \in I \}.</math>
|}

Then <math>q\!</math> makes a PIR to <math>X\!</math> in <math>L\!</math> if and only if <math>X \subseteq \mathrm{De}(q, L).\!</math>  Of course, this includes the limiting case where <math>X\!</math> is a singleton, say <math>X = \{ o \}.\!</math>  In this case the reference is neither plural nor indefinite, properly speaking, but <math>q\!</math> denotes <math>o\!</math> uniquely.
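
A small computational sketch may help fix the definitions.  It assumes, purely for illustration, that a sign relation is modeled as a finite set of <math>(o, s, i)\!</math> triples; the function names <code>De</code> and <code>makes_PIR</code> simply transcribe the formula above and the PIR condition.

<pre>
# Sketch: a sign relation as a set of (object, sign, interpretant) triples.

def De(q, L):
    """Denotation of the sign q in the sign relation L: the set of objects o
    such that (o, q, i) belongs to L for some interpretant i."""
    return {o for (o, s, i) in L if s == q}

def makes_PIR(q, L, X):
    """q makes a plural indefinite reference to X in L iff X is a subset of De(q, L)."""
    return set(X) <= De(q, L)

# Toy example with invented objects, signs, and interpretants.
L = {("o1", "q1", "i1"), ("o2", "q1", "i2"), ("o3", "q2", "i3")}
assert De("q1", L) == {"o1", "o2"}
assert makes_PIR("q1", L, {"o1", "o2"})
assert not makes_PIR("q1", L, {"o1", "o3"})
</pre>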
    
The proper exploitation of PIRs in sign relations makes it possible to finesse the distinction between HI signs and HU signs, in other words, to provide a ready means of translating between the two kinds of signs that preserves all the relevant information, at least, for many purposes.  This is accomplished by connecting the sides of the distinction in two directions.  First, a HI sign that makes a PIR to many triples of the form <math>(o, s, i)\!</math> can be taken as tantamount to a HU sign that denotes the corresponding sign relation.  Second, a HU sign that denotes a singleton sign relation can be taken as tantamount to a HI sign that denotes its single triple.  The relation of one sign being &ldquo;tantamount to&rdquo; another is not exactly a full-fledged semantic equivalence, but it is a reasonable approximation to it, and one that serves a number of practical purposes.
 
The intent of this succession, as interpreted in FL environments, is that <math>{}^{\langle\langle} x {}^{\rangle\rangle}\!</math> denotes or refers to <math>{}^{\langle} x {}^{\rangle},\!</math> which denotes or refers to <math>x.\!</math>  Moreover, its computational realization, as implemented in CL environments, is that <math>{}^{\langle\langle} x {}^{\rangle\rangle}\!</math> addresses or evaluates to <math>{}^{\langle} x {}^{\rangle},\!</math> which addresses or evaluates to <math>x.\!</math>
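
A minimal sketch of the computational reading, under the assumption that arch quotation is rendered by wrapping a sign in angle brackets, is the following; the function <code>evaluate</code> is an illustrative stand-in for whatever dereferencing mechanism a CL environment actually provides.

<pre>
def evaluate(sign: str) -> str:
    """Strip one layer of angle-bracket quotation, if any, so that <<x>>
    evaluates to <x> and <x> evaluates to x."""
    if sign.startswith("<") and sign.endswith(">"):
        return sign[1:-1]
    return sign

assert evaluate("<<x>>") == "<x>"
assert evaluate(evaluate("<<x>>")) == "x"
</pre>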
    
The designations ''higher order'' and ''lower order'' are attributed to signs in a casual, local, and transitory way.  At this point they signify nothing beyond the occurrence in a sign relation of a pair of triples having the form shown in Table&nbsp;37.
 
In ordinary discourse HA signs are usually generated by means of linguistic devices for quoting pieces of text.  In computational frameworks these quoting mechanisms are implemented as functions that map syntactic arguments into numerical or syntactic values.  A quoting function, given a sign or expression as its single argument, needs to accomplish two things:  first, to defer the reference of that sign, in other words, to inhibit, delay, or prevent the evaluation of its argument expression, and then, to exhibit or produce another sign whose object is precisely that argument expression.
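
The following sketch illustrates the second task only, producing a sign whose object is the argument expression, under the simplifying assumption that signs are strings and that arch quotation is rendered by angle brackets; the deferral of evaluation is modeled by passing the sign itself rather than its value.

<pre>
def quote(sign: str) -> str:
    """Return a higher ascent sign whose object is the given sign itself,
    rather than whatever the given sign ordinarily denotes."""
    return "<" + sign + ">"

# Toy denotation table: "A" names the interpreter, "<A>" names the sign "A".
denotation = {"A": "the interpreter A", "<A>": "A"}

assert quote("A") == "<A>"
assert denotation[quote("A")] == "A"            # the quoted sign denotes the sign "A"
assert denotation["A"] == "the interpreter A"   # while "A" denotes its ordinary object
</pre>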
 
The rest of this section considers the development of sign relations that have moderate capacities to reference their own signs as objects.  In each case, these extensions are assumed to begin with sign relations like <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> that have disjoint sets of objects and signs and thus have no reflective capacity at the outset.  The status of <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> as the reflective origins of the associated reflective developments is recalled by saying that <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> themselves are the ''zeroth order reflective extensions'' of <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> in symbols, <math>L(\text{A}) = \mathrm{Ref}^0 L(\text{A})\!</math> and <math>L(\text{B}) = \mathrm{Ref}^0 L(\text{B}).\!</math>
    
The next set of Tables illustrates a few of the most common ways that sign relations can begin to develop reflective extensions.  For ease of reference, Tables&nbsp;40 and 41 repeat the contents of Tables&nbsp;1 and 2, respectively, merely replacing ordinary quotes with arch quotes.
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 40.} ~~ \text{Reflective Origin} ~ \mathrm{Ref}^0 L(\text{A})\!</math>
|}

{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 41.} ~~ \text{Reflective Origin} ~ \mathrm{Ref}^0 L(\text{B})\!</math>
|}
<br>
 
Tables&nbsp;42 and 43 show one way that the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> can be extended in a reflective sense through the use of quotational devices, yielding the ''first order reflective extensions'', <math>\mathrm{Ref}^1 L(\text{A})\!</math> and <math>\mathrm{Ref}^1 L(\text{B}).\!</math>  These extensions add one layer of HA signs and their objects to the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> respectively.  The new triples specify that, for each <math>{}^{\langle} x {}^{\rangle}\!</math> in the set <math>\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \},\!</math> the HA sign of the form <math>{}^{\langle\langle} x {}^{\rangle\rangle}\!</math> connotes itself while denoting <math>{}^{\langle} x {}^{\rangle}.\!</math>
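
One way to realize this step computationally is sketched below, assuming the same representation of a sign relation as a set of <math>(o, s, i)\!</math> triples and rendering arch quotation by angle brackets; the sample triples are a toy fragment, not the full contents of the Tables.

<pre>
def arch(sign: str) -> str:
    """Render the arch quotation of a sign by wrapping it in angle brackets."""
    return "<" + sign + ">"

def ref1(L):
    """First order reflective extension: for every sign s of L, add a triple in
    which the higher ascent sign arch(s) denotes s and connotes itself."""
    signs = {s for (o, s, i) in L}
    return set(L) | {(s, arch(s), arch(s)) for s in signs}

# Toy fragment only, not the full sign relation L(A).
LA = {("A", "<A>", "<A>"), ("A", "<i>", "<i>")}
assert ("<A>", "<<A>>", "<<A>>") in ref1(LA)
assert ("<i>", "<<i>>", "<<i>>") in ref1(LA)
</pre>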
    
Notice that the semantic equivalences of nouns and pronouns referring to each interpreter do not extend to semantic equivalences of their higher order signs, exactly as demanded by the literal character of quotations.  Also notice that the reflective extensions of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> coincide in their reflective parts, since exactly the same triples were added to each set.
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 42.} ~~ \text{Higher Ascent Sign Relation} ~ \mathrm{Ref}^1 L(\text{A})\!</math>
|}

{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 43.} ~~ \text{Higher Ascent Sign Relation} ~ \mathrm{Ref}^1 L(\text{B})\!</math>
|}
<br>
 
There are many ways to extend sign relations in an effort to develop their reflective capacities.  The implicit goal of a reflective project is to reach a condition of ''reflective closure'', a configuration satisfying the inclusion <math>S \subseteq O,\!</math> where every sign is an object.  It is important to note that not every process of reflective extension can achieve a reflective closure in a finite sign relation.  This can only happen if there are additional equivalence relations that keep the effective orders of signs within finite bounds.  As long as there are higher order signs that remain distinct from all lower order signs, the sign relation driven by a reflective process is forced to keep expanding.  In particular, the process that is ''freely'' suggested by the formation of <math>\mathrm{Ref}^1 L(\text{A})~\!</math> and <math>\mathrm{Ref}^1 L(\text{B})~\!</math> cannot reach closure if it continues as indicated, without further constraints.
    
Tables&nbsp;44 and 45 present ''higher import extensions'' of <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> respectively.  These are just higher order sign relations that add selections of higher import signs and their objects to the underlying set of triples in <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  One way to understand these extensions is as follows.  The interpreters <math>\text{A}\!</math> and <math>\text{B}\!</math> each use nouns and pronouns just as before, except that the nouns are given additional denotations that refer to the interpretive conduct of the interpreter named.  In this form of development, using a noun as a canonical form that refers indifferently to all the <math>(o, s, i)\!</math> triples of a sign relation is a pragmatic way that a sign relation can refer to itself and to other sign relations.
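
A hedged sketch of the idea, not the exact construction tabulated below, is to give the chosen noun one additional denotation for each triple of the sign relation, so that the noun makes a PIR to the whole set of triples; the choice of interpretant here is an assumption made only for illustration.

<pre>
def hi1(L, noun):
    """Higher import extension: give the noun one further denotation for each
    (o, s, i) triple of L, so the noun makes a PIR to the set of triples.
    Using the noun itself as interpretant is an illustrative assumption."""
    return set(L) | {(t, noun, noun) for t in L}

# Toy fragment only, not the full sign relation L(A).
LA = {("A", "<A>", "<A>"), ("B", "<u>", "<u>")}
extended = hi1(LA, "<A>")
assert all((t, "<A>", "<A>") in extended for t in LA)
</pre>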
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 44.} ~~ \text{Higher Import Sign Relation} ~ \mathrm{HI}^1 L(\text{A})\!</math>
|}

{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 45.} ~~ \text{Higher Import Sign Relation} ~ \mathrm{HI}^1 L(\text{B})\!</math>
|}
<br>
 
Several important facts about the class of higher order sign relations in general are illustrated by these examples.  First, the notations appearing in the object columns of <math>\mathrm{HI}^1 L(\text{A})\!</math> and <math>\mathrm{HI}^1 L(\text{B})\!</math> are not the terms that these newly extended interpreters are depicted as using to describe their objects, but the kinds of language that you and I, or other external observers, would typically make available to distinguish them.  The sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> as extended by the transactions of <math>\mathrm{HI}^1 L(\text{A})\!</math> and <math>\mathrm{HI}^1 L(\text{B}),\!</math> respectively, are still restricted to their original syntactic domain <math>\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}.\!</math>  This means that there need be nothing especially articulate about a HI sign relation just because it qualifies as higher order.  Indeed, the sign relations <math>\mathrm{HI}^1 L(\text{A})\!</math> and <math>\mathrm{HI}^1 L(\text{B})\!</math> are not very discriminating in their descriptions of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> referring to many different things under the very same signs that you and I and others would explicitly distinguish, especially in marking the distinction between an interpretive agent and any one of its individual transactions.
    
In practice, it does an interpreter little good to have the higher import signs for referring to triples of objects, signs, and interpretants if it does not also have the higher ascent signs for referring to each triple's syntactic portions.  Consequently, the higher order sign relations that one is likely to observe in practice are typically a mixed bag, having both higher ascent and higher import sections.  Moreover, the ambiguity involved in having signs that refer equivocally to simple world elements and also to complex structures formed from these ingredients would most likely be resolved by drawing additional information from context and fashioning more distinctive signs.
 
The technique illustrated here represents a general strategy, one that can be exploited to derive certain benefits of set theory without having to pay the overhead that is needed to maintain sets as abstract objects.  Using an identified type of a sign as a canonical form that can refer indifferently to all the members of a set is a pragmatic way of making plural reference to the members of a set without invoking the set itself as an abstract object.  Of course, it is not that one can get something for nothing by these means.  One is merely banking on one's recurring investment in the setting of a certain sign relation, a particular set of elementary transactions that is taken for granted as already funded.
 
As a rule, it is desirable for the grammatical system that one uses to construct and interpret higher order signs, that is, signs for referring to signs as objects, to mesh in a comfortable fashion with the overall pragmatic system that one uses to assign syntactic codes to objects in general.  For future reference, I call this requirement the problem of creating a ''conformally reflective extension'' (CRE) for a given sign relation.  A good way to think about this task is to imagine oneself beginning with a sign relation <math>L \subseteq O \times S \times I,\!</math> and to consider its denotative component <math>\mathrm{Den}_L = L_{OS} \subseteq O \times S.\!</math>  Typically one has a ''naming function'', say <math>\mathrm{Nom},\!</math> that maps objects into signs:
    
{| align="center" cellspacing="8" width="90%"
| <math>\mathrm{Nom} \subseteq \mathrm{Den}_L \subseteq O \times S ~\text{such that}~ \mathrm{Nom} : O \to S.\!</math>
|}
   −
Part of the task of making a sign relation more reflective is to extend it in ways that turn more of its signs into objects.  This is the reason for creating higher order signs, which are just signs for making objects out of signs.  One effect of progressive reflection is to extend the initial naming function <math>\mathrm{Nom}\!</math> through a succession of new naming functions <math>\mathrm{Nom}',\!</math> <math>\mathrm{Nom}'',\!</math> and so on, assigning unique names to larger allotments of the original and subsequent signs.  With respect to the difficulties of construction, the ''hard core'' or ''adamant part'' of creating extended naming functions resides in the initial portion <math>\mathrm{Nom}\!</math> that maps objects of the &ldquo;external world&rdquo; to signs in the &ldquo;internal world&rdquo;.  The subsequent task of assigning conventional names to signs is supposed to be comparatively natural and ''easy'', perhaps on account of the ''nominal'' nature of signs themselves.
    
The effect of reflection on the original sign relation <math>L \subseteq O \times S \times I\!</math> can be analyzed as follows.  Suppose that a step of reflection creates higher order signs for a subset of <math>S.\!</math>  Then this step involves the construction of a newly extended sign relation:
 
{| align="center" cellspacing="8" width="90%"
| <math>\mathrm{Nom}_1 : O_1 \to S_1 ~\text{such that}~ \mathrm{Nom}_1 : x \mapsto {}^{\langle} x {}^{\rangle}.\!</math>
|}
   −
Finally, the reflectively extended naming function <math>\mathrm{Nom}' : O' \to S'\!</math> is defined as <math>\mathrm{Nom}' = \mathrm{Nom} \cup \mathrm{Nom}_1.\!</math>
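
A sketch of this extension step, assuming objects and signs are represented as strings and arch quotation is rendered by angle brackets, is the following; the dictionaries stand in for the naming functions and their contents are invented for illustration.

<pre>
def arch(sign: str) -> str:
    """Render the arch quotation of a sign by wrapping it in angle brackets."""
    return "<" + sign + ">"

# Toy naming function on the original objects (invented for illustration).
Nom = {"A": "<A>", "B": "<B>"}

def extend_naming(nom, new_objects):
    """Nom' = Nom union Nom_1, where Nom_1 sends each sign x, now treated as an
    object, to the higher order sign arch(x)."""
    nom_1 = {x: arch(x) for x in new_objects}
    extended = dict(nom)
    extended.update(nom_1)
    return extended

Nom_prime = extend_naming(Nom, {"<A>", "<B>"})
assert Nom_prime["A"] == "<A>" and Nom_prime["<A>"] == "<<A>>"
</pre>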
    
A few remarks are necessary to see how this way of defining a CRE can be regarded as legitimate.
 
In the present context an application of the arch notation, for example, <math>{}^{\langle} x {}^{\rangle},\!</math> is read on analogy with the use of any other functional notation, for example, <math>f(x),\!</math> where <math>{}^{\backprime\backprime} f {}^{\prime\prime}\!</math> is the name of a function <math>f,\!</math> <math>{}^{\backprime\backprime} f(~) {}^{\prime\prime}\!</math> is the context of its application, <math>{}^{\backprime\backprime} x {}^{\prime\prime}\!</math> is the name of an argument <math>x,\!</math> and where the functional abstraction <math>{}^{\backprime\backprime} x \mapsto f(x) {}^{\prime\prime}\!</math> is just another name for the function <math>f.\!</math>
 
It is clear that some form of functional abstraction is being invoked in the above definition of <math>\mathrm{Nom}_1.\!</math>  Otherwise, the expression <math>x \mapsto {}^{\langle} x {}^{\rangle}\!</math> would indicate a constant function, one that maps every <math>x\!</math> in its domain to the same code or sign for the letter <math>{}^{\backprime\backprime} x {}^{\prime\prime}.\!</math>  But if this is allowed, then it appears to pose a dilemma, either to invoke a more powerful concept of functional abstraction than the concept being defined, or else to attempt an improper definition of the naming function in terms of itself.
    
Although it appears that this form of functional abstraction is being used to define the CRE in terms of itself, extending the definition of the naming function by means of a definition that is already assumed to be available, in reality it uses only a finite function, a finite table look-up, to define the naming function for an unlimited number of higher order signs.
===6.11. Higher Order Sign Relations : Application===
 
   −
Given the language in which a notation like <math>{}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}\!</math> makes sense, or in prospect of being given such a language, it is instructive to ask:  &ldquo;What must be assumed about the context of interpretation in which this language is supposed to make sense?&rdquo;  According to the theory of signs that is being examined here, the relevant formal aspects of that context are embodied in a particular sign relation, call it <math>{}^{\backprime\backprime} Q {}^{\prime\prime}.\!</math>  With respect to the hypothetical sign relation <math>Q,\!</math> commonly personified as the prospective reader or the ideal interpreter of the intended language, the denotation of the expression <math>{}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}\!</math> is given by:
    
{| align="center" cellspacing="8" width="90%"
| <math>\mathrm{De}( {}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime}, Q ).\!</math>
|}
   Line 2,573: Line 2,583:  
{| align="center" cellspacing="8" width="90%"
|
<math>\begin{array}{lccc}
\mathrm{De}( & {}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime} & , & Q)
\\[6pt]
\mathrm{De}( & {}^{\backprime\backprime} q {}^{\prime\prime} & , & Q)
\\[6pt]
\mathrm{De}( & {}^{\backprime\backprime} L {}^{\prime\prime} & , & Q)
\end{array}</math>
|}
   −
What are the roles of the signs <math>{}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime},\!</math> <math>{}^{\backprime\backprime} q {}^{\prime\prime},\!</math> <math>{}^{\backprime\backprime} L {}^{\prime\prime}\!</math> and what are they supposed to mean to <math>Q\!</math>?  Evidently, <math>{}^{\backprime\backprime} \mathrm{De} {}^{\prime\prime}\!</math> is a constant name that refers to a particular function, <math>{}^{\backprime\backprime} q {}^{\prime\prime}\!</math> is a variable name that makes a PIR to a collection of signs, and <math>{}^{\backprime\backprime} L {}^{\prime\prime}\!</math> is a variable name that makes a PIR to a collection of sign relations.
    
This is not the place to take up the possibility of an ideal, universal, or even a very comprehensive interpreter for the language indicated here, so I specialize the account to consider an interpreter <math>Q_{\text{AB}} = Q(\text{A}, \text{B})\!</math> that is competent to cover the initial level of reflections that arise from the dialogue of <math>\text{A}\!</math> and <math>\text{B}.\!</math>
 
   −
For the interpreter <math>Q_\text{AB},\!</math> the sign variable <math>q\!</math> need only range over the syntactic domain <math>S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \}\!</math> and the relation variable <math>L\!</math> need only range over the set of sign relations <math>\{ L(\text{A}), L(\text{B}) \}.\!</math>  These requirements can be accomplished as follows:
   −
# The variable name <math>{}^{\backprime\backprime} q {}^{\prime\prime}</math> is a HA sign that makes a PIR to the elements of <math>S.~\!</math>
# The variable name <math>{}^{\backprime\backprime} L {}^{\prime\prime}</math> is a HU sign that makes a PIR to the elements of <math>\{ L(\text{A}), L(\text{B}) \}.~\!</math>
# The constant name <math>{}^{\backprime\backprime} L(\text{A}) {}^{\prime\prime}</math> is a HI sign that makes a PIR to the elements of <math>L(\text{A}).~\!</math>
# The constant name <math>{}^{\backprime\backprime} L(\text{B}) {}^{\prime\prime}</math> is a HI sign that makes a PIR to the elements of <math>L(\text{B}).~\!</math>
   −
This results in a higher order sign relation for <math>Q_\text{AB},\!</math> which is shown in Table&nbsp;46.
    
<br>
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
|+ style="height:30px" | <math>\text{Table 46.} ~~ \text{Higher Order Sign Relation for the Interpreter} ~ Q_\text{AB}\!</math>
|}
    
<br>
 
Following the manner of construction in this extremely reduced example, it is possible to see how answers to the above questions, concerning the meaning of <math>{}^{\backprime\backprime} \mathrm{De}(q, L) {}^{\prime\prime},\!</math> might be worked out.  In the present instance:
    
{| align="center" cellspacing="8" width="90%"
|
<math>\begin{array}{lll}
\mathrm{De} ({}^{\backprime\backprime} q {}^{\prime\prime}, Q_{\text{AB}})
& = & S
\\[6pt]
\mathrm{De} ({}^{\backprime\backprime} L {}^{\prime\prime}, Q_{\text{AB}})
& = & \{ L(\text{A}), L(\text{B}) \}
\end{array}</math>
|}

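The construction can be sketched as follows, with toy fragments standing in for <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> and with the interpretants chosen only for illustration; it is one way of realizing requirements 1 to 4 above, not a transcription of Table&nbsp;46.

<pre>
def De(q, Q):
    """Denotation of the sign q in the sign relation Q."""
    return {o for (o, s, i) in Q if s == q}

S = {'"A"', '"B"', '"i"', '"u"'}          # the syntactic domain, rendered as strings

# Toy fragments standing in for L(A) and L(B); frozensets so they can serve as objects.
LA = frozenset({("A", '"A"', '"A"'), ("B", '"u"', '"u"')})
LB = frozenset({("B", '"B"', '"B"'), ("A", '"u"', '"u"')})

Q_AB = set()
Q_AB |= {(x, "q", "q") for x in S}          # 1. "q" makes a PIR to the elements of S
Q_AB |= {(R, "L", "L") for R in (LA, LB)}   # 2. "L" makes a PIR to {L(A), L(B)}
Q_AB |= {(t, "L(A)", "L(A)") for t in LA}   # 3. "L(A)" makes a PIR to the triples of L(A)
Q_AB |= {(t, "L(B)", "L(B)") for t in LB}   # 4. "L(B)" makes a PIR to the triples of L(B)

assert De("q", Q_AB) == S
assert De("L", Q_AB) == {LA, LB}
</pre>
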
<p>The ''nominal resource'' (''nominal alphabet'' or ''nominal lexicon'') for <math>X\!</math> is a set of signs that is notated and defined as follows:</p>

<p><math>X^{\backprime\backprime\prime\prime} = \mathrm{Nom}(X) = \{ {}^{\backprime\backprime} x_1 {}^{\prime\prime}, \ldots, {}^{\backprime\backprime} x_n {}^{\prime\prime} \}.</math></p>

<p>This concept is intended to capture the ordinary usage of this set of signs in one familiar context or another.</p></li>
<p>The ''mediate resource'' (''mediate alphabet'' or ''mediate lexicon'') for <math>X\!</math> is a set of signs that is notated and defined as follows:</p>

<p><math>X^{\langle\rangle} = \mathrm{Med}(X) = \{ {}^{\langle} x_1 {}^{\rangle}, \ldots, {}^{\langle} x_n {}^{\rangle} \}.</math></p>

<p>This concept provides a middle ground between the nominal resource above and the literal resource described next.</p></li>
<p>The ''literal resource'' (''literal alphabet'' or ''literal lexicon'') for <math>X\!</math> is a set of signs that is notated and defined as follows:</p>

<p><math>X = \mathrm{Lit}(X) = \{ x_1, \ldots, x_n \}.</math></p>

<p>This concept is intended to supply a set of signs that can be used in ways analogous to familiar usages, but which are more subject to free variation and thematic control (see the sketch following this list).</p></li></ol>
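A sketch of the three resources generated from a single underlying list of names is given below; straight quotes stand in for the raised quotes of the nominal resource and angle brackets for the arch quotes of the mediate resource, purely as an illustrative encoding.

<pre>
# Sketch: building the literal, nominal, and mediate resources from one list of names.
names = ["x1", "x2", "x3"]

Lit = set(names)                            # literal resource: the signs themselves
Nom = {'"' + x + '"' for x in names}        # nominal resource: quoted names
Med = {"<" + x + ">" for x in names}        # mediate resource: arch-quoted names

assert len(Lit) == len(Nom) == len(Med) == len(names)
</pre>
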
In the ''elemental construal'' of variables, a variable <math>x\!</math> is just an existing object <math>x\!</math> that is an element of a set <math>X,\!</math> the catch being &ldquo;which element?&rdquo;  In spite of this lack of information, one is still permitted to write <math>{}^{\backprime\backprime} x \in X {}^{\prime\prime}\!</math> as a syntactically well-formed expression and otherwise treat the variable name <math>{}^{\backprime\backprime} x {}^{\prime\prime}\!</math> as a pronoun on a grammatical par with a noun.  Given enough information about the contexts of usage and interpretation, this explanation of the variable <math>x\!</math> as an unknown object would complete itself in a determinate indication of the element intended, just as if a constant object had always been named by <math>{}^{\backprime\backprime} x {}^{\prime\prime}.\!</math>
 
In the ''functional construal'' of variables, a variable is a function of unknown circumstances that results in a known range of definite values.  This tactic pushes the ostensible location of the uncertainty back a bit, into the domain of a named function, but it cannot eliminate it entirely.  Thus, a variable is a function <math>x : X \to Y\!</math> that maps a domain of unknown circumstances, or a ''sample space'' <math>X,\!</math> into a range <math>Y\!</math> of outcome values.  Typically, variables of this sort come in sets of the form <math>\{ x_i : X \to Y \},\!</math> collectively called ''coordinate projections'' and together constituting a basis for a whole class of functions <math>x : X \to Y\!</math> sharing a similar type.  This construal succeeds in giving each variable name <math>{}^{\backprime\backprime} x_i {}^{\prime\prime}\!</math> an objective referent, namely, the coordinate projection <math>{x_i},\!</math> but the explanation is partial to the extent that the domain of unknown circumstances remains to be explained.  Completing this explanation of variables, to the extent that it can be accomplished, requires an account of how these unknown circumstances can be known exactly to the extent that they are in fact described, that is, in terms of their effects under the given projections.
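
A sketch of the functional construal, with the sample space taken to be <math>\mathbb{B}^3\!</math> for concreteness, is the following; the helper <code>projection</code> is an illustrative name, not part of the text's notation.

<pre>
from itertools import product

B = (0, 1)
sample_space = list(product(B, repeat=3))   # the space X = B^3 of unknown circumstances

def projection(i):
    """The i-th coordinate projection x_i, mapping a circumstance to its i-th value."""
    return lambda point: point[i]

x1, x2, x3 = (projection(i) for i in range(3))

point = (1, 0, 1)                           # one circumstance, once it is fixed
assert (x1(point), x2(point), x3(point)) == (1, 0, 1)
assert len(sample_space) == 8
</pre>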
    
As suggested by the whole direction of the present work, the ultimate explanation of variables is to be given by the pragmatic theory of signs, where variables are treated as a special class of signs called ''indices''.
 
{| align="center" cellspacing="8" width="90%"
| <math>\underline{\underline{X}} = \mathrm{Lit}(X) = \{ \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \}.\!</math>
|}
   Line 3,016: Line 3,026:  
# The reflective (or critical) acceptation is to see the list before all else as a list of signs, each of which may or may not have an EU-object.  This is the attitude that must be taken in formal language theory and in any setting where computational constraints on interpretation are being contemplated.  In these contexts it cannot be assumed without question that every sign, whose participation in a denotation relation would have to be indicated by a recursive function and implemented by an effective program, does in fact have an existential denotation, much less a unique object.  The entire body of implicit assumptions that go to make up this acceptation, although they operate more like interpretive suspicions than automatic dispositions, will be referred to as the ''sign convention''.
   −
In the present context, I can answer questions about the ontology of a &ldquo;variable&rdquo; by saying that each variable <math>x_i\!</math> is a kind of sign, in the boolean case capable of denoting an element of <math>{\mathbb{B} = \{ 0, 1 \}}\!</math> as its object, with the actual value depending on the interpretation of the moment.  Note that <math>x_i\!</math> is a sign, and that <math>{}^{\backprime\backprime} x_i {}^{\prime\prime}\!</math> is another sign that denotes it.  This acceptation of the list <math>X = \{ x_i \}\!</math> corresponds to what was just called the ''sign convention''.
    
In a context where all the signs that ought to have EU-objects are in fact safely assured to do so, it is usually less bothersome to assume the object convention.  Otherwise, discussion must resort to the less natural but more careful sign convention.  This convention is only &ldquo;artificial&rdquo; in the sense that it recalls the artifactual nature and the instrumental purpose of signs, and does nothing more out of the way than to call an implement &ldquo;an implement&rdquo;.
 
In a context where all the signs that ought to have EU-objects are in fact safely assured to do so, it is usually less bothersome to assume the object convention.  Otherwise, discussion must resort to the less natural but more careful sign convention.  This convention is only &ldquo;artificial&rdquo; in the sense that it recalls the artifactual nature and the instrumental purpose of signs, and does nothing more out of the way than to call an implement &ldquo;an implement&rdquo;.
Line 3,029: Line 3,039:     
<li>
 
<li>
<p>The sign <math>{}^{\backprime\backprime} x_i {}^{\prime\prime},\!</math> appearing in the contextual frame <math>{}^{\backprime\backprime} \underline{~~~} : \mathbb{B}^n \to \mathbb{B} {}^{\prime\prime},\!</math> or interpreted as belonging to that frame, denotes the <math>i^\text{th}\!</math> coordinate function <math>\underline{\underline{x_i}} : \mathbb{B}^n \to \mathbb{B}.</math>  The entire collection of coordinate maps in <math>\underline{\underline{X}} = \{ \underline{\underline{x_i}} \}\!</math> contributes to the definition of the ''coordinate space'' or ''vector space'' <math>\underline{X} : \mathbb{B}^n,\!</math> notated as follows:</p>
+
<p>The sign <math>{}^{\backprime\backprime} x_i {}^{\prime\prime},\!</math> appearing in the contextual frame <math>{}^{\backprime\backprime} \underline{~~~} : \mathbb{B}^n \to \mathbb{B} {}^{\prime\prime},\!</math> or interpreted as belonging to that frame, denotes the <math>i^\text{th}\!</math> coordinate function <math>\underline{\underline{x_i}} : \mathbb{B}^n \to \mathbb{B}.</math>  The entire collection of coordinate maps in <math>{\underline{\underline{X}} = \{ \underline{\underline{x_i}} \}}\!</math> contributes to the definition of the ''coordinate space'' or ''vector space'' <math>\underline{X} : \mathbb{B}^n,\!</math> notated as follows:</p>
    
<p><math>\underline{X} = \langle \underline{\underline{X}} \rangle = \langle \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \rangle = \{ (\underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}}) \} : \mathbb{B}^n.\!</math></p>
 
<p><math>\underline{X} = \langle \underline{\underline{X}} \rangle = \langle \underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}} \rangle = \{ (\underline{\underline{x_1}}, \ldots, \underline{\underline{x_n}}) \} : \mathbb{B}^n.\!</math></p>
Line 3,081: Line 3,091:  
Next, it is necessary to consider the stylistic differences among the logical, functional, and geometric conceptions of propositional logic.  Logically, a domain of properties or propositions is known by the axioms it is subject to.  Concretely, one thinks of a particular property or proposition as applying to the things or situations it is true of.  With the synthesis just indicated, this can be expressed in a unified form:  In abstract logical terms, a DOP is known by the axioms to which it is subject.  In concrete functional or geometric terms, a particular element of a DOP is known by the things of which it is true.
 
Next, it is necessary to consider the stylistic differences among the logical, functional, and geometric conceptions of propositional logic.  Logically, a domain of properties or propositions is known by the axioms it is subject to.  Concretely, one thinks of a particular property or proposition as applying to the things or situations it is true of.  With the synthesis just indicated, this can be expressed in a unified form:  In abstract logical terms, a DOP is known by the axioms to which it is subject.  In concrete functional or geometric terms, a particular element of a DOP is known by the things of which it is true.
   −
With the appropriate correspondences between these three domains in mind, the general term ''proposition'' can be interpreted in a flexible manner to cover logical, functional, and geometric types of objects.  Thus, a locution like <math>{}^{\backprime\backprime} \text{the proposition}~ F {}^{\prime\prime}\!</math> can be interpreted in three ways:  (1) literally, to denote a logical proposition, (2) functionally, to denote a mapping from a space <math>X\!</math> of propertied or proposed objects to the domain <math>\mathbb{B} = \{ 0, 1 \}\!</math> of truth values, and (3) geometrically, to denote the so-called ''fiber of truth'' <math>F^{-1}(1)\!</math> as a region or a subset of <math>X.\!</math>  For all of these reasons, it is desirable to set up a suitably flexible interpretive framework for propositional logic, where an object introduced as a logical proposition <math>F\!</math> can be recast as a boolean function <math>F : X \to \mathbb{B},\!</math> and understood to indicate the region of the space <math>X\!</math> that is ruled by <math>F.\!</math>
+
With the appropriate correspondences between these three domains in mind, the general term ''proposition'' can be interpreted in a flexible manner to cover logical, functional, and geometric types of objects.  Thus, a locution like <math>{}^{\backprime\backprime} \text{the proposition}~ F {}^{\prime\prime}\!</math> can be interpreted in three ways:  (1) literally, to denote a logical proposition, (2) functionally, to denote a mapping from a space <math>X\!</math> of propertied or proposed objects to the domain <math>{\mathbb{B} = \{ 0, 1 \}}\!</math> of truth values, and (3) geometrically, to denote the so-called ''fiber of truth'' <math>F^{-1}(1)\!</math> as a region or a subset of <math>X.\!</math>  For all of these reasons, it is desirable to set up a suitably flexible interpretive framework for propositional logic, where an object introduced as a logical proposition <math>F\!</math> can be recast as a boolean function <math>F : X \to \mathbb{B},\!</math> and understood to indicate the region of the space <math>X\!</math> that is ruled by <math>F.\!</math>
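
As a concrete illustration of the functional and geometric readings, the sketch below takes a small universe <math>X,\!</math> defines a proposition <math>F\!</math> as a boolean-valued function on <math>X,\!</math> and recovers the fiber of truth <math>F^{-1}(1)\!</math> as the subset of <math>X\!</math> where <math>F\!</math> holds.  The universe and the predicate are illustrative stand-ins, not drawn from the text.

<syntaxhighlight lang="python">
# A small universe of discourse X, here a set of integers.
X = {0, 1, 2, 3, 4, 5}

# Functional reading: the proposition F as a map F : X -> B = {0, 1}.
def F(x):
    return 1 if x % 2 == 0 else 0   # "x is even"

# Geometric reading: the fiber of truth F^{-1}(1), a region of X.
fiber_of_truth = {x for x in X if F(x) == 1}

print(fiber_of_truth)   # {0, 2, 4}
</syntaxhighlight>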
    
Generally speaking, it does not seem possible to disentangle these three domains from each other or to determine which one is more fundamental.  In practice, due to its concern with the computational implementations of every concept it uses, the present work is biased toward the functional interpretation of propositions.  From this point of view, the abstract intention of a logical proposition <math>F\!</math> is regarded as being realized only when a program is found that computes the function <math>F : X \to \mathbb{B}.\!</math>
 
Generally speaking, it does not seem possible to disentangle these three domains from each other or to determine which one is more fundamental.  In practice, due to its concern with the computational implementations of every concept it uses, the present work is biased toward the functional interpretation of propositions.  From this point of view, the abstract intention of a logical proposition <math>F\!</math> is regarded as being realized only when a program is found that computes the function <math>F : X \to \mathbb{B}.\!</math>
Line 3,089: Line 3,099:  
One of the reasons for pursuing a pragmatic hybrid of semantic and syntactic approaches, rather than keeping to the purely syntactic ways of manipulating meaningless tokens according to abstract rules of proof, is that the model theoretic strategy preserves the form of connection that exists between an agent's concrete particular experiences and the abstract propositions and general properties that it uses to describe its experience.  This makes it more likely that a hybrid approach will serve in the realistic pursuits of inquiry, since these efforts involve the integration of deductive, inductive, and abductive sources of knowledge.
 
One of the reasons for pursuing a pragmatic hybrid of semantic and syntactic approaches, rather than keeping to the purely syntactic ways of manipulating meaningless tokens according to abstract rules of proof, is that the model theoretic strategy preserves the form of connection that exists between an agent's concrete particular experiences and the abstract propositions and general properties that it uses to describe its experience.  This makes it more likely that a hybrid approach will serve in the realistic pursuits of inquiry, since these efforts involve the integration of deductive, inductive, and abductive sources of knowledge.
   −
In this approach to propositional logic, with a view toward computational realization, one begins with a space <math>X,\!</math> called a ''universe of discourse'', whose points can be reasonably well described by means of a finite set of logical features.  Since the points of the space <math>X\!</math> are effectively known only in terms of their computable features, one can assume that there is a finite set of computable coordinate projections <math>x_i : X \to \mathbb{B},\!</math> for <math>i = 1 ~\text{to}~ n,\!</math> for some <math>n,\!</math> that can serve to describe the points of <math>X.\!</math>  This means that there is a computable coordinate representation for <math>X,\!</math> in other words, a computable map <math>T : X \to \mathbb{B}^n\!</math> that describes the points of <math>X\!</math> insofar as they are known.  Thus, each proposition <math>F : X \to \mathbb{B}\!</math> can be factored through the coordinate representation <math>T : X \to \mathbb{B}^n\!</math> to yield a related proposition <math>f : \mathbb{B}^n \to \mathbb{B},\!</math> one that speaks directly about coordinate <math>n\!</math>-tuples but indirectly about points of <math>X.\!</math>  Composing maps on the right, the mapping <math>f\!</math> is defined by the equation <math>F = T \circ f.\!</math>  For all practical purposes served by the representation <math>T,\!</math> the proposition <math>f\!</math> can be taken as a proxy for the proposition <math>F,\!</math> saying things about the points of <math>X\!</math> by means of <math>X\!</math>'s encoding to <math>\mathbb{B}^n.\!</math>
+
In this approach to propositional logic, with a view toward computational realization, one begins with a space <math>X,\!</math> called a ''universe of discourse'', whose points can be reasonably well described by means of a finite set of logical features.  Since the points of the space <math>X\!</math> are effectively known only in terms of their computable features, one can assume that there is a finite set of computable coordinate projections <math>x_i : X \to \mathbb{B},\!</math> for <math>{i = 1 ~\text{to}~ n,}\!</math> for some <math>n,\!</math> that can serve to describe the points of <math>X.\!</math>  This means that there is a computable coordinate representation for <math>X,\!</math> in other words, a computable map <math>T : X \to \mathbb{B}^n\!</math> that describes the points of <math>X\!</math> insofar as they are known.  Thus, each proposition <math>F : X \to \mathbb{B}\!</math> can be factored through the coordinate representation <math>T : X \to \mathbb{B}^n\!</math> to yield a related proposition <math>f : \mathbb{B}^n \to \mathbb{B},\!</math> one that speaks directly about coordinate <math>n\!</math>-tuples but indirectly about points of <math>X.\!</math>  Composing maps on the right, the mapping <math>f\!</math> is defined by the equation <math>F = T \circ f.\!</math>  For all practical purposes served by the representation <math>T,\!</math> the proposition <math>f\!</math> can be taken as a proxy for the proposition <math>F,\!</math> saying things about the points of <math>X\!</math> by means of <math>X\!</math>'s encoding to <math>\mathbb{B}^n.\!</math>
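
A minimal sketch of this factoring, under assumed data, is given below: a universe <math>X\!</math> of objects, a coordinate representation <math>T\!</math> that encodes each object by two hypothetical features, a proposition <math>f\!</math> on the coordinate tuples, and the composite proposition <math>F\!</math> on the objects themselves.  The objects, the features, and the predicate are illustrative placeholders only.

<syntaxhighlight lang="python">
# A universe X of objects and a coordinate representation T : X -> B^n
# that encodes each object by its computable features.
X = ["ant", "bee", "cat", "dog"]

def T(x):
    # Two hypothetical features: "is an insect", "name has three letters".
    return (1 if x in ("ant", "bee") else 0,
            1 if len(x) == 3 else 0)

# A proposition f : B^2 -> B that speaks directly about coordinate pairs.
def f(bits):
    u, v = bits
    return u & v          # "insect with a three-letter name"

# The proposition F : X -> B factors through T.  The text composes maps
# on the right and writes this composite as F = T o f: apply T first, then f.
def F(x):
    return f(T(x))

print([(x, F(x)) for x in X])
</syntaxhighlight>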
   −
Working under the functional perspective, the formal system known as ''propositional calculus'' is introduced as a general system of notations for referring to boolean functions.  Typically, one takes a space <math>X\!</math> and a coordinate representation <math>T : X \to \mathbb{B}^n\!</math> as parameters of a particular system and speaks of the propositional calculus on a finite set of variables <math>\{ \underline{\underline{x_i}} \}.\!</math>  In objective terms, this constitutes the ''domain of propositions'' on the basis <math>\{ \underline{\underline{x_i}} \},\!</math> notated as <math>\operatorname{DOP}\{ \underline{\underline{x_i}} \}.\!</math>  Ideally, one does not want to become too fixed on a particular set of logical features or to let the momentary dimensions of the space be cast in stone.  In practice, this means that the formalism and its computational implementation should allow for the automatic embedding of <math>\operatorname{DOP}(\underline{\underline{X}})\!</math> into <math>\operatorname{DOP}(\underline{\underline{Y}})\!</math> whenever <math>\underline{\underline{X}} \subseteq \underline{\underline{Y}}.\!</math>
+
Working under the functional perspective, the formal system known as ''propositional calculus'' is introduced as a general system of notations for referring to boolean functions.  Typically, one takes a space <math>X\!</math> and a coordinate representation <math>T : X \to \mathbb{B}^n\!</math> as parameters of a particular system and speaks of the propositional calculus on a finite set of variables <math>\{ \underline{\underline{x_i}} \}.\!</math>  In objective terms, this constitutes the ''domain of propositions'' on the basis <math>\{ \underline{\underline{x_i}} \},\!</math> notated as <math>\mathrm{DOP}\{ \underline{\underline{x_i}} \}.\!</math>  Ideally, one does not want to become too fixed on a particular set of logical features or to let the momentary dimensions of the space be cast in stone.  In practice, this means that the formalism and its computational implementation should allow for the automatic embedding of <math>\mathrm{DOP}(\underline{\underline{X}})\!</math> into <math>\mathrm{DOP}(\underline{\underline{Y}})\!</math> whenever <math>\underline{\underline{X}} \subseteq \underline{\underline{Y}}.\!</math>
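
One way an implementation might arrange the automatic embedding of a smaller domain of propositions into a larger one is to let a proposition defined on a smaller basis simply ignore any extra coordinates.  The dictionary-based encoding below is an assumption made for illustration, not the text's own formalism.

<syntaxhighlight lang="python">
# A proposition on the basis {x1, x2}, read from a dict of feature values.
def p(values):
    return 1 if (values["x1"] and not values["x2"]) else 0

# Embedding DOP({x1, x2}) into DOP({x1, x2, x3}) requires nothing new:
# the same function applies to any assignment covering the smaller basis,
# and simply ignores the extra feature x3.
small = {"x1": 1, "x2": 0}
large = {"x1": 1, "x2": 0, "x3": 1}

print(p(small), p(large))   # 1 1
</syntaxhighlight>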
    
The rest of this section presents the elements of a particular calculus for propositional logic.  First, I establish the basic notations and summarize the axiomatic presentation of the calculus, and then I give special attention to its functional and geometric interpretations.
 
The rest of this section presents the elements of a particular calculus for propositional logic.  First, I establish the basic notations and summarize the axiomatic presentation of the calculus, and then I give special attention to its functional and geometric interpretations.
Line 3,257: Line 3,267:  
<p>In order to render this MON instructive for the development of a RIF, something intended to be a deliberately ''self-conscious'' construction, it is important to remedy the excessive lucidity of this MON's reflections, the confusing mix of opacity and transparency that comes in proportion to one's very familiarity with an object and that is compounded by one's very fluency in a language.  To do this, it is incumbent on a proper analysis of the situation to slow the MON down, to interrupt one's own comprehension of its developing intent, and to articulate the details of the sign process that mediates it much more carefully than is customary.</p>
 
<p>In order to render this MON instructive for the development of a RIF, something intended to be a deliberately ''self-conscious'' construction, it is important to remedy the excessive lucidity of this MON's reflections, the confusing mix of opacity and transparency that comes in proportion to one's very familiarity with an object and that is compounded by one's very fluency in a language.  To do this, it is incumbent on a proper analysis of the situation to slow the MON down, to interrupt one's own comprehension of its developing intent, and to articulate the details of the sign process that mediates it much more carefully than is customary.</p>
   −
<p>These goals can be achieved by singling out the formal language that is used by this MON to denote its set theoretic objects.  This involves separating the object domain <math>O = O_\text{MON}\!</math> from the sign domain <math>S = S_\text{MON},\!</math> paying closer attention to the naive level of set notation that is actually used by this MON, and treating its primitive set theoretic expressions as a formal language all its own.</p>
+
<p>These goals can be achieved by singling out the formal language that is used by this MON to denote its set theoretic objects.  This involves separating the object domain <math>{O = O_\text{MON}}\!</math> from the sign domain <math>{S = S_\text{MON}},\!</math> paying closer attention to the naive level of set notation that is actually used by this MON, and treating its primitive set theoretic expressions as a formal language all its own.</p>
    
<p>Thus, I need to discuss a variety of formal languages on the following alphabet:</p>
 
<p>Thus, I need to discuss a variety of formal languages on the following alphabet:</p>
Line 3,306: Line 3,316:     
<li>Another approach examines the nature of the objects that are invoked.</li></ol></ol>
 
<li>Another approach examines the nature of the objects that are invoked.</li></ol></ol>
  −
<br>
  −
  −
<p align="center">'''Fragments'''</p>
  −
  −
In previous work I developed a version of propositional calculus based on C.S. Peirce's ''existential graphs'' and implemented this calculus in computational form as a ''sentential calculus interpreter''.  Taking this calculus as a point of departure, I devised a theory of ''differential extensions'' for propositional domains that can be used, figuratively speaking, to put universes of discourse &ldquo;in motion&rdquo;, in other words, to provide qualitative descriptions of processes taking place in logical spaces.  See (Awbrey, 1989 and 1994) for an account of this calculus, documentation of its computer program, and a detailed treatment of differential extensions.
  −
  −
In previous work (Awbrey, 1989) I described a system of notation for propositional calculus based on C.S. Peirce's ''existential graphs'', documented a computer implementation of this formalism, and showed how to provide this calculus with a ''differential extension'' that can be used to describe changing universes of discourse.  In subsequent work (Awbrey, 1994) the resulting system of ''differential logic'' was applied to give qualitative descriptions of change in discrete dynamical systems.  This section draws on that earlier work, summarizing the conceptions that are needed to give logical representations of sign relations and recording a few changes of a minor nature in the typographical conventions used.
  −
  −
Abstractly, a domain of propositions is known by the axioms it satisfies.  Concretely, one thinks of a proposition as applying to the objects it is true of.
  −
  −
Logically, a domain of properties or propositions is known by the axioms it is subject to.  Concretely, a property or proposition is known by the things or situations it is true of.  Typically, the signs of properties and propositions are called ''terms'' and ''sentences'', respectively.
      
===6.20. Three Views of Systems===
 
===6.20. Three Views of Systems===
Line 3,353: Line 3,351:  
I close this section by discussing the relationship among the three views of systems that are relevant to the example of <math>\text{A}\!</math> and <math>\text{B}.\!</math>
 
I close this section by discussing the relationship among the three views of systems that are relevant to the example of <math>\text{A}\!</math> and <math>\text{B}.\!</math>
   −
[Variant] How do these three perspectives bear on the example of <math>\text{A}\!</math> and <math>\text{B}\!</math>?
+
'''[Variant]''' How do these three perspectives bear on the example of <math>\text{A}\!</math> and <math>\text{B}\!</math>?
   −
[Variant] In order to show how these three perspectives bear on the present inquiry, I will now discuss the relationship they exhibit in the example of <math>\text{A}\!</math> and <math>\text{B}.\!</math>
+
'''[Variant]''' In order to show how these three perspectives bear on the present inquiry, I will now discuss the relationship they exhibit in the example of <math>\text{A}\!</math> and <math>\text{B}.\!</math>
    
In the present example, concerned with the form of communication that takes place between the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> the topic of interest is not the type of dynamics that would change one of the original objects, <math>\text{A}\!</math> or <math>\text{B},\!</math> into the other.  Thus, the object system is nothing more than the object domain <math>O = \{ \text{A}, \text{B} \}\!</math> shared between the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  In this case, where the object system reduces to an abstract set, falling under the action of a trivial dynamics, one says that the object system is ''stable'' or ''static''.  In more developed examples, when the dynamics at the level of the object system becomes more interesting, the ''objects'' in the object system are usually referred to as ''objective configurations'' or ''object states''.  Later examples will take on object systems that enjoy significant variations in the sequences of their objective states.
 
In the present example, concerned with the form of communication that takes place between the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> the topic of interest is not the type of dynamics that would change one of the original objects, <math>\text{A}\!</math> or <math>\text{B},\!</math> into the other.  Thus, the object system is nothing more than the object domain <math>O = \{ \text{A}, \text{B} \}\!</math> shared between the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  In this case, where the object system reduces to an abstract set, falling under the action of a trivial dynamics, one says that the object system is ''stable'' or ''static''.  In more developed examples, when the dynamics at the level of the object system becomes more interesting, the ''objects'' in the object system are usually referred to as ''objective configurations'' or ''object states''.  Later examples will take on object systems that enjoy significant variations in the sequences of their objective states.
Line 3,380: Line 3,378:  
A sign relation is a complex object and its representations, insofar as they faithfully preserve its structure, are complex signs.  Accordingly, the problems of translating between ERs and IRs of sign relations, of detecting when representations alleged to be of sign relations do indeed represent objects of the specified character, and of recognizing whether different representations do or do not represent the same sign relation as their common object &mdash; these are the familiar questions that would be asked of the signs and interpretants in a simple sign relation, but this time asked at a higher level, in regard to the complex signs and complex interpretants that are posed by the different stripes of representation.  At the same time, it should be obvious that these are also the natural questions to be faced in building a bridge between representations.
 
A sign relation is a complex object and its representations, insofar as they faithfully preserve its structure, are complex signs.  Accordingly, the problems of translating between ERs and IRs of sign relations, of detecting when representations alleged to be of sign relations do indeed represent objects of the specified character, and of recognizing whether different representations do or do not represent the same sign relation as their common object &mdash; these are the familiar questions that would be asked of the signs and interpretants in a simple sign relation, but this time asked at a higher level, in regard to the complex signs and complex interpretants that are posed by the different stripes of representation.  At the same time, it should be obvious that these are also the natural questions to be faced in building a bridge between representations.
   −
How many different sorts of entities are conceivably involved in translating between ERs and IRs of sign relations?  To address this question it helps to introduce a system of type notations that can be used to keep track of the various sorts of things, or the varieties of objects of thought, that are generated in the process of answering it.  Table&nbsp;47.1 summarizes the basic types of things that are needed in this pursuit, while the rest can be derived by constructions of the form <math>X ~\operatorname{of}~ Y,\!</math> notated <math>X(Y)\!</math> or just <math>XY,\!</math> for any basic types <math>X\!</math> and <math>Y.\!</math>  The constructed types of things involved in the ERs and IRs of sign relations are listed in Tables&nbsp;47.2 and 47.3, respectively.
+
How many different sorts of entities are conceivably involved in translating between ERs and IRs of sign relations?  To address this question it helps to introduce a system of type notations that can be used to keep track of the various sorts of things, or the varieties of objects of thought, that are generated in the process of answering it.  Table&nbsp;47.1 summarizes the basic types of things that are needed in this pursuit, while the rest can be derived by constructions of the form <math>X ~\mathrm{of}~ Y,\!</math> notated <math>X(Y)\!</math> or just <math>XY,\!</math> for any basic types <math>X\!</math> and <math>Y.\!</math>  The constructed types of things involved in the ERs and IRs of sign relations are listed in Tables&nbsp;47.2 and 47.3, respectively.
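
For readers who find it helpful to see the type constructions spelled out, the following sketch uses Python type aliases purely as an illustrative device to render one constructed type, the set-of-triples-of-underlying-objects type <math>S(T(U))\!</math> used for relations.  The alias names and the sample entries are hypothetical, not part of the text's notation.

<syntaxhighlight lang="python">
from typing import Set, Tuple

# Basic types, as illustrative stand-ins for entries of Table 47.1.
Underlying = str                                      # U : underlying objects
Triple = Tuple[Underlying, Underlying, Underlying]    # T(U) : triples of underlying objects
SetOfTriples = Set[Triple]                            # S(T(U)) : sets of triples, i.e. relations

# A value of the constructed type S(T(U)).
example_relation: SetOfTriples = {("A", '"A"', '"i"'), ("B", '"B"', '"u"')}
print(example_relation)
</syntaxhighlight>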
    
<br>
 
<br>
Line 3,411: Line 3,409:  
|-
 
|-
 
| <math>\text{Relation}\!</math>
 
| <math>\text{Relation}\!</math>
| <math>R\!</math>
+
| <math>{R}\!</math>
| <math>S(T(U))\!</math>
+
| <math>{S(T(U))}\!</math>
 
|}
 
|}
   Line 3,436: Line 3,434:  
Let <math>\underline{S}\!</math> be the type of signs, <math>S\!</math> the type of sets, <math>T\!</math> the type of triples, and <math>U\!</math> the type of underlying objects.  Now consider the various sorts of things, or the varieties of objects of thought, that are invoked on each side, annotating each type as it is mentioned:
 
Let <math>\underline{S}\!</math> be the type of signs, <math>S\!</math> the type of sets, <math>T\!</math> the type of triples, and <math>U\!</math> the type of underlying objects.  Now consider the various sorts of things, or the varieties of objects of thought, that are invoked on each side, annotating each type as it is mentioned:
   −
ERs of sign relations describe them as sets <math>(Ss)\!</math> of triples <math>(Ts)\!</math> of underlying elements <math>(Us).\!</math>  This makes for three levels of objective structure that must be put in coordination with each other, a task that is projected to be carried out in the appropriate OF of sign relations.  Corresponding to this aspect of structure in the OF, there is a parallel aspect of structure in the IF of sign relations.  Namely, the accessory sign relations that are used to discuss a targeted sign relation need to have signs for sets <math>(\underline{S}Ss),\!</math> signs for triples <math>(\underline{S}Ts),\!</math> and signs for the underlying elements <math>(\underline{S}Us).\!</math>  This accounts for three levels of syntactic structure in the IF of sign relations that must be coordinated with each other and also with the targeted levels of objective structure.
+
ERs of sign relations describe them as sets <math>(Ss)\!</math> of triples <math>(Ts)\!</math> of underlying elements <math>(Us).\!</math>  This makes for three levels of objective structure that must be put in coordination with each other, a task that is projected to be carried out in the appropriate OF of sign relations.  Corresponding to this aspect of structure in the OF, there is a parallel aspect of structure in the IF of sign relations.  Namely, the accessory sign relations that are used to discuss a targeted sign relation need to have signs for sets <math>{(\underline{S}Ss)},\!</math> signs for triples <math>{(\underline{S}Ts)},\!</math> and signs for the underlying elements <math>{(\underline{S}Us)}.\!</math>  This accounts for three levels of syntactic structure in the IF of sign relations that must be coordinated with each other and also with the targeted levels of objective structure.
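
A bare-bones sketch of the parallel structure may help: three levels of objects (underlying elements, triples, sets of triples) matched by three levels of signs naming them.  All of the particular strings below are hypothetical placeholders, not the entries of any of the Tables.

<syntaxhighlight lang="python">
# Level U: underlying elements (objects and signs of the target sign relation).
elements = ["A", "B", '"A"', '"i"']

# Level T: triples built from those elements.
triples = [("A", '"A"', '"i"'), ("A", '"i"', '"A"')]

# Level S: a set of such triples, i.e. the sign relation itself.
relation = set(triples)

# Accessory signs at each level: strings naming an element, a triple, and the set.
sign_of_element = repr(elements[0])     # a sign for an underlying element
sign_of_triple  = repr(triples[0])      # a sign for a triple
sign_of_set     = "L"                   # a sign for the whole relation

print(sign_of_set, "names a set of", len(relation), "triples")
</syntaxhighlight>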
   −
[Variant] IRs of sign relations describe them in terms of properties <math>(Ps)\!</math> that are taken as primitive entities in their own right.  /// refer to properties <math>(Ps)\!</math> of transactions <math>(Ts)\!</math> of underlying elements <math>(Us).\!</math>
+
'''[Variant]''' IRs of sign relations describe them in terms of properties <math>(Ps)\!</math> that are taken as primitive entities in their own right.  /// refer to properties <math>(Ps)\!</math> of transactions <math>(Ts)\!</math> of underlying elements <math>(Us).\!</math>
   −
[Variant] IRs of sign relations refer to properties of sets <math>(PSs),\!</math> properties of triples <math>(PTs),\!</math> and properties of underlying elements <math>(PUs).\!</math>  This amounts to three more levels of objective structure in the OF of the IR that need to be coordinated with each other and interlaced with the OF of the ER if the two are to be brought into the same discussion, possibly for the purpose of translating either into the other.  Accordingly, the accessory sign relations that are used to discuss an IR of a targeted sign relation need to have <math>\underline{S}PSs,\!</math> <math>\underline{S}PTs,\!</math> and <math>\underline{S}PUs.\!</math>
+
'''[Variant]''' IRs of sign relations refer to properties of sets <math>(PSs),\!</math> properties of triples <math>(PTs),\!</math> and properties of underlying elements <math>(PUs).\!</math>  This amounts to three more levels of objective structure in the OF of the IR that need to be coordinated with each other and interlaced with the OF of the ER if the two are to be brought into the same discussion, possibly for the purpose of translating either into the other.  Accordingly, the accessory sign relations that are used to discuss an IR of a targeted sign relation need to have <math>\underline{S}PSs,\!</math> <math>\underline{S}PTs,\!</math> and <math>\underline{S}PUs.\!</math>
    
===6.22. Extensional Representations of Sign Relations===
 
===6.22. Extensional Representations of Sign Relations===
Line 3,450: Line 3,448:  
Starting from a standpoint in concrete constructions, the easiest way to begin developing an explicit treatment of ERs is to gather the relevant materials in the forms already presented, to fill out the missing details and expand the abbreviated contents of these forms, and to review their full structures in a more formal light.  Consequently, this section inaugurates the formal discussion of ERs by taking a second look at the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> recollecting the Tables of their sign relations and finishing up the Tables of their dyadic components.  Since the form of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> no longer presents any novelty, I can exploit their second presentation as a first opportunity to examine a selection of finer points, previously overlooked.  Also, in the process of reviewing this material it is useful to anticipate a number of incidental issues that are reaching the point of becoming critical within this discussion and to begin introducing the generic types of technical devices that are needed to deal with them.
 
Starting from a standpoint in concrete constructions, the easiest way to begin developing an explicit treatment of ERs is to gather the relevant materials in the forms already presented, to fill out the missing details and expand the abbreviated contents of these forms, and to review their full structures in a more formal light.  Consequently, this section inaugurates the formal discussion of ERs by taking a second look at the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> recollecting the Tables of their sign relations and finishing up the Tables of their dyadic components.  Since the form of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> no longer presents any novelty, I can exploit their second presentation as a first opportunity to examine a selection of finer points, previously overlooked.  Also, in the process of reviewing this material it is useful to anticipate a number of incidental issues that are reaching the point of becoming critical within this discussion and to begin introducing the generic types of technical devices that are needed to deal with them.
   −
The next set of Tables summarizes the ERs of <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  For ease of reference, Tables&nbsp;48.1 and 49.1 repeat the contents of Tables&nbsp;1 and 2, respectively, the only difference being that appearances of ordinary quotation marks <math>({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!</math> are transcribed as invocations of the ''arch operator'' <math>({}^{\langle} \ldots {}^{\rangle}).\!</math>  The reason for this slight change of notation will be explained shortly.  The denotative components <math>\operatorname{Den}(\text{A})\!</math> and <math>\operatorname{Den}(\text{B})\!</math> are shown in the first two columns of Tables&nbsp;48.2 and 49.2, respectively, while the third column gives the transition from sign to object as an ordered pair <math>(s, o).\!</math>  The connotative components <math>\operatorname{Con}(\text{A})\!</math> and <math>\operatorname{Con}(\text{B})\!</math> are shown in the first two columns of Tables&nbsp;48.3 and 49.3, respectively, while the third column gives the transition from sign to interpretant as an ordered pair <math>(s, i).\!</math>
+
The next set of Tables summarizes the ERs of <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  For ease of reference, Tables&nbsp;48.1 and 49.1 repeat the contents of Tables&nbsp;1 and 2, respectively, the only difference being that appearances of ordinary quotation marks <math>({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!</math> are transcribed as invocations of the ''arch operator'' <math>({}^{\langle} \ldots {}^{\rangle}).\!</math>  The reason for this slight change of notation will be explained shortly.  The denotative components <math>\mathrm{Den}(\text{A})\!</math> and <math>\mathrm{Den}(\text{B})\!</math> are shown in the first two columns of Tables&nbsp;48.2 and 49.2, respectively, while the third column gives the transition from sign to object as an ordered pair <math>(s, o).\!</math>  The connotative components <math>\mathrm{Con}(\text{A})\!</math> and <math>\mathrm{Con}(\text{B})\!</math> are shown in the first two columns of Tables&nbsp;48.3 and 49.3, respectively, while the third column gives the transition from sign to interpretant as an ordered pair <math>(s, i).\!</math>
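
To fix ideas, here is a small sketch of how an extensional representation given as a set of triples yields the dyadic components by projection, following the pairings described above.  The particular triples shown are illustrative placeholders in the spirit of the <math>\text{A}\!</math> and <math>\text{B}\!</math> example, not a transcription of the Tables.

<syntaxhighlight lang="python">
# An extensional representation: a sign relation as a set of
# (object, sign, interpretant) triples.  The entries are illustrative only.
L = {
    ("A", '"A"', '"A"'),
    ("A", '"i"', '"A"'),
    ("B", '"B"', '"u"'),
}

# Denotative component: the sign-to-object transitions (s, o).
Den = {(s, o) for (o, s, i) in L}

# Connotative component: the sign-to-interpretant transitions (s, i).
Con = {(s, i) for (o, s, i) in L}

print(sorted(Den))
print(sorted(Con))
</syntaxhighlight>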
    
<br>
 
<br>
Line 3,456: Line 3,454:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 48.1} ~~ \operatorname{ER}(L_\text{A}) : \text{Extensional Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 48.1} ~~ \mathrm{ER}(L_\text{A}) : \text{Extensional Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 3,529: Line 3,527:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 48.2} ~~ \operatorname{ER}(\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 48.2} ~~ \mathrm{ER}(\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 3,578: Line 3,576:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 48.3} ~~ \operatorname{ER}(\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 48.3} ~~ \mathrm{ER}(\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Sign}\!</math>
 
| <math>\text{Sign}\!</math>
Line 3,651: Line 3,649:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 49.1} ~~ \operatorname{ER}(L_\text{B}) : \text{Extensional Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 49.1} ~~ \mathrm{ER}(L_\text{B}) : \text{Extensional Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 3,724: Line 3,722:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 49.2} ~~ \operatorname{ER}(\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 49.2} ~~ \mathrm{ER}(\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}~\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Object}\!</math>
 
| <math>\text{Object}\!</math>
Line 3,747: Line 3,745:  
\\
 
\\
 
({}^{\langle} \text{u} {}^{\rangle}, \text{A})
 
({}^{\langle} \text{u} {}^{\rangle}, \text{A})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|-
 
|-
 
| valign="bottom" width="33%" |
 
| valign="bottom" width="33%" |
Line 3,773: Line 3,771:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 49.3} ~~ \operatorname{ER}(\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 49.3} ~~ \mathrm{ER}(\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| <math>\text{Sign}\!</math>
 
| <math>\text{Sign}\!</math>
Line 3,852: Line 3,850:  
For the sake of maximum clarity and reusability of results, I begin by articulating the abstract skeleton of the paradigm structure, treating the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> as sundry aspects of a single, unitary, but still uninterpreted object.  Then I return at various successive stages to differentiate and individualize the two interpreters, to arrange more functional flesh on the basis provided by their structural bones, and to illustrate how their bare forms can be arrayed in many different styles of qualitative detail.
 
For the sake of maximum clarity and reusability of results, I begin by articulating the abstract skeleton of the paradigm structure, treating the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> as sundry aspects of a single, unitary, but still uninterpreted object.  Then I return at various successive stages to differentiate and individualize the two interpreters, to arrange more functional flesh on the basis provided by their structural bones, and to illustrate how their bare forms can be arrayed in many different styles of qualitative detail.
   −
In building connections between ERs and IRs of sign relations the discussion turns on two types of partially ordered sets, or ''posets''.  Suppose that <math>L\!</math> is one of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> and let <math>\operatorname{ER}(L)\!</math> be an ER of <math>L.\!</math>
+
In building connections between ERs and IRs of sign relations the discussion turns on two types of partially ordered sets, or ''posets''.  Suppose that <math>L\!</math> is one of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> and let <math>\mathrm{ER}(L)\!</math> be an ER of <math>L.\!</math>
    
In the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> both of their ERs are based on a common world set:
 
In the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> both of their ERs are based on a common world set:
Line 3,896: Line 3,894:  
To devise an IR of any relation <math>L\!</math> one needs to describe <math>L\!</math> in terms of properties of its ingredients.  Broadly speaking, the ingredients of a relation include its elementary relations or <math>n\!</math>-tuples and the elementary components of these <math>n\!</math>-tuples that reside in the relational domains.
 
To devise an IR of any relation <math>L\!</math> one needs to describe <math>L\!</math> in terms of properties of its ingredients.  Broadly speaking, the ingredients of a relation include its elementary relations or <math>n\!</math>-tuples and the elementary components of these <math>n\!</math>-tuples that reside in the relational domains.
   −
The poset <math>\operatorname{Pos}(W)\!</math> of interest here is the power set <math>\mathcal{P}(W) = \operatorname{Pow}(W).\!</math>
+
The poset <math>\mathrm{Pos}(W)\!</math> of interest here is the power set <math>\mathcal{P}(W) = \mathrm{Pow}(W).\!</math>
    
The elements of these posets are abstractly regarded as ''properties'' or ''propositions'' that apply to the elements of <math>W.\!</math>  These properties and propositions are independently given entities.  In other words, they are primitive elements in their own right, and cannot in general be defined in terms of points, but they exist in relation to these points, and their extensions can be represented as sets of points.
 
The elements of these posets are abstractly regarded as ''properties'' or ''propositions'' that apply to the elements of <math>W.\!</math>  These properties and propositions are independently given entities.  In other words, they are primitive elements in their own right, and cannot in general be defined in terms of points, but they exist in relation to these points, and their extensions can be represented as sets of points.
   −
[Variant] For a variety of foundational reasons that I do not fully understand, perhaps most of all because theoretically given structures have their real foundations outside the realm of theory, in empirically given structures, it is best to regard points, properties, and propositions as equally primitive elements, related to each other but not defined in terms of each other, analogous to the undefined elements of a geometry.
+
'''[Variant]''' For a variety of foundational reasons that I do not fully understand, perhaps most of all because theoretically given structures have their real foundations outside the realm of theory, in empirically given structures, it is best to regard points, properties, and propositions as equally primitive elements, related to each other but not defined in terms of each other, analogous to the undefined elements of a geometry.
   −
[Variant] There is a foundational issue arising in this context that I do not pretend to fully understand and cannot attempt to finally dispatch.  What I do understand I will try to express in terms of an aesthetic principle:  On balance, it seems best to regard extensional elements and intensional features as independently given entities.  This involves treating points and properties as fundamental realities in their own right, placing them on an equal basis with each other, and seeking their relation to each other, but not trying to reduce one to the other.
+
'''[Variant]''' There is a foundational issue arising in this context that I do not pretend to fully understand and cannot attempt to finally dispatch.  What I do understand I will try to express in terms of an aesthetic principle:  On balance, it seems best to regard extensional elements and intensional features as independently given entities.  This involves treating points and properties as fundamental realities in their own right, placing them on an equal basis with each other, and seeking their relation to each other, but not trying to reduce one to the other.
   −
The discussion is now specialized to consider the IRs of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> their denotative projections as the digraphs <math>\operatorname{Den}(L_\text{A})\!</math> and <math>\operatorname{Den}(L_\text{B}),\!</math> and their connotative projections as the digraphs <math>\operatorname{Con}(L_\text{A})\!</math> and <math>\operatorname{Con}(L_\text{B}).\!</math>  In doing this I take up two different strategies of representation:
+
The discussion is now specialized to consider the IRs of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}),\!</math> their denotative projections as the digraphs <math>\mathrm{Den}(L_\text{A})\!</math> and <math>\mathrm{Den}(L_\text{B}),\!</math> and their connotative projections as the digraphs <math>\mathrm{Con}(L_\text{A})\!</math> and <math>\mathrm{Con}(L_\text{B}).\!</math>  In doing this I take up two different strategies of representation:
   −
# The first strategy is called the ''literal coding'', because it sticks to obvious features of each syntactic element to contrive its code, or the ''<math>\mathcal{O}(n)\!</math> coding'', because it uses a number on the order of <math>n\!</math> logical features to represent a domain of <math>n\!</math> elements.
+
# The first strategy is called the ''literal coding'', because it sticks to obvious features of each syntactic element to contrive its code, or the ''<math>{\mathcal{O}(n)}\!</math> coding'', because it uses a number on the order of <math>n\!</math> logical features to represent a domain of <math>n\!</math> elements.
 
# The second strategy is called the ''analytic coding'', because it attends to the nuances of each sign's interpretation to fashion its code, or the ''<math>\log (n)\!</math> coding'', because it uses roughly <math>\log_2 (n)\!</math> binary features to represent a domain of <math>n\!</math> elements.
 
# The second strategy is called the ''analytic coding'', because it attends to the nuances of each sign's interpretation to fashion its code, or the ''<math>\log (n)\!</math> coding'', because it uses roughly <math>\log_2 (n)\!</math> binary features to represent a domain of <math>n\!</math> elements.
  −
<br>
  −
  −
<p align="center">'''Fragments'''</p>
  −
  −
In the formalized examples of IRs to be presented in this work, I will keep to the level of logical reasoning that is usually referred to as ''propositional calculus'' or ''sentential logic''.
  −
  −
The contrast between ERs and IRs is strongly correlated with another dimension of interest in the study of inquiry, namely, the tension between empirical and rational modes of inquiry.
  −
  −
This section begins the explicit discussion of ERs by taking a second look at the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>  Since the form of these examples no longer presents any novelty, this second presentation of <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> provides a first opportunity to introduce some new material.  In the process of reviewing this material, it is useful to anticipate a number of incidental issues that are on the point of becoming critical, and to begin introducing the generic types of technical devices that are needed to deal with them.
  −
  −
Therefore, the easiest way to begin an explicit treatment of ERs is by recollecting the Tables of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> and by finishing the corresponding Tables of their dyadic components.  Since the form of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> no longer presents any novelty, I can use the second presentation of these examples as a first opportunity to examine a selection of their finer points, previously overlooked.
  −
  −
Starting from this standpoint, the easiest way to begin developing an explicit treatment of ERs is to gather the relevant materials in the forms already presented, to fill out their missing details and expand the abbreviated contents of these forms, and to review their full structures in a more formal light.
  −
  −
Because of the perfect parallelism that the literal coding contrives between individual signs and grammatical categories, this arrangement illustrates not so much a code transformation as a re-interpretation of the original signs under different headings.
      
===6.24. Literal Intensional Representations===
 
===6.24. Literal Intensional Representations===
Line 4,376: Line 4,358:  
Using two different strategies of representation:
 
Using two different strategies of representation:
   −
'''Literal Coding.'''  The first strategy is called the ''literal coding'' because it sticks to obvious features of each syntactic element to contrive its code, or the ''<math>\mathcal{O}(n)\!</math> coding'', because it uses a number on the order of <math>n\!</math> logical features to represent a domain of <math>n\!</math> elements.
+
'''Literal Coding.'''  The first strategy is called the ''literal coding'' because it sticks to obvious features of each syntactic element to contrive its code, or the ''<math>{\mathcal{O}(n)}\!</math> coding'', because it uses a number on the order of <math>n\!</math> logical features to represent a domain of <math>n\!</math> elements.
    
Being superficial as a matter of principle, or adhering to the surface appearances of signs, enjoys the initial advantage that the very same codes can be used by any interpreter that is capable of observing them.  The down side of resorting to this technique is that it typically uses an excessive number of logical dimensions to get each point of the intended space across.
 
Being superficial as a matter of principle, or adhering to the surface appearances of signs, enjoys the initial advantage that the very same codes can be used by any interpreter that is capable of observing them.  The down side of resorting to this technique is that it typically uses an excessive number of logical dimensions to get each point of the intended space across.
   −
Even while operating within the general lines of the literal, superficial, or <math>\mathcal{O}(n)\!</math> strategy, there are still a number of choices to be made in the style of coding to be employed.  For example, if there is an obvious distinction between different components of the world, like that between the objects in <math>O = \{ \text{A}, \text{B} \}\!</math> and the signs in <math>S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!</math> then it is common to let this distinction go formally unmarked in the LIR, that is, to omit the requirement of declaring an explicit logical feature to make a note of it in the formal coding.  The distinction itself, as a property of reality, is in no danger of being obliterated or permanently erased, but it can be obscured and temporarily ignored.  In practice, the distinction is not so much ignored as it is casually observed and informally attended to, usually being marked by incidental indices in the context of the representation.
+
Even while operating within the general lines of the literal, superficial, or <math>{\mathcal{O}(n)}\!</math> strategy, there are still a number of choices to be made in the style of coding to be employed.  For example, if there is an obvious distinction between different components of the world, like that between the objects in <math>O = \{ \text{A}, \text{B} \}\!</math> and the signs in <math>S = \{ {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!</math> then it is common to let this distinction go formally unmarked in the LIR, that is, to omit the requirement of declaring an explicit logical feature to make a note of it in the formal coding.  The distinction itself, as a property of reality, is in no danger of being obliterated or permanently erased, but it can be obscured and temporarily ignored.  In practice, the distinction is not so much ignored as it is casually observed and informally attended to, usually being marked by incidental indices in the context of the representation.
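
For concreteness, one way the literal or <math>\mathcal{O}(n)\!</math> coding might be realized is sketched below, with one logical feature per element of the world set and the object/sign distinction left informally marked by the spelling of the mnemonic names.  The encoding shown is only a sketch under these assumptions, not the canonical coding of the Tables.

<syntaxhighlight lang="python">
# The world set W, with objects and quoted signs kept apart only by their spelling.
W = ["A", "B", '"A"', '"B"', '"i"', '"u"']

# Literal (O(n)) coding: one feature per element, so each element is coded
# by the one-hot bit vector that sets exactly its own feature.
def literal_code(w):
    return tuple(1 if w == v else 0 for v in W)

for w in W:
    print(w, literal_code(w))

# By contrast, an analytic (log n) coding would need only 3 bits
# to distinguish these 6 elements, since ceil(log2(6)) = 3.
</syntaxhighlight>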
    
'''Literal Coding'''
 
'''Literal Coding'''
Line 4,459: Line 4,441:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 53.1} ~~ \text{Elements of} ~ \operatorname{ER}(W)\!</math>
+
<math>\text{Table 53.1} ~~ \text{Elements of} ~ \mathrm{ER}(W)\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| <math>\text{Mnemonic Element}\!</math> <br><br> <math>w \in W\!</math>
 
| <math>\text{Mnemonic Element}\!</math> <br><br> <math>w \in W\!</math>
Line 4,513: Line 4,495:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 53.2} ~~ \text{Features of} ~ \operatorname{LIR}(W)\!</math>
+
<math>\text{Table 53.2} ~~ \text{Features of} ~ \mathrm{LIR}(W)\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
|
 
|
Line 4,571: Line 4,553:  
<br>
 
<br>
   −
If the world of <math>\text{A}\!</math> and <math>\text{B},\!</math> the set <math>W = \{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!</math> is viewed abstractly, as an arbitrary set of six atomic points, then there are exactly <math>2^6 = 64\!</math> ''abstract properties'' or ''potential attributes'' that might be applied to or recognized in these points.  The elements of <math>W\!</math> that possess a given property form a subset of <math>W\!</math> called the ''extension'' of that property.  Thus the extensions of abstract properties are exactly the subsets of <math>W.\!</math>  The set of all subsets of <math>W\!</math> is called the ''power set'' of <math>W,\!</math> notated as <math>\operatorname{Pow}(W)\!</math> or <math>\mathcal{P}(W).\!</math> In order to make this way of talking about properties consistent with the previous definition of reality, it is necessary to say that one potential property is never realized, since no point has it, and its extension is the empty set <math>\varnothing = \{ \}.\!</math>  All the ''natural'' properties of points that one observes in a concrete situation, properties whose extensions are known as ''natural kinds'', can be recognized among the ''abstract'', ''arbitrary'', or ''set-theoretic'' properties that are systematically generated in this way.  Typically, however, many of these abstract properties will not be recognized as falling among the more natural kinds.
+
If the world of <math>\text{A}\!</math> and <math>\text{B},\!</math> the set <math>W = \{ \text{A}, \text{B}, {}^{\backprime\backprime} \text{A} {}^{\prime\prime}, {}^{\backprime\backprime} \text{B} {}^{\prime\prime}, {}^{\backprime\backprime} \text{i} {}^{\prime\prime}, {}^{\backprime\backprime} \text{u} {}^{\prime\prime} \},\!</math> is viewed abstractly, as an arbitrary set of six atomic points, then there are exactly <math>2^6 = 64\!</math> ''abstract properties'' or ''potential attributes'' that might be applied to or recognized in these points.  The elements of <math>W\!</math> that possess a given property form a subset of <math>W\!</math> called the ''extension'' of that property.  Thus the extensions of abstract properties are exactly the subsets of <math>W.\!</math>  The set of all subsets of <math>W\!</math> is called the ''power set'' of <math>W,\!</math> notated as <math>\mathrm{Pow}(W)\!</math> or <math>\mathcal{P}(W).\!</math> In order to make this way of talking about properties consistent with the previous definition of reality, it is necessary to say that one potential property is never realized, since no point has it, and its extension is the empty set <math>\varnothing = \{ \}.\!</math>  All the ''natural'' properties of points that one observes in a concrete situation, properties whose extensions are known as ''natural kinds'', can be recognized among the ''abstract'', ''arbitrary'', or ''set-theoretic'' properties that are systematically generated in this way.  Typically, however, many of these abstract properties will not be recognized as falling among the more natural kinds.
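As a quick computational check on this count, the following Python sketch enumerates the subsets of a six-element stand-in for <math>W\!</math> and confirms that there are <math>2^6 = 64\!</math> potential properties, each identified with its extension; the string names are purely illustrative.

<pre>
from itertools import combinations

# A stand-in for the world set W: two objects and four signs.
W = ["A", "B", '"A"', '"B"', '"i"', '"u"']

# Identify each abstract property with its extension, a subset of W.
extensions = [set(c) for r in range(len(W) + 1)
                     for c in combinations(W, r)]

print(len(extensions))      # 64 = 2**6 potential properties
print(set() in extensions)  # True: the one never-realized property has the empty extension
</pre>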
   −
Tables&nbsp;54.1, 54.2, and 54.3 show three different ways of representing the elements of the world set <math>W\!</math> as vectors in the coordinate space <math>\underline{W}\!</math> and as singular propositions in the universe of discourse <math>W^\circ\!.</math>  Altogether, these Tables present the ''literal'' codes for the elements of <math>\underline{W}\!</math> and <math>W^\circ\!</math> in their ''mnemonic'', ''pragmatic'', and ''abstract'' versions, respectively.  In each Table, Column&nbsp;1 lists the element <math>w \in W,\!</math> while Column&nbsp;2 gives the corresponding coordinate vector <math>\underline{w} \in \underline{W}\!</math> in the form of a bit string.  The next two Columns represent each <math>w \in W\!</math> as a proposition in <math>W^\circ\!,</math> in effect, reconstituting it as a function <math>w : \underline{W} \to \mathbb{B}.</math>  Column&nbsp;3 shows the propositional expression of each element in the form of a conjunct term, in other words, as a logical product of positive and negative features.  Column&nbsp;4 gives the compact code for each element, using a conjunction of positive features in subscripted angle brackets to represent the singular proposition corresponding to each element.
+
Tables&nbsp;54.1, 54.2, and 54.3 show three different ways of representing the elements of the world set <math>W\!</math> as vectors in the coordinate space <math>\underline{W}\!</math> and as singular propositions in the universe of discourse <math>W^\circ.\!</math>  Altogether, these Tables present the ''literal'' codes for the elements of <math>\underline{W}\!</math> and <math>W^\circ\!</math> in their ''mnemonic'', ''pragmatic'', and ''abstract'' versions, respectively.  In each Table, Column&nbsp;1 lists the element <math>w \in W,\!</math> while Column&nbsp;2 gives the corresponding coordinate vector <math>\underline{w} \in \underline{W}\!</math> in the form of a bit string.  The next two Columns represent each <math>w \in W\!</math> as a proposition in <math>W^\circ\!,</math> in effect, reconstituting it as a function <math>w : \underline{W} \to \mathbb{B}.</math>  Column&nbsp;3 shows the propositional expression of each element in the form of a conjunct term, in other words, as a logical product of positive and negative features.  Column&nbsp;4 gives the compact code for each element, using a conjunction of positive features in subscripted angle brackets to represent the singular proposition corresponding to each element.
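A minimal sketch of the literal coding just described, assuming one logical feature per element of <math>W\!</math> and using ad hoc ASCII names in place of the underlined mnemonic features: each element maps to a one-hot bit string and to the conjunct term that is true of that element alone.

<pre>
# Illustrative literal coding: one feature per element of W.
# The feature names are ASCII stand-ins for the underlined mnemonics.
features = ["a", "b", "qa", "qb", "qi", "qu"]
W        = ["A", "B", '"A"', '"B"', '"i"', '"u"']

for k, w in enumerate(W):
    bits     = "".join("1" if j == k else "0" for j in range(len(W)))
    conjunct = " ".join(f if j == k else "(" + f + ")"
                        for j, f in enumerate(features))
    print(f"{w:>4}  {bits}  {conjunct}")

# e.g.    A  100000  a (b) (qa) (qb) (qi) (qu)
</pre>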
    
<br>
 
<br>
Line 4,876: Line 4,858:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 55.1} ~~ \operatorname{LIR}_1 (L_\text{A}) : \text{Literal Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 55.1} ~~ \mathrm{LIR}_1 (L_\text{A}) : \text{Literal Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 4,942: Line 4,924:  
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|}
 
|}
   Line 4,949: Line 4,931:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 55.2} ~~ \operatorname{LIR}_1 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 55.2} ~~ \mathrm{LIR}_1 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 4,966: Line 4,948:  
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,002: Line 4,984:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 55.3} ~~ \operatorname{LIR}_1 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 55.3} ~~ \mathrm{LIR}_1 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 5,030: Line 5,012:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 5,066: Line 5,048:  
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
 
{\langle\underline{\underline{\text{u}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 5,091: Line 5,073:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 56.1} ~~ \operatorname{LIR}_1 (L_\text{B}) : \text{Literal Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 56.1} ~~ \mathrm{LIR}_1 (L_\text{B}) : \text{Literal Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,157: Line 5,139:  
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|}
 
|}
   Line 5,164: Line 5,146:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 56.2} ~~ \operatorname{LIR}_1 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 56.2} ~~ \mathrm{LIR}_1 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,217: Line 5,199:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 56.3} ~~ \operatorname{LIR}_1 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 56.3} ~~ \mathrm{LIR}_1 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 5,245: Line 5,227:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 5,281: Line 5,263:  
\\[4pt]
 
\\[4pt]
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
 
{\langle\underline{\underline{\text{i}}}\rangle}_W
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}W}
+
\rangle}_{\mathrm{d}W}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}W}
+
0_{\mathrm{d}W}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 5,364: Line 5,346:  
\underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}}
 
\underline{\underline{{}^{\backprime\backprime} \text{u} {}^{\prime\prime}}}
 
& \}
 
& \}
\end{array}</math>
+
\end{array}\!</math>
 
|}
 
|}
   Line 5,622: Line 5,604:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 58.1} ~~ \operatorname{LIR}_2 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 58.1} ~~ \mathrm{LIR}_2 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,663: Line 5,645:  
~\underline{\underline{\text{i}}}~
 
~\underline{\underline{\text{i}}}~
 
(\underline{\underline{\text{u}}})
 
(\underline{\underline{\text{u}}})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,751: Line 5,733:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 58.2} ~~ \operatorname{LIR}_2 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 58.2} ~~ \mathrm{LIR}_2 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 5,805: Line 5,787:  
(\underline{\underline{\text{i}}})
 
(\underline{\underline{\text{i}}})
 
~\underline{\underline{\text{u}}}~
 
~\underline{\underline{\text{u}}}~
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,820: Line 5,802:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 58.3} ~~ \operatorname{LIR}_2 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 58.3} ~~ \mathrm{LIR}_2 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 5,847: Line 5,829:  
~\underline{\underline{\text{i}}}~
 
~\underline{\underline{\text{i}}}~
 
(\underline{\underline{\text{u}}})
 
(\underline{\underline{\text{u}}})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
Line 5,965: Line 5,947:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 59.1} ~~ \operatorname{LIR}_2 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 59.1} ~~ \mathrm{LIR}_2 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,094: Line 6,076:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 59.2} ~~ \operatorname{LIR}_2 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 59.2} ~~ \mathrm{LIR}_2 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,163: Line 6,145:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 59.3} ~~ \operatorname{LIR}_2 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 59.3} ~~ \mathrm{LIR}_2 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 6,234: Line 6,216:  
(\underline{\underline{\text{di}}})
 
(\underline{\underline{\text{di}}})
 
(\underline{\underline{\text{du}}})
 
(\underline{\underline{\text{du}}})
\end{matrix}</math>
+
\end{matrix}\!</math>
 
|-
 
|-
 
| valign="bottom" |
 
| valign="bottom" |
Line 6,308: Line 6,290:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 60.1} ~~ \operatorname{LIR}_3 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 60.1} ~~ \mathrm{LIR}_3 (L_\text{A}) : \text{Lateral Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,381: Line 6,363:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 60.2} ~~ \operatorname{LIR}_3 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 60.2} ~~ \mathrm{LIR}_3 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,434: Line 6,416:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 60.3} ~~ \operatorname{LIR}_3 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 60.3} ~~ \mathrm{LIR}_3 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 6,462: Line 6,444:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 6,501: Line 6,483:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 6,523: Line 6,505:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 61.1} ~~ \operatorname{LIR}_3 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 61.1} ~~ \mathrm{LIR}_3 (L_\text{B}) : \text{Lateral Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,596: Line 6,578:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 61.2} ~~ \operatorname{LIR}_3 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 61.2} ~~ \mathrm{LIR}_3 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 6,649: Line 6,631:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 61.3} ~~ \operatorname{LIR}_3 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 61.3} ~~ \mathrm{LIR}_3 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 6,677: Line 6,659:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{a}}}
+
\mathrm{d}\underline{\underline{\text{a}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 6,716: Line 6,698:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
 
{\langle
 
{\langle
\operatorname{d}\underline{\underline{\text{b}}}
+
\mathrm{d}\underline{\underline{\text{b}}}
 
~
 
~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
\rangle}_{\operatorname{d}Y}
+
\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
0_{\operatorname{d}Y}
+
0_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,017: Line 6,999:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 65.1} ~~ \operatorname{AIR}_1 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 65.1} ~~ \mathrm{AIR}_1 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,090: Line 7,072:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 65.2} ~~ \operatorname{AIR}_1 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 65.2} ~~ \mathrm{AIR}_1 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,139: Line 7,121:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 65.3} ~~ \operatorname{AIR}_1 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 65.3} ~~ \mathrm{AIR}_1 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,212: Line 7,194:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 66.1} ~~ \operatorname{AIR}_1 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 66.1} ~~ \mathrm{AIR}_1 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,285: Line 7,267:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 66.2} ~~ \operatorname{AIR}_1 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 66.2} ~~ \mathrm{AIR}_1 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,334: Line 7,316:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 66.3} ~~ \operatorname{AIR}_1 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 66.3} ~~ \mathrm{AIR}_1 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,407: Line 7,389:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 67.1} ~~ \operatorname{AIR}_2 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
+
<math>\text{Table 67.1} ~~ \mathrm{AIR}_2 (L_\text{A}) : \text{Analytic Representation of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,480: Line 7,462:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 67.2} ~~ \operatorname{AIR}_2 (\operatorname{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 67.2} ~~ \mathrm{AIR}_2 (\mathrm{Den}(L_\text{A})) : \text{Denotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,529: Line 7,511:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 67.3} ~~ \operatorname{AIR}_2 (\operatorname{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
+
<math>\text{Table 67.3} ~~ \mathrm{AIR}_2 (\mathrm{Con}(L_\text{A})) : \text{Connotative Component of} ~ L_\text{A}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,557: Line 7,539:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 7,588: Line 7,570:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,602: Line 7,584:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 68.1} ~~ \operatorname{AIR}_2 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
+
<math>\text{Table 68.1} ~~ \mathrm{AIR}_2 (L_\text{B}) : \text{Analytic Representation of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,675: Line 7,657:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 68.2} ~~ \operatorname{AIR}_2 (\operatorname{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 68.2} ~~ \mathrm{AIR}_2 (\mathrm{Den}(L_\text{B})) : \text{Denotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Object}\!</math>
 
| width="33%" | <math>\text{Object}\!</math>
Line 7,724: Line 7,706:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 68.3} ~~ \operatorname{AIR}_2 (\operatorname{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
+
<math>\text{Table 68.3} ~~ \mathrm{AIR}_2 (\mathrm{Con}(L_\text{B})) : \text{Connotative Component of} ~ L_\text{B}\!</math>
 
|- style="height:40px; background:#f0f0ff"
 
|- style="height:40px; background:#f0f0ff"
 
| width="33%" | <math>\text{Sign}\!</math>
 
| width="33%" | <math>\text{Sign}\!</math>
Line 7,752: Line 7,734:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
Line 7,783: Line 7,765:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}\text{n}\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}\text{n}\rangle}_{\mathrm{d}Y}
 
\\[4pt]
 
\\[4pt]
{\langle\operatorname{d}!\rangle}_{\operatorname{d}Y}
+
{\langle\mathrm{d}!\rangle}_{\mathrm{d}Y}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,827: Line 7,809:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
~x~ ~\operatorname{at}~ t
+
~x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
~x~ ~\operatorname{at}~ t
+
~x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(x) ~\operatorname{at}~ t
+
(x) ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(x) ~\operatorname{at}~ t
+
(x) ~\mathrm{at}~ t
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
~\operatorname{d}x~ ~\operatorname{at}~ t
+
~\mathrm{d}x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}x) ~\operatorname{at}~ t
+
(\mathrm{d}x) ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}x~ ~\operatorname{at}~ t
+
~\mathrm{d}x~ ~\mathrm{at}~ t
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}x) ~\operatorname{at}~ t
+
(\mathrm{d}x) ~\mathrm{at}~ t
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(x) ~\operatorname{at}~ t'
+
(x) ~\mathrm{at}~ t'
 
\\[4pt]
 
\\[4pt]
~x~ ~\operatorname{at}~ t'
+
~x~ ~\mathrm{at}~ t'
 
\\[4pt]
 
\\[4pt]
~x~ ~\operatorname{at}~ t'
+
~x~ ~\mathrm{at}~ t'
 
\\[4pt]
 
\\[4pt]
(x) ~\operatorname{at}~ t'
+
(x) ~\mathrm{at}~ t'
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,861: Line 7,843:  
It might be thought that a notion of real time <math>(t \in \mathbb{R})\!</math> is needed at this point to fund the account of sequential processes.  From a logical point of view, however, I think it will be found that it is precisely out of such data that the notion of time has to be constructed.
 
It might be thought that a notion of real time <math>(t \in \mathbb{R})\!</math> is needed at this point to fund the account of sequential processes.  From a logical point of view, however, I think it will be found that it is precisely out of such data that the notion of time has to be constructed.
   −
The symbol <math>{}^{\backprime\backprime} \ominus\!\!- {}^{\prime\prime},</math> read ''thus'', ''then'', or ''yields'', can be used to mark sequential inferences, allowing for expressions like <math>x \land \operatorname{d}x \ominus\!\!-~ (x).\!</math>  In each case, a suitable context of temporal moments <math>(t, t')\!</math> is understood to underlie the inference.
+
The symbol <math>{}^{\backprime\backprime} \ominus\!\!- {}^{\prime\prime},</math> read ''thus'', ''then'', or ''yields'', can be used to mark sequential inferences, allowing for expressions like <math>x \land \mathrm{d}x \ominus\!\!-~ (x).\!</math>  In each case, a suitable context of temporal moments <math>(t, t')\!</math> is understood to underlie the inference.
   −
A ''sequential inference constraint'' is a logical condition that applies to a temporal system, providing information about the kinds of sequential inference that apply to the system in a hopefully large number of situations.  Typically, a sequential inference constraint is formulated in intensional terms and expressed by means of a collection of sequential inference rules or schemata that tell what sequential inferences apply to the system in particular situations.  Since it has the status of a logical theory about an empirical system, a sequential inference constraint is subject to being reformulated in terms of its set-theoretic extension, and it can be established as existing in the customary sort of dual relationship with this extension.  Logically, it determines, and, empirically, it is determined by, the corresponding set of ''sequential inference triples'', the <math>(x, y, z)\!</math> such that <math>x \land y \ominus\!\!-~ z.\!</math>  The set-theoretic extension of a sequential inference constraint is thus a triadic relation, generically notated as  <math>\ominus,\!</math> where <math>\ominus \subseteq X \times \operatorname{d}X \times X\!</math> is defined as follows.
+
A ''sequential inference constraint'' is a logical condition that applies to a temporal system, providing information about the kinds of sequential inference that apply to the system in a hopefully large number of situations.  Typically, a sequential inference constraint is formulated in intensional terms and expressed by means of a collection of sequential inference rules or schemata that tell what sequential inferences apply to the system in particular situations.  Since it has the status of a logical theory about an empirical system, a sequential inference constraint is subject to being reformulated in terms of its set-theoretic extension, and it can be established as existing in the customary sort of dual relationship with this extension.  Logically, it determines, and, empirically, it is determined by, the corresponding set of ''sequential inference triples'', the <math>(x, y, z)\!</math> such that <math>x \land y \ominus\!\!-~ z.\!</math>  The set-theoretic extension of a sequential inference constraint is thus a triadic relation, generically notated as  <math>\ominus,\!</math> where <math>\ominus \subseteq X \times \mathrm{d}X \times X\!</math> is defined as follows.
    
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>\ominus ~=~ \{ (x, y, z) \in  X \times \operatorname{d}X \times X : x \land y \ominus\!\!-~ z \}.\!</math>
+
| <math>\ominus ~=~ \{ (x, y, z) \in  X \times \mathrm{d}X \times X : x \land y \ominus\!\!-~ z \}.\!</math>
 
|}
 
|}
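Read extensionally, the definition can be tried out on a one-feature state space.  The Python sketch below builds the set of sequential inference triples for the rule that a changing feature is negated and a steady feature is preserved at the next moment, in the spirit of the transition table above; the encoding of <math>\mathrm{d}x\!</math> as a toggle bit is an assumption made purely for illustration.

<pre>
from itertools import product

# One feature x in B, with dx = 1 read as "x is changing" and dx = 0 as "x holds".
X, dX = (0, 1), (0, 1)

# Extension of the constraint: (x, dx, x') is in the relation exactly when
# x at t together with dx at t yields x' at the next moment t'.
theta = {(x, dx, x ^ dx) for x, dx in product(X, dX)}

print(sorted(theta))
# [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
# e.g. (1, 1, 0) reads:  x and dx, thus (x) at t'.
</pre>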
   −
Using the appropriate isomorphisms, or recognizing how, in terms of the information given, each of several descriptions is tantamount to the same object, the triadic relation <math>\ominus \subseteq X \times \operatorname{d}X \times X\!</math> constituted by a sequential inference constraint can be interpreted as a proposition <math>\ominus : X \times \operatorname{d}X \times X \to \mathbb{B}\!</math> about sequential inference triples, and thus as a map <math>\ominus : \operatorname{d}X \to (X \times X \to \mathbb{B})\!</math> from the space <math>\operatorname{d}X\!</math> of differential states to the space of propositions about transitions in <math>X.\!</math>
+
Using the appropriate isomorphisms, or recognizing how, in terms of the information given, each of several descriptions is tantamount to the same object, the triadic relation <math>\ominus \subseteq X \times \mathrm{d}X \times X\!</math> constituted by a sequential inference constraint can be interpreted as a proposition <math>\ominus : X \times \mathrm{d}X \times X \to \mathbb{B}\!</math> about sequential inference triples, and thus as a map <math>\ominus : \mathrm{d}X \to (X \times X \to \mathbb{B})\!</math> from the space <math>\mathrm{d}X\!</math> of differential states to the space of propositions about transitions in <math>X.\!</math>
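Continuing the same toy encoding, the passage from the triadic relation to its propositional and curried forms can be spelled out in a few lines of Python; again this is only an illustrative sketch, not part of the text's formal apparatus.

<pre>
from itertools import product

X, dX = (0, 1), (0, 1)
theta = {(x, dx, x ^ dx) for x, dx in product(X, dX)}  # toy relation from above

# Proposition form: a Boolean-valued map on X x dX x X.
def prop(x, dx, x_next):
    return (x, dx, x_next) in theta

# Curried form: each differential state yields a proposition about
# transitions in X, giving a map dX -> (X x X -> B).
def transition_prop(dx):
    return lambda x, x_next: prop(x, dx, x_next)

hold, toggle = transition_prop(0), transition_prop(1)
print(hold(1, 1), toggle(1, 0), toggle(1, 1))  # True True False
</pre>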
    
<br>
 
<br>
   −
'''Question.'''  Group Actions?  <math>r : \operatorname{d}X \to (X \to X)\!</math>
+
'''Question.'''  Group Actions?  <math>r : \mathrm{d}X \to (X \to X)\!</math>
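One way of reading the question, sketched under the same toy assumptions: each differential state determines a transformation of the state space, and toggling independent features gives transformations that compose like the Klein four-group <math>V_4\!</math> treated in the tables below.  The code is a hypothetical illustration, not an answer supplied by the text.

<pre>
from itertools import product

# States are pairs of bits (m, n); a differential state says which features toggle.
X  = list(product((0, 1), repeat=2))
dX = list(product((0, 1), repeat=2))

def act(dx):
    """Map a differential state to the transformation X -> X that it induces."""
    dm, dn = dx
    return lambda s: (s[0] ^ dm, s[1] ^ dn)

# Each action is an involution, and the four of them compose like V_4.
for dx in dX:
    r = act(dx)
    assert all(r(r(s)) == s for s in X)
print("each differential action squares to the identity, as in V_4")
</pre>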
    
<br>
 
<br>
Line 7,879: Line 7,861:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 70.1} ~~ \text{Group Representation} ~ \operatorname{Rep}^\text{A} (V_4)\!</math>
+
<math>\text{Table 70.1} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{A} (V_4)\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
Line 7,899: Line 7,881:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle \operatorname{d}! \rangle
+
\langle \mathrm{d}! \rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{a}}} ~
+
\mathrm{d}\underline{\underline{\text{a}}} ~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{b}}} ~
+
\mathrm{d}\underline{\underline{\text{b}}} ~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
\langle \operatorname{d}* \rangle
+
\langle \mathrm{d}* \rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{a}}} \cdot
+
\mathrm{d}\underline{\underline{\text{a}}} \cdot
\operatorname{d}\underline{\underline{\text{i}}} ~ !
+
\mathrm{d}\underline{\underline{\text{i}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{b}}} \cdot
+
\mathrm{d}\underline{\underline{\text{b}}} \cdot
\operatorname{d}\underline{\underline{\text{u}}} ~ !
+
\mathrm{d}\underline{\underline{\text{u}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 7,951: Line 7,933:  
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{ai}}
+
\mathrm{d}_{\text{ai}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{bu}}
+
\mathrm{d}_{\text{bu}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{ai}} * \operatorname{d}_{\text{bu}}
+
\mathrm{d}_{\text{ai}} * \mathrm{d}_{\text{bu}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 7,963: Line 7,945:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 70.2} ~~ \text{Group Representation} ~ \operatorname{Rep}^\text{B} (V_4)\!</math>
+
<math>\text{Table 70.2} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{B} (V_4)\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
Line 7,983: Line 7,965:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
(\operatorname{d}\underline{\underline{\text{b}}})
+
(\mathrm{d}\underline{\underline{\text{b}}})
(\operatorname{d}\underline{\underline{\text{i}}})
+
(\mathrm{d}\underline{\underline{\text{i}}})
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\underline{\underline{\text{a}}})
+
(\mathrm{d}\underline{\underline{\text{a}}})
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
(\operatorname{d}\underline{\underline{\text{u}}})
+
(\mathrm{d}\underline{\underline{\text{u}}})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\underline{\underline{\text{a}}}~
+
~\mathrm{d}\underline{\underline{\text{a}}}~
~\operatorname{d}\underline{\underline{\text{b}}}~
+
~\mathrm{d}\underline{\underline{\text{b}}}~
~\operatorname{d}\underline{\underline{\text{i}}}~
+
~\mathrm{d}\underline{\underline{\text{i}}}~
~\operatorname{d}\underline{\underline{\text{u}}}~
+
~\mathrm{d}\underline{\underline{\text{u}}}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle \operatorname{d}! \rangle
+
\langle \mathrm{d}! \rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{a}}} ~
+
\mathrm{d}\underline{\underline{\text{a}}} ~
\operatorname{d}\underline{\underline{\text{u}}}
+
\mathrm{d}\underline{\underline{\text{u}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
 
\langle
 
\langle
\operatorname{d}\underline{\underline{\text{b}}} ~
+
\mathrm{d}\underline{\underline{\text{b}}} ~
\operatorname{d}\underline{\underline{\text{i}}}
+
\mathrm{d}\underline{\underline{\text{i}}}
 
\rangle
 
\rangle
 
\\[4pt]
 
\\[4pt]
\langle \operatorname{d}* \rangle
+
\langle \mathrm{d}* \rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{a}}} \cdot
+
\mathrm{d}\underline{\underline{\text{a}}} \cdot
\operatorname{d}\underline{\underline{\text{u}}} ~ !
+
\mathrm{d}\underline{\underline{\text{u}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\underline{\underline{\text{b}}} \cdot
+
\mathrm{d}\underline{\underline{\text{b}}} \cdot
\operatorname{d}\underline{\underline{\text{i}}} ~ !
+
\mathrm{d}\underline{\underline{\text{i}}} ~ !
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 8,035: Line 8,017:  
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{au}}
+
\mathrm{d}_{\text{au}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{bi}}
+
\mathrm{d}_{\text{bi}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{au}} * \operatorname{d}_{\text{bi}}
+
\mathrm{d}_{\text{au}} * \mathrm{d}_{\text{bi}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,047: Line 8,029:  
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:90%"
 
|+ style="height:30px" |
 
|+ style="height:30px" |
<math>\text{Table 70.3} ~~ \text{Group Representation} ~ \operatorname{Rep}^\text{C} (V_4)\!</math>
+
<math>{\text{Table 70.3} ~~ \text{Group Representation} ~ \mathrm{Rep}^\text{C} (V_4)}\!</math>
 
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
 
| width="16%" | <math>\begin{matrix} \text{Abstract} \\ \text{Element} \end{matrix}</math>
Line 8,067: Line 8,049:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle\operatorname{d}!\rangle
+
\langle\mathrm{d}!\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{m}\rangle
+
\langle\mathrm{d}\text{m}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{n}\rangle
+
\langle\mathrm{d}\text{n}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}*\rangle
+
\langle\mathrm{d}*\rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{m}!
+
\mathrm{d}\text{m}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{n}!
+
\mathrm{d}\text{n}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 8,103: Line 8,085:  
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}}
+
\mathrm{d}_{\text{m}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{n}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}} * \operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{m}} * \mathrm{d}_{\text{n}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,135: Line 8,117:  
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
(\operatorname{d}\text{n})
+
(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\text{m})
+
(\mathrm{d}\text{m})
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~
+
~\mathrm{d}\text{m}~
~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\langle\operatorname{d}!\rangle
+
\langle\mathrm{d}!\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{m}\rangle
+
\langle\mathrm{d}\text{m}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}\text{n}\rangle
+
\langle\mathrm{d}\text{n}\rangle
 
\\[4pt]
 
\\[4pt]
\langle\operatorname{d}*\rangle
+
\langle\mathrm{d}*\rangle
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}!
+
\mathrm{d}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{m}!
+
\mathrm{d}\text{m}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}\text{n}!
+
\mathrm{d}\text{n}!
 
\\[4pt]
 
\\[4pt]
\operatorname{d}*
+
\mathrm{d}*
 
\end{matrix}</math>
 
\end{matrix}</math>
 
| valign="bottom" |
 
| valign="bottom" |
Line 8,171: Line 8,153:  
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}}
+
\mathrm{d}_{\text{m}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{n}}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_{\text{m}} * \operatorname{d}_{\text{n}}
+
\mathrm{d}_{\text{m}} * \mathrm{d}_{\text{n}}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,186: Line 8,168:  
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="25%" | <math>\text{Group Coset}\!</math>
 
| width="25%" | <math>\text{Group Coset}\!</math>
| width="25%" | <math>\text{Logical Coset}\!</math>
+
| width="25%" | <math>\text{Logical Coset}~\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
|-
 
|-
 
| <math>G_\text{m}\!</math>
 
| <math>G_\text{m}\!</math>
| <math>(\operatorname{d}\text{m})\!</math>
+
| <math>(\mathrm{d}\text{m})\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})(\operatorname{d}\text{n})
+
(\mathrm{d}\text{m})(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
(\operatorname{d}\text{m})~\operatorname{d}\text{n}~
+
(\mathrm{d}\text{m})~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
Line 8,202: Line 8,184:  
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{n}
+
\mathrm{d}_\text{n}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
| <math>G_\text{m} * \operatorname{d}_\text{m}\!</math>
+
| <math>G_\text{m} * \mathrm{d}_\text{m}\!</math>
| <math>\operatorname{d}\text{m}\!</math>
+
| <math>\mathrm{d}\text{m}\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
~\operatorname{d}\text{m}~(\operatorname{d}\text{n})
+
~\mathrm{d}\text{m}~(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{m}~~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}_\text{m}
+
\mathrm{d}_\text{m}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{n} * \operatorname{d}_\text{m}
+
\mathrm{d}_\text{n} * \mathrm{d}_\text{m}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,228: Line 8,210:  
|- style="background:#f0f0ff"
 
|- style="background:#f0f0ff"
 
| width="25%" | <math>\text{Group Coset}\!</math>
 
| width="25%" | <math>\text{Group Coset}\!</math>
| width="25%" | <math>\text{Logical Coset}\!</math>
+
| width="25%" | <math>\text{Logical Coset}~\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Logical Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
| width="25%" | <math>\text{Group Element}\!</math>
 
|-
 
|-
 
| <math>G_\text{n}\!</math>
 
| <math>G_\text{n}\!</math>
| <math>(\operatorname{d}\text{n})\!</math>
+
| <math>({\mathrm{d}\text{n})}\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})(\operatorname{d}\text{n})
+
(\mathrm{d}\text{m})(\mathrm{d}\text{n})
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~(\operatorname{d}\text{n})
+
~\mathrm{d}\text{m}~(\mathrm{d}\text{n})
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
Line 8,244: Line 8,226:  
1
 
1
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{m}
+
\mathrm{d}_\text{m}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|-
 
|-
| <math>G_\text{n} * \operatorname{d}_\text{n}\!</math>
+
| <math>G_\text{n} * \mathrm{d}_\text{n}\!</math>
| <math>\operatorname{d}\text{n}\!</math>
+
| <math>\mathrm{d}\text{n}\!</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
(\operatorname{d}\text{m})~\operatorname{d}\text{n}~
+
(\mathrm{d}\text{m})~\mathrm{d}\text{n}~
 
\\[4pt]
 
\\[4pt]
~\operatorname{d}\text{m}~~\operatorname{d}\text{n}~
+
~\mathrm{d}\text{m}~~\mathrm{d}\text{n}~
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{d}_\text{n}
+
\mathrm{d}_\text{n}
 
\\[4pt]
 
\\[4pt]
\operatorname{d}_\text{m} * \operatorname{d}_\text{n}
+
\mathrm{d}_\text{m} * \mathrm{d}_\text{n}
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,287: Line 8,269:  
|}
 
|}
   −
In other words, <math>P\!\!\And\!\!Q</math> is the intersection of the ''inverse projections'' <math>P' = \operatorname{Pr}_{12}^{-1}(P)\!</math> and <math>Q' = \operatorname{Pr}_{23}^{-1}(Q),\!</math> which are defined as follows:
+
In other words, <math>P\!\!\And\!\!Q</math> is the intersection of the ''inverse projections'' <math>P' = \mathrm{Pr}_{12}^{-1}(P)\!</math> and <math>Q' = \mathrm{Pr}_{23}^{-1}(Q),\!</math> which are defined as follows:
    
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
 
|
 
|
 
<math>\begin{matrix}
 
<math>\begin{matrix}
\operatorname{Pr}_{12}^{-1}(P) & = & P \times Z & = & \{ (x, y, z) \in X \times Y \times Z : (x, y) \in P \}.
+
\mathrm{Pr}_{12}^{-1}(P) & = & P \times Z & = & \{ (x, y, z) \in X \times Y \times Z : (x, y) \in P \}.
 
\\[4pt]
 
\\[4pt]
\operatorname{Pr}_{23}^{-1}(Q) & = & X \times Q & = & \{ (x, y, z) \in X \times Y \times Z : (y, z) \in Q \}.
+
\mathrm{Pr}_{23}^{-1}(Q) & = & X \times Q & = & \{ (x, y, z) \in X \times Y \times Z : (y, z) \in Q \}.
 
\end{matrix}</math>
 
\end{matrix}</math>
 
|}
 
|}
Line 8,304: Line 8,286:  
Strictly speaking, the logical entity <math>p_S\!</math> is the intensional representation of the tribe, presiding at the highest level of abstraction, while <math>f_S\!</math> and <math>S\!</math> are its more concrete extensional representations, rendering its concept in functional and geometric materials, respectively.  Whenever it is possible to do so without confusion, I try to use identical or similar names for the corresponding objects and species of each type, and I generally ignore the distinctions that otherwise set them apart.  For instance, in moving toward computational settings, <math>f_S\!</math> makes the best computational proxy for <math>p_S,\!</math> so I commonly refer to the mapping <math>f_S : X \to \mathbb{B}\!</math> as a proposition on <math>X.\!</math>
 
Strictly speaking, the logical entity <math>p_S\!</math> is the intensional representation of the tribe, presiding at the highest level of abstraction, while <math>f_S\!</math> and <math>S\!</math> are its more concrete extensional representations, rendering its concept in functional and geometric materials, respectively.  Whenever it is possible to do so without confusion, I try to use identical or similar names for the corresponding objects and species of each type, and I generally ignore the distinctions that otherwise set them apart.  For instance, in moving toward computational settings, <math>f_S\!</math> makes the best computational proxy for <math>p_S,\!</math> so I commonly refer to the mapping <math>f_S : X \to \mathbb{B}\!</math> as a proposition on <math>X.\!</math>
   −
Regarded as logical models, the elements of the contension <math>P\!\!\And\!\!Q</math> satisfy the proposition referred to as the ''conjunction of extensions'' <math>P'\!</math> and <math>Q'.\!</math>
+
Regarded as logical models, the elements of the contension <math>P\!\!\And\!\!Q</math> satisfy the proposition referred to as the ''conjunction of extensions'' <math>P^\prime\!</math> and <math>Q^\prime.\!</math>
    
Next, the ''composition'' of <math>P\!</math> and <math>Q\!</math> is a dyadic relation <math>R' \subseteq X \times Z\!</math> that is notated as <math>R' = P \circ Q\!</math> and defined as follows.
 
Next, the ''composition'' of <math>P\!</math> and <math>Q\!</math> is a dyadic relation <math>R' \subseteq X \times Z\!</math> that is notated as <math>R' = P \circ Q\!</math> and defined as follows.
    
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
| <math>P \circ Q ~=~ \operatorname{Pr}_{13} (P\!\!\And\!\!Q) ~=~ \{ (x, z) \in X \times Z : (x, y, z) \in P\!\!\And\!\!Q \}.</math>
+
| <math>P \circ Q ~=~ \mathrm{Pr}_{13} (P\!\!\And\!\!Q) ~=~ \{ (x, z) \in X \times Z : (x, y, z) \in P\!\!\And\!\!Q ~\text{for some}~ y \in Y \}.</math>
 
|}
 
|}
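
To make the last two constructions concrete, here is a minimal sketch in Python.  The sets and the relations <math>P\!</math> and <math>Q\!</math> below are hypothetical examples of my own choosing, not taken from the text;  the code simply follows the formulas above, building the inverse projections, intersecting them to obtain the contension, and projecting onto the first and third places to obtain the composition.

<pre>
# Sketch only: finite relations represented as Python sets of tuples.
# The domains X, Y, Z and the relations P, Q are hypothetical examples.

X = {1, 2}
Y = {'a', 'b'}
Z = {10, 20}

P = {(1, 'a'), (2, 'b')}                 # P is a subset of X x Y
Q = {('a', 10), ('b', 10), ('b', 20)}    # Q is a subset of Y x Z

# Inverse projections:  Pr_12^(-1)(P) = P x Z  and  Pr_23^(-1)(Q) = X x Q.
P_prime = {(x, y, z) for (x, y) in P for z in Z}
Q_prime = {(x, y, z) for x in X for (y, z) in Q}

# The contension P & Q is the intersection of the inverse projections.
contension = P_prime & Q_prime

# The composition P o Q is the projection of the contension on places 1 and 3.
composition = {(x, z) for (x, y, z) in contension}

print(sorted(contension))    # [(1, 'a', 10), (2, 'b', 10), (2, 'b', 20)]
print(sorted(composition))   # [(1, 10), (2, 10), (2, 20)]
</pre>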
   Line 8,426: Line 8,408:  
In order to speak of generalized orders of relations I need to outline the dimensions of variation along which I intend the characters of already familiar orders of relations to be broadened.  Generally speaking, the taxonomic features of <math>n\!</math>-place relations that I wish to liberalize can be read off from their ''local incidence properties'' (LIPs).
 
In order to speak of generalized orders of relations I need to outline the dimensions of variation along which I intend the characters of already familiar orders of relations to be broadened.  Generally speaking, the taxonomic features of <math>n\!</math>-place relations that I wish to liberalize can be read off from their ''local incidence properties'' (LIPs).
   −
'''Definition.'''  A ''local incidence property'' of a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is one that is based on the following type of data.  Pick an element <math>x\!</math> in one of the domains <math>X_j\!</math> of <math>L.\!</math>  Let <math>L_{x \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>x\!</math> at <math>j\!</math>'', or the ''<math>x \,\text{at}\, j\!</math> flag of <math>L.\!</math>''  The ''local flag'' <math>L_{x \,\text{at}\, j} \subseteq L\!</math> is defined as follows:
+
'''Definition.'''  A ''local incidence property'' of a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is one that is based on the following type of data.  Pick an element <math>x\!</math> in one of the domains <math>{X_j}\!</math> of <math>L.\!</math>  Let <math>L_{x \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>x\!</math> at <math>{j},\!</math>'' or the ''<math>x \,\text{at}\, j\!</math> flag of <math>L.\!</math>''  The ''local flag'' <math>L_{x \,\text{at}\, j} \subseteq L\!</math> is defined as follows.
    
{| align="center" cellspacing="8" width="90%"
 
{| align="center" cellspacing="8" width="90%"
Line 8,434: Line 8,416:  
Any property <math>P\!</math> of <math>L_{x \,\text{at}\, j}\!</math> constitutes a ''local incidence property'' of <math>L\!</math> with reference to the locus <math>x \,\text{at}\, j.\!</math>
 
Any property <math>P\!</math> of <math>L_{x \,\text{at}\, j}\!</math> constitutes a ''local incidence property'' of <math>L\!</math> with reference to the locus <math>x \,\text{at}\, j.\!</math>
   −
'''Definition.'''  A <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is ''<math>P\!</math>-regular at <math>j\!</math>'' if and only if every flag of <math>L\!</math> with <math>x\!</math> at <math>j\!</math> is <math>P,\!</math> letting <math>x\!</math> range over the domain <math>X_j,\!</math> in symbols, if and only if <math>P(L_{x \,\text{at}\, j})\!</math> is true for all <math>x \in X_j.\!</math>
+
'''Definition.'''  A <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k\!</math> is ''<math>P\!</math>-regular at <math>j\!</math>'' if and only if every flag of <math>L\!</math> with <math>x\!</math> at <math>j\!</math> is <math>P,\!</math> letting <math>x\!</math> range over the domain <math>X_j,\!</math> in symbols, if and only if <math>P(L_{x \,\text{at}\, j})\!</math> is true for all <math>{x \in X_j}.\!</math>
   −
<pre>
+
Of particular interest are the local incidence properties of relations that can be calculated from the cardinalities of their local flags, and these are naturally called ''numerical incidence properties'' (NIPs).
Of particular interest are the local incidence properties of relations that can be calculated from the cardinalities of their local flags, and these are naturally called "numerical incidence properties" (NIPs).
     −
For example, R is said to be "k regular at i" or "k regular at Xi" if and only if the cardinality |R&x@i| = k for all x C Xi.  In a similar fashion, one can define the NIPs "<k regular at i", ">k regular at i", and so on. For ease of reference, I record a few of these definitions here:
+
For example, <math>L\!</math> is <math>c\text{-regular at}~ j\!</math> if and only if the cardinality of the local flag <math>L_{x \,\text{at}\, j}\!</math> is equal to <math>c\!</math> for all <math>x \in X_j,\!</math> coded in symbols, if and only if <math>|L_{x \,\text{at}\, j}| = c\!</math> for all <math>{x \in X_j}.\!</math>
   −
R is k regular at i iff |R&x@i| = k for all x C Xi.
+
In a similar fashion, it is possible to define the numerical incidence properties <math>(< c)\text{-regular at}~ j,\!</math> <math>(> c)\text{-regular at}~ j,\!</math> and so on.  For ease of reference, a few of these definitions are recorded below.
R is <k regular at i iff |R&x@i| < k for all x C Xi.
  −
R is >k regular at i iff |R&x@i| > k for all x C Xi.
     −
The definition of "local flags" can be broadened to give a definition of "regional flags".  Suppose R c X1x...xXn and choose a subset M c Xi.  Let "R&M@i" denote a subset of R called the "flag of R with M at i", or the "M@i flag of R", defined as:
+
{| align="center" cellspacing="8" width="90%"
 
+
|
R&M@i  =  {<x1, ... , xi, ... , xn> C R  :  xi C M}.
+
<math>\begin{array}{lll}
 
+
L ~\text{is}~ c\text{-regular at}~ j
Returning to dyadic relations, it is useful to describe some familiar classes of objects in terms of their local and numerical incidence properties. Let R c SxT be an arbitrary dyadic relation.  The following properties of R can then be defined:
+
& \iff &
 
+
|L_{x \,\text{at}\, j}| = c ~\text{for all}~ x \in X_j.
R is total at S iff R is ≥1 regular at S.
+
\\[6pt]
R is total at T iff R is ≥1 regular at T.
+
L ~\text{is}~ (< c)\text{-regular at}~ j
R is tubular at S iff R is ≤1 regular at S.
+
& \iff &
R is tubular at T iff R is ≤1 regular at T.
+
|L_{x \,\text{at}\, j}| < c ~\text{for all}~ x \in X_j.
 +
\\[6pt]
 +
L ~\text{is}~ (> c)\text{-regular at}~ j
 +
& \iff &
 +
|L_{x \,\text{at}\, j}| > c ~\text{for all}~ x \in X_j.
 +
\\[6pt]
 +
L ~\text{is}~ (\le c)\text{-regular at}~ j
 +
& \iff &
 +
|L_{x \,\text{at}\, j}| \le c ~\text{for all}~ x \in X_j.
 +
\\[6pt]
 +
L ~\text{is}~ (\ge c)\text{-regular at}~ j
 +
& \iff &
 +
|L_{x \,\text{at}\, j}| \ge c ~\text{for all}~ x \in X_j.
 +
\end{array}\!</math>
 +
|}
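
A small sketch in Python may help fix these definitions.  The relation and domains below are hypothetical examples, not drawn from the text;  the helper local_flag implements the flag of <math>L\!</math> with <math>x\!</math> at <math>j\!</math> directly from its definition, and is_c_regular_at checks the stated cardinality condition over an entire domain.

<pre>
# Sketch only: local flags and numerical incidence properties.
# L is a k-place relation given as a set of k-tuples; domain_j plays the
# role of X_j.  The example relation below is hypothetical.

def local_flag(L, x, j):
    """Return the flag of L with x at j: all tuples of L whose j-th place is x."""
    return {t for t in L if t[j] == x}

def is_c_regular_at(L, domain_j, j, c, compare=lambda m, n: m == n):
    """Test whether |L_{x at j}| stands in the given comparison to c for every x in the domain."""
    return all(compare(len(local_flag(L, x, j)), c) for x in domain_j)

X0 = {1, 2, 3}
X1 = {'a', 'b'}
L  = {(1, 'a'), (2, 'a'), (3, 'b')}

print(is_c_regular_at(L, X0, 0, 1))                                # True: 1-regular at place 0
print(is_c_regular_at(L, X1, 1, 1, compare=lambda m, n: m >= n))   # True: (>= 1)-regular at place 1
</pre>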
   −
If R is tubular at S, then R is called a "partial function" or "prefunction" from S to T, often indicated by writing R = p : S ~> T.
+
The definition of local flags can be broadened to give a definition of ''regional flags''.  Suppose <math>L \subseteq X_1 \times \ldots \times X_k\!</math> and choose a subset <math>M \subseteq X_j.\!</math>  Let <math>L_{M \,\text{at}\, j}\!</math> be a subset of <math>L\!</math> called the ''flag of <math>L\!</math> with <math>M\!</math> at <math>{j},\!</math>'' or the ''<math>M \,\text{at}\, j\!</math> flag of <math>L,\!</math>'' defined as follows.
   −
R = p : S ~> T iff R is tubular at S.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>L_{M \,\text{at}\, j} = \{ (x_1, \ldots, x_j, \ldots, x_k) \in L : x_j \in M \}.\!</math>
 +
|}
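
The regional flag is just as direct to compute.  The following one-function sketch in Python (with hypothetical example data) simply restates the definition.

<pre>
# Sketch only: the regional flag of L with M at j.  Example data are hypothetical.

def regional_flag(L, M, j):
    """Return all tuples of L whose j-th coordinate lies in the subset M."""
    return {t for t in L if t[j] in M}

L = {(1, 'a'), (2, 'a'), (3, 'b')}
print(regional_flag(L, {1, 3}, 0))   # {(1, 'a'), (3, 'b')}
</pre>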
   −
If R is a prefunction p : S ~> T that happens to be total at S, then R is called a "function" from S to T, indicated by writing R = f : S > T.
+
Returning to dyadic relations, it is useful to describe some familiar classes of objects in terms of their local and numerical incidence properties.  Let <math>L \subseteq X \times Y\!</math> be an arbitrary dyadic relation. The following properties of <math>L\!</math> can then be defined.
   −
R = f : S > T iff R is 1 regular at S.
+
{| align="center" cellspacing="8" width="90%"
f is surjective iff f is total at T.
+
|
f is injective iff f is tubular at T.
+
<math>\begin{array}{lll}
f is bijective iff f is 1 regular at T.
+
L ~\text{is total at}~ X
 +
& \iff &
 +
L ~\text{is}~ (\ge 1)\text{-regular}~ \text{at}~ X.
 +
\\[6pt]
 +
L ~\text{is total at}~ Y
 +
& \iff &
 +
L ~\text{is}~ (\ge 1)\text{-regular}~ \text{at}~ Y.
 +
\\[6pt]
 +
L ~\text{is tubular at}~ X
 +
& \iff &
 +
L ~\text{is}~ (\le 1)\text{-regular}~ \text{at}~ X.
 +
\\[6pt]
 +
L ~\text{is tubular at}~ Y
 +
& \iff &
 +
L ~\text{is}~ (\le 1)\text{-regular}~ \text{at}~ Y.
 +
\end{array}</math>
 +
|}
 +
 
 +
If <math>L\!</math> is tubular at <math>X,\!</math> then <math>L\!</math> is known as a ''partial function'' or a ''prefunction'' from <math>X\!</math> to <math>Y,\!</math> indicated by writing <math>L : X \rightharpoonup Y.\!</math> We have the following definitions and notations.
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
L ~\text{is a prefunction}~ L : X \rightharpoonup Y
 +
& \iff &
 +
L ~\text{is tubular at}~ X.
 +
\\[6pt]
 +
L ~\text{is a prefunction}~ L : X \leftharpoonup Y
 +
& \iff &
 +
L ~\text{is tubular at}~ Y.
 +
\end{array}</math>
 +
|}
 +
 
 +
If <math>L\!</math> is a prefunction <math>L : X \rightharpoonup Y\!</math> that happens to be total at <math>X,\!</math> then <math>L\!</math> is known as a ''function'' from <math>X\!</math> to <math>Y,\!</math> indicated by writing <math>L : X \to Y.\!</math>  To say that a relation <math>L \subseteq X \times Y\!</math> is ''totally tubular'' at <math>X\!</math> is to say that <math>L\!</math> is 1-regular at <math>X.\!</math>  Thus, we may formalize the following definitions.
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
L ~\text{is a function}~ L : X \to Y
 +
& \iff &
 +
L ~\text{is}~ 1\text{-regular at}~ X.
 +
\\[6pt]
 +
L ~\text{is a function}~ L : X \leftarrow Y
 +
& \iff &
 +
L ~\text{is}~ 1\text{-regular at}~ Y.
 +
\end{array}\!</math>
 +
|}
   −
A few more comments on terminology are needed in further preparation.  One of the constant practical demands encountered in this project is to have available a language and a calculus for relations that can permit discussion and calculation to range over functions, dyadic relations, and n place relations with a minimum amount of trouble in making transitions from subject to subject and in drawing the appropriate generalizations.
+
In the case of a 2-adic relation <math>L \subseteq X \times Y\!</math> that has the qualifications of a function <math>f : X \to Y,\!</math> there are a number of further differentia that arise.
   −
Up to this point in the discussion, the analysis of the A and B dialogue has concerned itself almost exclusively with the relationship of triadic sign relations to the dyadic relations obtained from them by taking their projections onto various relational planes. In particular, a major focus of interest was the extent to which salient properties of sign relations can be gleaned from a study of their dyadic projections.
+
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
f ~\text{is surjective}
 +
& \iff &
 +
f ~\text{is total at}~ Y.
 +
\\[6pt]
 +
f ~\text{is injective}
 +
& \iff &
 +
f ~\text{is tubular at}~ Y.
 +
\\[6pt]
 +
f ~\text{is bijective}
 +
& \iff &
 +
f ~\text{is}~ 1\text{-regular at}~ Y.
 +
\end{array}</math>
 +
|}
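
Putting the last few definitions together, the following Python sketch classifies a dyadic relation by testing whether it is total or tubular at <math>X\!</math> and at <math>Y\!</math> and reading off the prefunction, function, surjection, injection, and bijection properties.  The relation and domains are hypothetical examples of my own, not taken from the text.

<pre>
# Sketch only: classifying a dyadic relation L, a subset of X x Y, by its
# numerical incidence properties.  Example data are hypothetical.

def flag_sizes(L, domain, j):
    """Map each element x of the domain to |L_{x at j}|."""
    return {x: sum(1 for t in L if t[j] == x) for x in domain}

def classify(L, X, Y):
    at_X, at_Y = flag_sizes(L, X, 0), flag_sizes(L, Y, 1)
    total_X   = all(n >= 1 for n in at_X.values())
    tubular_X = all(n <= 1 for n in at_X.values())
    total_Y   = all(n >= 1 for n in at_Y.values())
    tubular_Y = all(n <= 1 for n in at_Y.values())
    function  = total_X and tubular_X                # 1-regular at X
    return {
        'prefunction X ~> Y': tubular_X,
        'function X -> Y'   : function,
        'surjective'        : function and total_Y,
        'injective'         : function and tubular_Y,
        'bijective'         : function and total_Y and tubular_Y,
    }

X, Y = {1, 2, 3}, {'a', 'b', 'c'}
f = {(1, 'a'), (2, 'b'), (3, 'b')}
print(classify(f, X, Y))
# {'prefunction X ~> Y': True, 'function X -> Y': True,
#  'surjective': False, 'injective': False, 'bijective': False}
</pre>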
   −
Two important topics for later discussion will be concerned with: (1) the sense in which every n place relation can be decomposed in terms of triadic relations, and (2) the fact that not every triadic relation can be further reduced to conjunctions of dyadic relations.
+
A few more comments on terminology are needed in further preparation. One of the constant practical demands encountered in this project is to have available a language and a calculus for relations that can permit discussion and calculation to range over functions, dyadic relations, and <math>n\!</math>-place relations with a minimum amount of trouble in making transitions from subject to subject and in drawing the appropriate generalizations.
   −
It is one of the constant technical needs of this project to maintain a flexible language for talking about relations, one that permits discussion to shift from functional to relational emphases and from dyadic relations to n place relations with a maximum of easeIt is not possible to do this without violating the favored conventions of one technical linguistic community or another.  I have chosen a strategy of use that respects as many different usages as possible, but in the end it cannot help but to reflect a few personal choices.  To some extent my choices are guided by an interest in developing the information, computation, and decision theoretic aspects of the mathematical language used.  Eventually, this requires one to render every distinction, even that of appearing or not in a particular category, as being relative to an interpretive framework.
+
Up to this point in the discussion, the analysis of the <math>\text{A}\!</math> and <math>\text{B}\!</math> dialogue has concerned itself almost exclusively with the relationship of triadic sign relations to the dyadic relations obtained from them by taking their projections onto various relational planesIn particular, a major focus of interest was the extent to which salient properties of sign relations can be gleaned from a study of their dyadic projections.
   −
While operating in this context, it is necessary to distinguish "domains" in the broad sense from "domains of definition" in the narrow sense.  For n place relations it is convenient to use the terms "domain" and "quorum" as references to the wider and narrower sets, respectively.
+
Two important topics for later discussion will be concerned with:  (1) the sense in which every <math>n\!</math>-place relation can be decomposed in terms of triadic relations, and (2) the fact that not every triadic relation can be further reduced to conjunctions of dyadic relations.
   −
For an n place relation R c X1x...xXn, I maintain the following usages:
+
'''Variant.'''  It is one of the constant technical needs of this project to maintain a flexible language for talking about relations, one that permits discussion to shift from functional to relational emphases and from dyadic relations to <math>n\!</math>-place relations with a maximum of ease. It is not possible to do this without violating the favored conventions of one technical linguistic community or another. I have chosen a strategy of use that respects as many different usages as possible, but in the end it cannot help but to reflect a few personal choices. To some extent my choices are guided by an interest in developing the information, computation, and decision-theoretic aspects of the mathematical language used.  Eventually, this requires one to render every distinction, even that of appearing or not in a particular category, as being relative to an interpretive framework.
   −
1. The notation "Domi (R)" denotes the set Xi, called the "domain of R at i" or the "ith domain of R".
+
While operating in this context, it is necessary to distinguish ''domains'' in the broad sense from ''domains of definition'' in the narrow sense.  For <math>k\!</math>-place relations it is convenient to use the terms ''domain'' and ''quorum'' as references to the wider and narrower sets, respectively.
   −
2. The notation "Quoi (R)" denotes a subset of Xi called the "quorum of R at i" or the "ith quorum of R", defined as follows:
+
For a <math>k\!</math>-place relation <math>L \subseteq X_1 \times \ldots \times X_k,\!</math> we have the following usages.
   −
Quoi (R) = the largest Q c Xi such that R&Q@i is >1-regular at i,
+
# The notation <math>{}^{\backprime\backprime} \mathrm{Dom}_j (L) {}^{\prime\prime}\!</math> denotes the set <math>X_j,\!</math> called the ''domain of <math>L\!</math> at <math>j\!</math>'' or the ''<math>j^\text{th}\!</math> domain of <math>L.\!</math>''
= the largest Q c Xi such that |R&x@i| > 1 for all x C Q c Xi.
+
# The notation <math>{}^{\backprime\backprime} \mathrm{Quo}_j (L) {}^{\prime\prime}\!</math> denotes a subset of <math>{X_j}\!</math> called the ''quorum of <math>L\!</math> at <math>j\!</math>'' or the ''<math>j^\text{th}\!</math> quorum of <math>L,\!</math>'' defined as follows.
   −
In the special case of a dyadic relation R c X1xX2 = SxT, including the case of a partial function p : S ~> T or a total function f : S  > T, I will stick to the following conventions:
+
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
\mathrm{Quo}_j (L)
 +
& = &
 +
\text{the largest}~ Q \subseteq X_j ~\text{such that}~ ~L_{Q \,\text{at}\, j}~ ~\text{is}~ (> 1)\text{-regular at}~ j,
 +
\\[6pt]
 +
& = &
 +
\text{the largest}~ Q \subseteq X_j ~\text{such that}~ |L_{x \,\text{at}\, j}| > 1 ~\text{for all}~ x \in Q \subseteq X_j.
 +
\end{array}</math>
 +
|}
   −
1. The arbitrarily designated domains X1 = S and X2 = T that form the widest sets admitted to the dyadic relation are referred to as the "domain" or "source" and the "codomain" or "target", respectively, of the relation in question.
+
In the special case of a dyadic relation <math>L \subseteq X_1 \times X_2 = X \times Y,\!</math> including the case of a partial function <math>p : X \rightharpoonup Y\!</math> or a total function <math>f : X \to Y,\!</math> we have the following conventions.
   −
2. The terms "quota" and "range" are reserved for those uniquely defined sets whose elements actually appear as the 1st and 2nd members, respectively, of the ordered pairs in that relation.  Thus, for a dyadic relation R c SxT, I let Quo (R) = Quo1 (R) c S be identified with what is usually called the "domain of definition" of R, and I let Ran (R) = Quo2 (R) c T be identified with the usual range of R.
+
# The arbitrarily designated domains <math>X_1 = X\!</math> and <math>X_2 = Y\!</math> that form the widest sets admitted to the dyadic relation are referred to as the ''domain'' or ''source'' and the ''codomain'' or ''target'', respectively, of the relation in question.
 +
# The terms ''quota'' and ''range'' are reserved for those uniquely defined sets whose elements actually appear as the first and second members, respectively, of the ordered pairs in that relation.  Thus, for a dyadic relation <math>L \subseteq X \times Y,\!</math> we identify <math>\mathrm{Quo} (L) = \mathrm{Quo}_1 (L) \subseteq X\!</math> with what is usually called the ''domain of definition'' of <math>L\!</math> and we identify <math>\mathrm{Ran} (L) = \mathrm{Quo}_2 (L) \subseteq Y\!</math> with the usual ''range'' of <math>L.\!</math>
   −
A "partial equivalence relation" (PER) on a set X is a relation R c XxX that is an equivalence relation on its domain of definition Quo (R) c X.  In this situation, [x]R is empty for each x in X that is not in Quo (R).  Another way of reaching the same concept is to call a PER a dyadic relation that is symmetric and transitive, but not necessarily reflexive.  Like the "self identical elements" of old that epitomized the very definition of self consistent existence in classical logic, the property of being a self related or self equivalent element in the purview of a PER on X singles out the members of Quo (R) as those for which a properly meaningful existence can be contemplated.
+
A ''partial equivalence relation'' (PER) on a set <math>X\!</math> is a relation <math>L \subseteq X \times X\!</math> that is an equivalence relation on its domain of definition <math>\mathrm{Quo} (L) \subseteq X.\!</math> In this situation, <math>[x]_L\!</math> is empty for each <math>x\!</math> in <math>X\!</math> that is not in <math>\mathrm{Quo} (L).\!</math> Another way of reaching the same concept is to call a PER a dyadic relation that is symmetric and transitive, but not necessarily reflexive.  Like the &ldquo;self-identical elements&rdquo; of old that epitomized the very definition of self-consistent existence in classical logic, the property of being a self-related or self-equivalent element in the purview of a PER on <math>X\!</math> singles out the members of <math>\mathrm{Quo} (L)\!</math> as those for which a properly meaningful existence can be contemplated.
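
As a quick check on the PER concept, the following Python sketch (with a hypothetical example relation) tests symmetry and transitivity directly.  Reflexivity then holds automatically on the domain of definition, since any <math>x\!</math> appearing in some pair <math>(x, y)\!</math> also yields <math>(x, x)\!</math> by symmetry followed by transitivity.

<pre>
# Sketch only: testing whether a dyadic relation L on X is a partial
# equivalence relation, i.e. symmetric and transitive.  Example data are hypothetical.

def is_per(L):
    symmetric  = all((y, x) in L for (x, y) in L)
    transitive = all((x, w) in L
                     for (x, y) in L
                     for (z, w) in L if y == z)
    return symmetric and transitive

L = {('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')}   # a relation on X = {'a', 'b', 'c'}
print(is_per(L))   # True: an equivalence relation on its domain of definition {'a', 'b'},
                   # while 'c' has an empty equivalence class under L
</pre>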
   −
A "moderate equivalence relation" (MER) on the "modus" M c X is a relation on X whose restriction to M is an equivalence relation on M.  In symbols, R c XxX such that R|M c MxM is an equivalence relation.  Notice that the subset of restriction, or modus M, is a part of the definition, so the same relation R on X could be a MER or not depending on the choice of M.  In spite of how it sounds, a moderate equivalence relation can have more ordered pairs in it than the ordinary sort of equivalence relation on the same set.
+
A ''moderate equivalence relation'' (MER) on the ''modus'' <math>M \subseteq X\!</math> is a relation on <math>X\!</math> whose restriction to <math>M\!</math> is an equivalence relation on <math>M.\!</math> In symbols, <math>L \subseteq X \times X\!</math> such that <math>L|M \subseteq M \times M\!</math> is an equivalence relation.  Notice that the subset of restriction, or modus <math>M,\!</math> is a part of the definition, so the same relation <math>L\!</math> on <math>X\!</math> could be a MER or not depending on the choice of <math>M.\!</math> In spite of how it sounds, a moderate equivalence relation can have more ordered pairs in it than the ordinary sort of equivalence relation on the same set.
   −
In applying the equivalence class notation to a sign relation R, the definitions and examples considered so far only cover the case where the connotative component RSI is a total equivalence relation on the whole syntactic domain S.  The next job is to adapt this usage to PERs.
+
In applying the equivalence class notation to a sign relation <math>L,\!</math> the definitions and examples considered so far cover only the case where the connotative component <math>L_{SI}\!</math> is a total equivalence relation on the whole syntactic domain <math>S.\!</math> The next job is to adapt this usage to PERs.
   −
If R is a sign relation whose syntactic projection RSI is a PER on S, then I still write "[s]R" for the "equivalence class of s under RSI".  But now, [s]R can be empty if s has no interpretant, that is, if s lies outside the "adequately meaningful" subset of the syntactic domain, where synonymy and equivalence of meaning are defined.  Otherwise, if s has an i then it also has an o, by the definition of RSI.  In this case, there is a triple <o, s, i> C R, and it is permissible to let [o]R = [s]R.
+
If <math>L\!</math> is a sign relation whose syntactic projection <math>L_{SI}\!</math> is a PER on <math>S,\!</math> then we may still write <math>{}^{\backprime\backprime} [s]_L {}^{\prime\prime}\!</math> for the &ldquo;equivalence class of <math>s\!</math> under <math>L_{SI}\!</math>&rdquo;.  But now, <math>[s]_L\!</math> can be empty if <math>s\!</math> has no interpretant, that is, if <math>s\!</math> lies outside the &ldquo;adequately meaningful&rdquo; subset of the syntactic domain, where synonymy and equivalence of meaning are defined.  Otherwise, if <math>s\!</math> has an <math>i\!</math> then it also has an <math>o,\!</math> by the definition of <math>L_{SI}.\!</math>  In this case, there is a triple <math>{(o, s, i) \in L},\!</math> and it is permissible to let <math>[o]_L = [s]_L.\!</math>
</pre>
      
===6.32. Partiality : Selective Operations===
 
===6.32. Partiality : Selective Operations===
   −
<pre>
+
One of the main subtasks of this project is to develop a computational framework for carrying out set-theoretic operations on abstractly represented classes and for reasoning about their indicated results.  This effort has the general aim of enabling one to articulate the structures of <math>n\!</math>-place relations and the special aim of allowing one to reflect theoretically on the properties and projections of sign relations.  A prototype system that makes a beginning in this direction has already been implemented, to which the current work contributes a major part of the design philosophy and technical documentation.  This section presents the rudiments of set-theoretic notation in a way that conforms to these goals, taking the development only so far as needed for immediate application to sign relations like <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>
One of the main subtasks of this project is to develop a computational framework for carrying out set theoretic operations on abstractly represented classes and for reasoning about their indicated results.  This effort has the general aim of enabling one to articulate the structures of n place relations and the special aim of allowing one to reflect theoretically on the properties and projections of sign relations.  A prototype system that makes a beginning in this direction has already been implemented, to which the current work contributes a major part of the design philosophy and technical documentation.  This section presents the rudiments of set theoretic notation in a way that conforms to these goals, taking the development only so far as needed for immediate application to sign relations like A and B.
     −
One of the most important design considerations that goes into building the requisite software system is how well it furthers certain lines of abstraction and generalization.  One of these dimensions of abstraction or directions of generalization is discussed in this section, where I attempt to unify its many appearances under the theme of "partiality".  This name is chosen to suggest the desired sense of abstract intention since the extensions of concepts that it favors and for which it leaves room are outgrowths of the limitation that finite signs and expressions can never provide more than partial information about the richness of individual detail that is always involved in any real object.  All in all, this modicum of tolerance for uncertainty is the very play in the wheels of determinism that provides a significant chance for luck to play a part in the finer steps toward finishing every real objective.
+
One of the most important design considerations that goes into building the requisite software system is how well it furthers certain lines of abstraction and generalization.  One of these dimensions of abstraction or directions of generalization is discussed in this section, where I attempt to unify its many appearances under the theme of ''partiality''.  This name is chosen to suggest the desired sense of abstract intention since the extensions of concepts that it favors and for which it leaves room are outgrowths of the limitation that finite signs and expressions can never provide more than partial information about the richness of individual detail that is always involved in any real object.  All in all, this modicum of tolerance for uncertainty is the very play in the wheels of determinism that provides a significant chance for luck to play a part in the finer steps toward finishing every real objective.
   −
If one needs a slogan to entitle this form of propagation, it is only that "Necessity is the mother of invention".  In other words, it is precisely this lack of perfect information that yields the opportunity for novel forms of speciation to develop among finitely informed creatures (FICs), and just this need of perfect information that drives the evolving forms of independent determination and spontaneous creation in any area, no matter how well the arena is circumscribed by the restrictions of signs.
+
If a slogan is needed to charge this form of propagation, it is only that &ldquo;Necessity is the mother of invention.&rdquo; In other words, it is precisely this lack of perfect information that yields the opportunity for novel forms of speciation to develop among finitely informed creatures (FICs), and just this need of perfect information that drives the evolving forms of independent determination and spontaneous creation in any area, no matter how well the arena is circumscribed by the restrictions of signs.
    
In tracing the echoes of this theme, it is necessary to reflect on the circumstance that degenerate sign relations happen to be perfectly possible in practice, and it is desirable to provide a critical method that can address the facts of their flaws in theoretically insightful terms.  Relative to particular environments of interpretation, nothing proscribes the occurrence of sign relations that are defective in any of their various facets, namely:  (1) with signs that fail to denote or connote, (2) with interpretants that lack of being faithfully represented or reliably objectified, and (3) with objects that make no impression or remain ineffable in the preferred medium.
 
In tracing the echoes of this theme, it is necessary to reflect on the circumstance that degenerate sign relations happen to be perfectly possible in practice, and it is desirable to provide a critical method that can address the facts of their flaws in theoretically insightful terms.  Relative to particular environments of interpretation, nothing proscribes the occurrence of sign relations that are defective in any of their various facets, namely:  (1) with signs that fail to denote or connote, (2) with interpretants that lack being faithfully represented or reliably objectified, and (3) with objects that make no impression or remain ineffable in the preferred medium.
   −
A cursory examination of the topic of "partiality", as just surveyed, reveals two strains fixing how this "quality of murky" in general reigns.  This division depends on the disposition of n tuples as the individual elements that inhabit an n place relation.
+
A cursory examination of the topic of ''partiality'', as just surveyed, reveals two strains fixing how this &ldquo;quality of murky&rdquo; in general reigns.  This division depends on the disposition of <math>n\!</math>-tuples as the individual elements that inhabit an <math>n\!</math>-place relation.
   −
1. If the integrity of elementary relations as n tuples is maintained, then the predicate of "partiality" characterizes only the state of information that one has, either about elementary relations or about entire relations, or both.  Thus, this strain of partiality affects the determination of relations at two distinct levels of their formation:
+
<ol style="list-style-type:decimal">
   −
a. At the level of elementary relations, it frees up the point to which n tuples are pinned down by signs or expressions of relations by modifying the name that indicates or the formula that specifies a relation.
+
<li>If the integrity of elementary relations as n-tuples is maintained, then the predicate of ''partiality'' characterizes only the state of information that one has, either about elementary relations or about entire relations, or both.  Thus, this strain of partiality affects the determination of relations at two distinct levels of their formation:</li>
   −
b. At the level of entire relations, it relaxes the grip that axioms and constraints have on the character of a relation by modifying the strictness or generalizing the form of their application.
+
<ol style="list-style-type:lower-alpha">
   −
2. If "partial n tuples" are admitted, and not permitted to be confused with "<n tuples", then one arrives at the concept of an "n place relational complex".
+
<li>At the level of elementary relations, it frees up the point to which <math>n\!</math>-tuples are pinned down by signs or expressions of relations by modifying the name that indicates or the formula that specifies a relation.</li>
   −
Relational complex?
+
<li>At the level of entire relations, it relaxes the grip that axioms and constraints have on the character of a relation by modifying the strictness or generalizing the form of their application.</li></ol>
   −
R = R(1) U ... U R(n)
+
<li>If ''partial <math>n\!</math>-tuples'' are admitted, and not permitted to be confused with ''<math>(< n)\!</math>-tuples'', then one arrives at the concept of an ''<math>n\!</math>-place relational complex''.</li></ol>
   −
Sign relational complex?
+
'''Relational Complex?'''
   −
R = R(1) U R(2) U R(3)
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>L ~=~ L^{(1)} \cup \ldots \cup L^{(k)}\!</math>
 +
|}
 +
 
 +
'''Sign Relational Complex?'''
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
| <math>L ~=~ L^{(1)} \cup L^{(2)} \cup L^{(3)}\!</math>
 +
|}
    
It is possible to see two directions of remove that signs and concepts can take in departing from complete specifications of individual objects, and thus to see two dimensions of variation in the requisite varieties of partiality, each of which leads off into its own distinctive realm of abstraction.
 
It is possible to see two directions of remove that signs and concepts can take in departing from complete specifications of individual objects, and thus to see two dimensions of variation in the requisite varieties of partiality, each of which leads off into its own distinctive realm of abstraction.
   −
1. In a direction of "generality", with "general" signs and concepts, one loses an amount of certainty as to exactly what object the sign or concept applies at any given moment, and thus this can be recognized as an extensional type of abstraction.
+
# In a direction of ''generality'', with ''general'' signs and concepts, one loses an amount of certainty as to exactly what object the sign or concept applies at any given moment, and thus this can be recognized as an extensional type of abstraction.
 
+
# In a direction of ''vagueness'', with ''vague'' signs and concepts, one loses a degree of security as to exactly what property the sign or concept implies in the current context, and thus this can be classified as an intensional mode of abstraction.
2. In a direction of "vagueness", with "vague" signs and concepts, one loses a degree of security as to exactly what property the sign or concept implies in the current context, and thus this can be classified as an intensional mode of abstraction.
      
The first order of business is to draw some distinctions, and at the same time to note some continuities, between the varieties of partiality that remain to be sufficiently clarified and the more mundane brands of partiality that are already familiar enough for present purposes, but lack perhaps only the formality of being recognized under that heading.
 
The first order of business is to draw some distinctions, and at the same time to note some continuities, between the varieties of partiality that remain to be sufficiently clarified and the more mundane brands of partiality that are already familiar enough for present purposes, but lack perhaps only the formality of being recognized under that heading.
   −
The most familiar illustrations of information theoretic "partiality", "partial indication", or "signs bearing partial information about objects" occur every time one uses a general name, for example, the name of a genus, class, or set.  Almost as commonly, the formula that expresses a logical proposition can be regarded as a partial specification of its logical models or satisfying interpretations.  Just as the name of a genus or class can be taken as a "partially informed reference" or a "plural indefinite reference" (PIR) to one of its species or elements, so the name of an n place relation can be viewed as a PIR to one of its elementary relations or n tuples, and the formula or expression of a proposition can be understood as a PIR to one its models or satisfying interpretations.  For brevity, this variety of referential indetermination can be called the "generic partiality" of signs as information bearers.
+
The most familiar illustrations of information-theoretic partiality, partial indication, or &ldquo;signs bearing partial information about objects&rdquo; occur every time one uses a general name, for example, the name of a class, genus, or set.  Almost as commonly, the formula that expresses a logical proposition can be regarded as a partial specification of its logical models or satisfying interpretations.  Just as the name of a class or genus can be taken as a ''partially informed reference'' or a ''plural indefinite reference'' (PIR) to one of its elements or species, so the name of an <math>n\!</math>-place relation can be viewed as a PIR to one of its elementary relations or <math>n\!</math>-tuples, and the formula or expression of a proposition can be understood as a PIR to one of its models or satisfying interpretations.  For brevity, this variety of referential indetermination can be called the ''generic partiality'' of signs as information bearers.
   −
Note.  In this discussion I will not systematically distinguish between the logical entity typically called a "proposition" or "statement" and the syntactic entity usually called an "expression", "formula", or "sentence".  Instead, I work on the assumption that both types of entity are always involved in everything one proposes and also on the hope that context will determine which aspect of proposing is most apt.  For precision, the abstract category of propositions proper will have to be reconstituted as logical equivalence classes of syntactically diverse expressions.  For the present, I will use the phrase "propositional expression" whenever it is necessary to call particular attention to the syntactic entity.  Likewise, I will not always separate "higher order propositions" (HOPs), that is, propositions about propositions, from their corresponding formulations in the guise of "higher order propositional expressions" (HOPEs).
+
'''Note.''' In this discussion I will not systematically distinguish between the logical entity typically called a ''proposition'' or a ''statement'' and the syntactic entity usually called an ''expression'', ''formula'', or ''sentence''.  Instead, I work on the assumption that both types of entity are always involved in everything one proposes and also on the hope that context will determine which aspect of proposing is most apt.  For precision, the abstract category of propositions proper will have to be reconstituted as logical equivalence classes of syntactically diverse expressions.  For the present, I will use the phrase ''propositional expression'' whenever it is necessary to call particular attention to the syntactic entity.  Likewise, I will not always separate ''higher order propositions'', that is, propositions about propositions, from their corresponding formulations in the guise of ''higher order propositional expressions''.
   −
Even though "partial information" is the usual case of information (as rendered by signs about objects) I will continue to use this phrase, for all its informative redundancy, to emphasize the issues of partial definition, specification, and determination that arise under the pervasive theme of "partiality".
+
Even though partial information is the usual case of information (as rendered by signs about objects) I will continue to use this phrase, for all its informative redundancy, to emphasize the issues of partial definition, determination, and specification that arise under the pervasive theme of partiality.
   −
In talking about properties and classes of relations, one would like to allude to "all relations" as the implicit domain of discussion, setting each particular topic against this optimally generous and neutral background.  But even before discussion is restricted to a computational framework the notion of "all" (of almost anything) proves to be problematic in its very conception, not always amenable to assuming a consistent concept.  So the connotation of "all relations" — really just a passing phrase that pops up in casual and careless discussions must be relegated to the status of an informal concept, one that takes on definite meaning only when related to a context of constructive examples and formal models.
+
In speaking of properties and classes of relations, one would like to allude to ''all relations'' as the implicit domain of discussion, setting each particular topic against this optimally generous and neutral background.  But even before discussion is restricted to a computational framework the notion of ''all'' (of almost anything) proves to be problematic in its very conception, not always amenable to assuming a consistent concept.  So the connotation of ''all relations'' &mdash; really just a passing phrase that pops up in casual and careless discussions &mdash; must be relegated to the status of an informal concept, one that takes on definite meaning only when related to a context of constructive examples and formal models.
   −
Thus, in talking "sensibly" about properties and classes of relations, one is always invoking, explicitly or implicitly, a preconceived domain of discussion or an established universe of discourse X, and in relation to this X, one is always talking, expressly or otherwise, about a selected subset S c X that exhibits the property in question or a binary valued selector function f : X > B that picks out the class in question.
+
Thus, in talking sensibly about properties and classes of relations, one is always invoking, explicitly or implicitly, a preconceived domain of discussion or an established universe of discourse <math>X,\!</math> and in relation to this <math>X\!</math> one is always talking, expressly or otherwise, about a selected subset <math>A \subset X\!</math> that exhibits the property in question and a binary-valued selector function <math>f_A : X \to \mathbb{B}\!</math> that picks out the class in question.
   −
When the subject matter of discussion is bounded by a universal set X, out of which all objects referred to must come, then every PIR to an object can be identified with the name or formula (sign or expression) of a subset S c X, or with that of its selector function S# : X > B.  Conceptually, one imagines generating all the objects in X and then selecting out the ones that satisfy some test for membership in S.
+
When the subject matter of discussion is bounded by a universal set <math>X,\!</math> out of which all objects referred to must come, then every PIR to an object can be identified with the name or formula (sign or expression) of a subset <math>A \subseteq X\!</math> or else with that of its selector function <math>f_A : X \to \mathbb{B}.\!</math> Conceptually, one imagines generating all the objects in <math>X\!</math> and then selecting the ones that satisfy a definitive test for membership in <math>A.\!</math>
    
In a realistic computational framework, however, when the domain of interest is given generatively in a genuine sense of the word, that is, defined solely in terms of the primitive elements and operations that are needed to generate it, and when the resource limitations in actual effect make it impractical to enumerate all the possibilities in advance of selecting the adumbrated subset, then the implementation of PIRs becomes a genuine computational problem.
 
In a realistic computational framework, however, when the domain of interest is given generatively in a genuine sense of the word, that is, defined solely in terms of the primitive elements and operations that are needed to generate it, and when the resource limitations in actual effect make it impractical to enumerate all the possibilities in advance of selecting the adumbrated subset, then the implementation of PIRs becomes a genuine computational problem.
   −
Considered in its application to n place relations, the generic brand of partial specification constitutes a rather limited type of partiality, in that every element conceived as falling under the specified relation, no matter how indistinctly indicated, is still envisioned to maintain its full arity and to remain every bit a complete, though unknown, n tuple.  Still, there is a simple way to extend the concept of generic partiality in a significant fashion, achieving a form of PIRs to relations by making use of "higher order propositions" (HOPs).
+
Considered in its application to <math>n\!</math>-place relations, the generic brand of partial specification constitutes a rather limited type of partiality, in that every element conceived as falling under the specified relation, no matter how indistinctly indicated, is still envisioned to maintain its full arity and to remain every bit a complete, though unknown, <math>n\!</math>-tuple.  Still, there is a simple way to extend the concept of generic partiality in a significant fashion, achieving a form of PIRs to relations by making use of ''higher order propositions''.
    
Extending the concept of generic partiality, by iterating the principle on which it is based, leads to higher order propositions about elementary relations, or propositions about relations, as one way to achieve partial specifications of relations, or PIRs to relations.
 
Extending the concept of generic partiality, by iterating the principle on which it is based, leads to higher order propositions about elementary relations, or propositions about relations, as one way to achieve partial specifications of relations, or PIRs to relations.
   −
This direction of generalization expands the scope of PIRs by means of an analogical extension, and can be charted in the following manner.  If the sign or expression (name or formula) of an n place relation can be interpreted as a proposition about n tuples and thus as a PIR to an elementary relation, then a higher order proposition about n tuples is a proposition about n place relations that can be used to formulate a PIR to an n place relation.
+
This direction of generalization expands the scope of PIRs by means of an analogical extension, and can be charted in the following manner.  If the sign or expression (name or formula) of an <math>n\!</math>-place relation can be interpreted as a proposition about <math>n\!</math>-tuples and thus as a PIR to an elementary relation, then a higher order proposition about <math>n\!</math>-tuples is a proposition about <math>n\!</math>-place relations that can be used to formulate a PIR to an <math>n\!</math>-place relation.
    
In order to formalize these ideas, it is helpful to have notational devices for switching back and forth among different ways of exemplifying what is abstractly the same contents of information, in particular, for translating among sets, their logical expressions, and their functional indications.
 
In order to formalize these ideas, it is helpful to have notational devices for switching back and forth among different ways of exemplifying what is abstractly the same contents of information, in particular, for translating among sets, their logical expressions, and their functional indications.
   −
If S c X is a set contained in a universal set or domain X, then "S#", read as "S sharp" or "S selective", denotes the "selector function" of S, defined as:
+
Given a set <math>X\!</math> and a subset <math>A \subseteq X,\!</math> let the ''selector function of <math>A\!</math> in <math>X\!</math>'' be notated as <math>A^\sharp\!</math> and defined as follows.
   −
S# : X > B with S#(x) = 1 iff x C S.
+
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
A^\sharp : X \to \mathbb{B} & \text{such that} & A^\sharp (x) = 1 \iff x \in A.
 +
\end{array}</math>
 +
|}
   −
Other names for the same concept, appearing under various notations, are the "indicator function" or the "characteristic function" of a set.
+
Other names for the same concept, appearing under various notations, are the ''characteristic function'' or the ''indicator function'' of <math>A\!</math> in <math>X.\!</math>
   −
Conversely, if one has a binary valued function f : X > B, then "f#", read as "f numbd" or "f selection", denotes the "selected set" of f, defined as:
+
Conversely, given a boolean-valued function <math>f : X \to \mathbb{B},\!</math> let the ''selected set of <math>f\!</math> in <math>X\!</math>'' be notated as <math>f_\flat\!</math> and defined as follows.
   −
f# c X with f#  = f 1(1) = {x C X : f(x) = 1}.
+
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
f_\flat \subseteq X & \text{such that} & f_\flat = f^{-1}(1) = \{ x \in X : f(x) = 1 \}.
 +
\end{array}</math>
 +
|}
   −
Other names for this subset are the "fiber", "pre image", "level set", or "antecedents" of 1 under the mapping f.
+
Other names for the same concept are the ''fiber'', ''level set'', or ''pre-image'' of 1 under the mapping <math>f : X \to \mathbb{B}.\!</math>
   −
Obviously, the relation between these operations is such that:
+
Obviously, the relation between these operations is such that the following equations hold.
   −
S##  = S    and   f##  = f.
+
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lll}
 +
(A^\sharp)_\flat = A & \text{and} & (f_\flat)^\sharp = f.
 +
\end{array}</math>
 +
|}
   −
It will facilitate future discussions to explicitly go through the details of applying these selective operations to the case of n place relations.  If R c X1x...xXn is an n place relation, then R# : X1x...xXn  > B is the selector of R defined by:
+
It will facilitate future discussions to go through the details of applying these selective operations to the case of <math>n\!</math>-place relations.  If <math>L \subseteq X_1 \times \ldots \times X_n\!</math> is an <math>n\!</math>-place relation, then <math>L^\sharp : X_1 \times \ldots \times X_n \to \mathbb{B}\!</math> is the selector of <math>L\!</math> defined as follows.
   −
R#(<x1, ... , xn>) = 1 iff <x1, ... , xn> C R.
+
{| align="center" cellspacing="8" width="90%"
</pre>
+
|
 +
<math>\begin{array}{lll}
 +
L^\sharp (x_1, \ldots, x_n) = 1 & \iff & (x_1, \ldots, x_n) \in L.
 +
\end{array}</math>
 +
|}
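
These selective operations are easy to emulate computationally.  The following is a minimal Python sketch, assuming finite sets throughout; the names selector, selected_set, and relation_selector are inventions of the example rather than notations from the text.

<pre>
# Minimal sketch of the sharp (selector) and flat (selected set) operations
# on finite sets, using ordinary Python sets and functions.

def selector(A, X):
    """Return the selector function A-sharp : X -> {0, 1} of A within X."""
    return lambda x: 1 if x in A else 0

def selected_set(f, X):
    """Return the selected set f-flat = f^(-1)(1) of a map f : X -> {0, 1}."""
    return {x for x in X if f(x) == 1}

X = {1, 2, 3, 4, 5}
A = {2, 3, 5}

A_sharp = selector(A, X)
assert selected_set(A_sharp, X) == A                     # (A-sharp)-flat = A

f = lambda x: 1 if x % 2 == 0 else 0                     # an arbitrary map X -> {0, 1}
f_flat = selected_set(f, X)
assert all(selector(f_flat, X)(x) == f(x) for x in X)    # (f-flat)-sharp = f

# Selector of an n-place relation L contained in X_1 x ... x X_n:
def relation_selector(L):
    """Return L-sharp, mapping an n-tuple to 1 iff the tuple belongs to L."""
    return lambda *xs: 1 if xs in L else 0

L = {(1, 2), (3, 4)}                                     # a 2-place relation
L_sharp = relation_selector(L)
assert L_sharp(1, 2) == 1 and L_sharp(2, 1) == 0
</pre>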
    
===6.33. Sign Relational Complexes===
 
===6.33. Sign Relational Complexes===
Line 8,592: Line 8,683:  
# Under the ''sign-theoretic'' alternative one takes the partiality as something affecting only the signs used in discussion.  Accordingly, one approaches the task as a matter of handling partial information about ordinary objects, namely, the same domains of objects initially given at the outset of discussion.
 
# Under the ''sign-theoretic'' alternative one takes the partiality as something affecting only the signs used in discussion.  Accordingly, one approaches the task as a matter of handling partial information about ordinary objects, namely, the same domains of objects initially given at the outset of discussion.
   −
<pre>
+
But a working maxim of information theory says that &ldquo;Partial information is your ordinary information.&rdquo; Applied to the principle regulating the sign-theoretic convention this means that the adjective ''partial'' is swallowed up by the substantive ''information'', so that the ostensibly more general case is always already subsumed within the ordinary case.  Because partiality is part and parcel to the usual nature of information, it is a perfectly typical feature of the signs and expressions bearing it to provide normally only partial information about ordinary objects.
But a working maxim of information theory says that "Partial information is your ordinary information".  Applied to the principle regulating the sign theoretic convention this means that the adjective "partial" is swallowed up by the substantive "information", so that the ostensibly more general case is always already subsumed within the ordinary case.  Because partiality is part and parcel to the usual nature of information, it is a perfectly typical feature of the signs and expressions bearing it to provide normally only partial information about ordinary objects.
     −
The only time when a finite sign or expression can give the appearance of determining a perfectly precise content or a post finite amount of information, for example, when the symbol “e” is used to denote the number also known as “the unique base of the natural logarithms” — this can only happen when interpreters are prepared, by dint of the information embodied in their prior design and preliminary training, to accept as meaningful and be terminally satisfied with what is still only a finite content, syntactically speaking.  Every remaining impression that a perfectly determinate object, an "individual" in the original sense of the word, has nevertheless been successfully specified this can only be the aftermath of some prestidigitation, that is, the effect of some pre arranged consensus, for example, of accepting a finite system of definitions and axioms that are supposed to define the space R and the element e within it, and of remembering or imagining that an effective proof system has once been able or will yet be able to convince one of its demonstrations.
+
The only time a finite sign or expression can give the appearance of determining a perfectly precise content or a post-finite amount of information, for example, when the symbol <math>{}^{\backprime\backprime} e {}^{\prime\prime}\!</math> is used to denote the number also known as &ldquo;the unique base of the natural logarithms&rdquo;, is when interpreters are prepared, by dint of the information embodied in their prior design and preliminary training, to accept as meaningful and be terminally satisfied with what is still only a finite content, syntactically speaking.  Any remaining impression that a perfectly determinate object, an ''individual'' in the original sense of the word, has nevertheless been successfully specified can only be the aftermath of some prestidigitation, that is, the effect of some pre-arranged consensus, for example, of accepting a finite system of definitions and axioms that are supposed to define the space <math>\mathbb{R}\!</math> and the element <math>e\!</math> within it, and of remembering or imagining that an effective proof system has once been able or will yet be able to convince one of its demonstrations.
   −
Ultimately, one must be prepared to work with probability distributions that are defined on entire spaces O of the relevant objects or outcomes.  But probability distributions are just a special class of functions f : O > [0, 1] c R, where R is the real line, and this means that the corresponding theory of partializations involves the dual aspect of the domain O, dealing with the "functionals" defined on it, or the functions that map it into "coefficient" spaces.  And since it is unavoidable in a computational framework, one way or another every type of coefficient information, real or otherwise, must be approached bit by bit.  That is, all information is defined in terms of the either or decisions that must be made to really and practically determine it.  So, to make a long story short, one might as well approach this dual aspect by starting with the functions f : O > B = {0, 1}, in effect, with the logic of propositions.
+
Ultimately, one must be prepared to work with probability distributions that are defined on entire spaces <math>O\!</math> of the relevant objects or outcomes.  But probability distributions are just a special class of functions <math>f : O \to [0, 1] \subseteq \mathbb{R},\!</math> where <math>\mathbb{R}\!</math> is the real line, and this means that the corresponding theory of partializations involves the dual aspect of the domain <math>O,\!</math> dealing with the ''functionals'' defined on it, or the functions that map it into ''coefficient'' spaces.  And since it cannot be avoided in a computational framework, every type of coefficient information, real or otherwise, must one way or another be approached bit by bit.  That is, all information is defined in terms of the either-or decisions that must be made to determine it.  So, to make a long story short, one might as well approach this dual aspect by starting with the functions <math>f : O \to \{ 0, 1 \} = \mathbb{B},\!</math> in effect, with the logic of propositions.
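
A toy example may make the point concrete.  The following Python sketch treats a probability distribution on a small finite outcome space as a map into the unit interval and a proposition as the <math>\{ 0, 1 \}\!</math>-valued special case; the outcome space and all names are fabricated purely for illustration.

<pre>
# Sketch: a proposition on a finite outcome space O is the {0, 1}-valued
# special case of a [0, 1]-valued assignment of coefficients.

O = {"rain", "snow", "sun"}

p = {"rain": 0.5, "snow": 0.2, "sun": 0.3}    # a probability distribution on O
q = {"rain": 1, "snow": 1, "sun": 0}          # a proposition: "precipitation"

assert abs(sum(p.values()) - 1.0) < 1e-9      # p sums to 1
assert set(q.values()) <= {0, 1}              # q is {0, 1}-valued

# The proposition q doubles as the indicator of the event it describes,
# and the probability of that event is recovered as an expectation:
event = {x for x in O if q[x] == 1}
prob_event = sum(p[x] for x in event)
assert abs(prob_event - 0.7) < 1e-9
</pre>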
   −
I turn now to the question of "partially specified" (PS) relations, or “partially informed relations” (PIRs), in other words, to the explicit treatment of relations in terms of the information that is logically possessed or actually expressed about them.  There seem to be several ways to approach the concept of an n place PIR and the supporting notion of a PS n tuple.  Since the term "partial relation" is already implicitly in use for the general class of relations that are not necessarily total on any of their domains, I will coin the term "pro relation", on analogy with "pronoun" and "proposition", to denote an expression of information about a relation, a contingent indication that, if and when completed, conceivably points to a particular relation.
+
I turn now to the question of ''partially specified relations'', or ''partially informed relations'' (PIRs), in other words, to the explicit treatment of relations in terms of the information that is logically possessed or actually expressed about them.  There seem to be several ways to approach the concept of an <math>n\!</math>-place PIR and the supporting notion of a partially specified <math>n\!</math>-tuple.  Since the term ''partial relation'' is already implicitly in use for the general class of relations that are not necessarily total on any of their domains, I will coin the term ''pro-relation'', on analogy with ''pronoun'' and ''proposition'', to denote an expression of information about a relation, a contingent indication that, if and when completed, conceivably points to a particular relation.
   −
One way to deal with "partially informed categories" (PICs) of n place relations is to contemplate incomplete relational forms or schemata.  Regarded over the years chiefly in logical and intensional terms, constructs of roughly this type have been variously referred to as "rhemes" or "rhemata" (Peirce), "unsaturated relations" (Frege), or "frames" (in current AI literature).  Expressed in extensional terms, talking about PICs of n place relations is tantamount to admitting elementary relations with missing elements.  The question is not just syntactic How to represent an n tuple with empty places? but also semantic How to make sense of an n tuple with less than n elements?
+
One way to deal with ''partially informed categories'' of <math>n\!</math>-place relations is to contemplate incomplete relational forms or schemata.  Regarded over the years chiefly in logical and intensional terms, constructs of roughly this type have been variously referred to as ''rhemes'' or ''rhemata'' (Peirce), ''unsaturated relations'' (Frege), or ''frames'' (in current AI parlance).  Expressed in extensional terms, talking about partially informed categories of <math>n\!</math>-place relations is tantamount to admitting elementary relations with missing elements.  The question is not just syntactic &mdash; How to represent an <math>n\!</math>-tuple with empty places? &mdash; but also semantic &mdash; How to make sense of an <math>n\!</math>-tuple with less than <math>n\!</math> elements?
   −
In order to deal with PIRs in a thoroughly consistent fashion, it appears necessary to contemplate elementary relations that present themselves as being "unsaturated" (in Frege's usage of that term), in other words, to consider elements of a presumptive product space that in some sense "wanna be" n tuples or "would be" sequences of a certain length, but are currently missing components in some of their places.
+
In order to deal with PIRs in a thoroughly consistent fashion, it appears necessary to contemplate elementary relations that present themselves as being ''unsaturated'' (in Frege's sense of that term), in other words, to consider elements of a presumptive product space that in some sense ''wanna&nbsp;be'' <math>n\!</math>-tuples or ''would&nbsp;be'' sequences of a certain length, but are currently missing components in some of their places.
    
To the extent that the issues of partialization become obvious at the level of symbols and can be dealt with by elementary syntactic means, they initially make their appearance in terms of the various ways that data can go missing.
 
To the extent that the issues of partialization become obvious at the level of symbols and can be dealt with by elementary syntactic means, they initially make their appearance in terms of the various ways that data can go missing.
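
Before any special notation is introduced, it may help to see how little machinery the syntactic side of the problem requires.  The following Python sketch models a partial tuple as a tuple with a placeholder in each empty place and recovers the completions it admits within a given relation; the placeholder convention and the helper names are assumptions of the example, not notations from the text.

<pre>
# Sketch: a partial tuple with missing places, and the set of completions
# it receives from a given relation.

L = {("a", 1, "x"), ("a", 2, "y"), ("b", 1, "x")}   # a small 3-place relation

def matches(partial, full):
    """A full tuple matches a partial one if it agrees on every filled place."""
    return all(p is None or p == f for p, f in zip(partial, full))

def completions(partial, relation):
    """All tuples of the relation compatible with the partial tuple."""
    return {t for t in relation if matches(partial, t)}

# A triple missing its value in the third place:
assert completions(("a", 1, None), L) == {("a", 1, "x")}

# A triple missing values in two places conveys less information:
assert completions(("a", None, None), L) == {("a", 1, "x"), ("a", 2, "y")}
</pre>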
   −
The alternate notation "a^b" is provided for the ordered pair <a, b>. This choice of representation for ordered pairs is especially apt in the case of "concrete indices" (CIs) and "localized addresses" (LAs), where one wants the lead item to serve as a pointed reminder of the itemized content, as in i^Xi = <i, Xi>, and it helps to stress the individuality of each member in the indexed family, as in G = {Gj} = {j^Gj} = {<j, Gj>}.
+
The alternate notation <math>a \widehat{~} b\!</math> is provided for the ordered pair <math>(a, b).\!</math>  This choice of representation for ordered pairs is especially apt in the case of ''concrete indices'' and ''localized addresses'', where one wants the lead item to serve as a pointed reminder of the itemized content, as in <math>j \widehat{~} X_j = (j, X_j),\!</math> and it helps to stress the individuality of each member in the indexed family, as in the following set of equivalent notations.
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{matrix}
 +
G & = & \{ G_j \} & = & \{ j \widehat{~} G_j \} & = & \{ (j, G_j ) \}.
 +
\end{matrix}</math>
 +
|}
   −
The "caret" (^) device works well in any situation where one desires to accentuate the fact that a formal subscript is being reclaimed and elevated to the status of an actual parameter.  By way of the operation indicated by the caret character the index bound to an object term can be rehabilitated as a full fledged component of an elementary relation, thereby schematically embedding the indicated object in the experiential space of a typical agent.
+
The ''link'' device <math>(\,\widehat{~}~)\!</math> works well in any situation where one desires to accentuate the fact that a formal subscript is being reclaimed and elevated to the status of an actual parameter.  By way of the operation indicated by the link symbol the index bound to an object term can be rehabilitated as a full-fledged component of an elementary relation, thereby schematically embedding the indicated object in the experiential space of a typical agent.
   −
The form of the caret notation is intended to suggest the use of "pointers" and "views" in computational frameworks, letting one interpret "j^x" in any one of many various ways, for example:
+
The form of the link notation is intended to suggest the use of ''pointers'' and ''views'' in computational frameworks, letting one interpret <math>j \widehat{~} x\!</math> in several different ways, for example, any one of the following.
   −
j^x = j's indication of x, j's access to x,
+
{| align="center" cellspacing="8" width="90%"
j's information on x, j's allusion to x,
+
|
j's copy of x, j's view of x.
+
<math>\begin{array}{lllll}
 +
j \widehat{~} x
 +
& =
 +
& j^\texttt{,}\text{s access to}~ x,
 +
& j^\texttt{,}\text{s allusion to}~ x,
 +
& j^\texttt{,}\text{s copy of}~ x,
 +
\\[4pt]
 +
&
 +
& j^\texttt{,}\text{s indication of}~ x,
 +
& j^\texttt{,}\text{s information on}~ x,
 +
& j^\texttt{,}\text{s view of}~ x.
 +
\end{array}</math>
 +
|}
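
Since the link notation is explicitly meant to recall pointers and views in computational settings, a small sketch may help fix the intuition.  In the following Python fragment the link <math>j \widehat{~} x\!</math> is modeled simply as the ordered pair <math>(j, x),\!</math> and an indexed family is stored as a set of such pairs; the function names are illustrative only.

<pre>
# Sketch: the link j ^ x as the ordered pair (j, x), and an indexed family
# of objects stored as a set of such pairs.

def link(j, x):
    """The link j ^ x, i.e. the ordered pair (j, x)."""
    return (j, x)

# A family G = {G_j : j in J}, with each member tagged by its index:
G = {link(1, "first object"), link(2, "second object"), link(3, "third object")}

# Recovering the member at a given index amounts to following the pointer:
def view(j, family):
    """Return the objects that the index j points to within the family."""
    return {x for (i, x) in family if i == j}

assert view(2, G) == {"second object"}
</pre>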
    
Presently, the distinction between indirect pointers and direct pointers, that is, between virtual copies and actual views of an objective domain, is not yet relevant here, being a dimension of variation that the discussion is currently abstracting over.
 
Presently, the distinction between indirect pointers and direct pointers, that is, between virtual copies and actual views of an objective domain, is not yet relevant here, being a dimension of variation that the discussion is currently abstracting over.
   −
I would like to record here, in what is topically the appropriate place, notice of a number of open questions that will have to be addressed if anyone desires to make a consistent calculus out of this caret notation. Perhaps it is only because the franker forms of liaison involved in the caret couple a^b are more subject to the vagaries of syntactic elision than the corresponding bindings of the anglish ligature <a, b>, but for some reason or other the circumflex character of these diacritical notices are much more liable to suggest various forms of elaboration, including higher order generalizations and information theoretic partializations of the very idea of n tuples and sequences.
+
===6.34. Set-Theoretic Constructions===
   −
One way to deal with the problems of partial information ...
+
The next few sections deal with the informational relationships that exist between <math>n\!</math>-place relations and the relations of fewer dimensions that arise as their projections. A number of set-theoretic constructions of constant use in this investigation are brought together and described in the present section. Because their intended application is mainly to sign relations and other triadic relations, and since the current focus is restricted to discrete examples of these types, no attempt is made to present these constructions in their most general and elegant fashions, but only to deck them out in the forms that are most readily pressed into immediate service.
   −
Relational complex?
+
An initial set of operations, required to establish the subsequent constructions, all have in common the property that they do exactly the opposite of what is normally done in abstracting sets from situations.  These operations reconstitute, though still in a generic, schematic, or stereotypical manner, some of the details of concrete context and interpretive nuance that are commonly suppressed in forming sets.  Stretching points back along the direction of their initial pointing out, these extensions return to the mix a well-chosen selection of features, putting back in those dimensions from which ordinary sets are forced to abstract and in their ordination to treat as distractions.
   −
R = R(1) U ... U R(n)
+
In setting up these constructions, one typically makes use of two kinds of index sets, in colloquial terms, ''clipboards'' and ''scrapbooks''.
   −
Sign relational complex?
+
<ol style="list-style-type:decimal">
   −
R = R(1) U R(2) U R(3)
+
<li>
 +
<p>The smaller and shorter-term index sets, typically having the form <math>I = \{ 1, \ldots, n \},\!</math> are used to keep tabs on the terms of finite sets and sequences, unions and intersections, sums and products.</p>
   −
1. Carets linkages can be chained together to form sequences of indications or n tuples, without worrying too much about the order of collecting terms in the corresponding angle brackets.
+
<p>In this context and elsewhere, the notation <math>{[n] = \{ 1, \ldots, n \}}\!</math> will be used to refer to a ''standard segment'' (finite initial subset) of the natural numbers <math>\mathbb{N} = \{ 1, 2, 3, \ldots \}.\!</math></p></li>
   −
a^b^c  =  <a, b, c> = <a, <b, c>> =  <<a, b>, c>.
+
<li>
 +
<p>The larger and longer-term index sets, typically having the form <math>J \subseteq \mathbb{N} = \{ 1, 2, 3, \ldots \},\!</math> are used to enumerate families of objects that enjoy a more abiding reference throughout the course of a discussion.</p></li></ol>
   −
These equivalences depend on the existence of natural isomorphisms between different ways of constructing n place product spaces, that is, on the associativity of pairwise products, a not altogether trivial result (MacLane, CatWorkMath, ch. 7).
+
'''Definition.'''  An ''indicated set'' <math>j \widehat{~} S\!</math> is an ordered pair <math>j \widehat{~} S = (j, S),\!</math> where <math>j \in J\!</math> is the ''indicator'' of the set and <math>S\!</math> is the set indicated.
   −
2. Higher order indications (HOIs)?
+
'''Definition.'''  An ''indited set'' <math>j \widehat{~} S\!</math> extends the incidental and extraneous indication of a set into a constant indictment of its entire membership.
   −
^x  =     < , x> x^  =   <x, >    ?
+
{| align="center" cellspacing="8" width="90%"
^^x  = < , < , x>> x^^  =  <<x, >, > ?
+
|
+
<math>\begin{array}{lll}
Fragments
+
j \widehat{~} S
 +
& = & j \widehat{~} \{ j \widehat{~} s : s \in S \}
 +
\\[4pt]
 +
& = & j \widehat{~} \{ (j, s) : s \in S \}
 +
\\[4pt]
 +
& = & (j, \{ j \} \times S)
 +
\end{array}</math>
 +
|}
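
The difference between indication and inditement can also be put in computational terms.  A minimal Python sketch follows, assuming finite sets and integer indices; the names indicated_set and indited_set are invented for the example.

<pre>
# Sketch of an indicated set j ^ S = (j, S) versus an indited set
# j ^ S = (j, {j} x S).

def indicated_set(j, S):
    """Attach the index j to the set S as a whole."""
    return (j, frozenset(S))

def indited_set(j, S):
    """Attach the index j to the set and to each of its elements."""
    return (j, frozenset((j, s) for s in S))

S = {"a", "b", "c"}

assert indicated_set(7, S) == (7, frozenset({"a", "b", "c"}))
assert indited_set(7, S)   == (7, frozenset({(7, "a"), (7, "b"), (7, "c")}))
</pre>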
   −
In talking about properties and classes of relations, one would like to refer to "all relations" as forming a topic of potential discussion, and then take it as a background for contemplating ...
+
Notice the difference between these notions and the more familiar concepts of an ''indexed set'', ''numbered set'', and ''enumerated set''.  In each of these cases the construct that results is one where each element has a distinctive index attached to it.  In contrast, the above indications and indictments attach to the set <math>S\!</math> as a whole, and respectively to each element of it, the same index number <math>j.\!</math>
   −
In talking and thinking, often in just that order, about properties and classes of relations, one is always invoking, explicitly or implicitly, a particular background, a limited field of experience, actual or potential, against which each object of "discussion and thought" (DAT) figuresExpressing the matter in the idiom of logical inquiry, one brings to mind a preconceived universe of discourse U or a restricted domain of discussion X, and then contemplates ...
+
'''Definition.'''  An ''indexed set'' <math>(S, L)\!</math> is constructed from two components:  its ''underlying set'' <math>S\!</math> and its ''indexing relation'' <math>L : S \to \mathbb{N},\!</math> where <math>L\!</math> is total at <math>S\!</math> and tubular at <math>\mathbb{N}.\!</math> It is defined as follows:
   −
This direction of generalization expands the scope of PIRs by means of an analogical extension, and can be charted in the following manner.  If the name of a relation can be taken as a PIR to elementary relations, that is, if the formula of an n place relation can be interpreted as a proposition about n tuples, then a PIR to relations themselves can be formulated as a proposition about relations and thus as a HOPE about elementary relations or n tuples.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>(S, L) = \{ \{ s \} \times L(s) : s \in S \} = \{ (s, j) : s \in S, j \in L(s) \}.\!</math>
 +
|}
   −
One way to extend the generic brand of partiality among relations in a non trivial direction can be charted as follows.  If the name or formula of a relation is a PIR to elementary relations, that is, if a sign or expression an n place relation is interpreted as a proposition about n tuples, then a PIR to relations ...
+
<math>L\!</math> assigns a unique set of &ldquo;local habitations&rdquo; <math>L(s)\!</math> to each element <math>s\!</math> in the underlying set <math>S.\!</math>
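
By way of contrast, here is a brief sketch of an indexed set, with the indexing relation given as a map from each element to its set of local habitations; the representation is only one convenient choice, not a construction from the text.

<pre>
# Sketch of an indexed set (S, L), with the indexing relation L given as a
# map from each element of S to its nonempty set of indices in N.

S = {"a", "b"}
L = {"a": {1, 2}, "b": {5}}          # L is total at S: every element gets indices

indexed_set = {(s, j) for s in S for j in L[s]}
assert indexed_set == {("a", 1), ("a", 2), ("b", 5)}
</pre>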
</pre>
     −
===6.34. Set-Theoretic Constructions===
+
'''Definition.'''  A ''numbered set'' <math>(S, f),\!</math> based on the set <math>S\!</math> and the injective function <math>{f : S \to \mathbb{N}},</math> is defined as follows. &hellip;
   −
<pre>
+
'''Definition.''' An ''enumerated set'' <math>(S, f)\!</math> is a numbered set with a bijective <math>f.\!</math> &hellip;
The next few sections deal with the informational relationships that exist between n place relations and the relations of fewer dimensions that arise as their projectionsA number of set theoretic constructions of constant use in this investigation are brought together and described in the present section.  Because their intended application is mainly to sign relations and other triadic relations, and since the current focus is restricted to discrete examples of these types, no attempt is made to present these constructions in their most general and elegant fashions, but only to deck them out in the forms that are most readily pressed into immediate service.
     −
An initial set of operations, required to establish the subsequent constructions, all have in common the property that they do exactly the opposite of what is normally done in abstracting sets from situationsThese operations reconstitute, though still in a generic, schematic, or stereotypical manner, some of the details of concrete context and interpretive nuance that are commonly suppressed in forming sets.  Stretching points back along the direction of their initial pointing out, these extensions return to the mix a well chosen selection of features, putting back in those dimensions from which ordinary sets are forced to abstract and in their ordination to treat as distractions.
+
'''Definition.''' The ''<math>n\!</math>-fold sum'' (''co-product'', ''disjoint union'') of the sets <math>X_1, \ldots, X_n\!</math> is notated and defined as follows:
   −
In setting up these constructions, one typically makes use of two kinds of index sets, in colloquial terms, "clipboards" and "scrapbooks":
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\coprod_{i=1}^n X_i ~=~ X_1 + \ldots + X_n ~=~ 1 \widehat{~} X_1 \cup \ldots \cup n \widehat{~} X_n.\!</math>
 +
|}
   −
1. The smaller and shorter term index sets, typically having the form I = {1, ... , n}, are used to keep tabs on the terms of finite sets and sequences, unions and intersections, sums and products.
+
'''Definition.'''  The ''<math>n\!</math>-fold product'' (''cartesian product'') of the sets <math>X_1, \ldots, X_n\!</math> is notated and defined as follows:
   −
In this context and elsewhere, the notation [n] = {1, ... , n} will be used to refer to a "standard segment" (finite initial subset) of the natural numbers N = {1, 2, 3, ... }.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\prod_{i=1}^n X_i ~=~ X_1 \times \ldots \times X_n ~=~ \{ (x_1, \ldots, x_n) : x_i \in X_i \}.\!</math>
 +
|}
   −
2. The larger and longer term index sets, typically having the form J c N = {1, 2, 3, ... }, are used to enumerate families of objects that enjoy a more abiding reference throughout the course of a discussion.
+
As an alternative definition, the <math>n\!</math>-tuples of <math>\prod_{i=1}^n X_i\!</math> can be regarded as sequences of elements from the successive <math>X_i\!</math> and thus as functions that map <math>[n]\!</math> into the sum of the <math>X_i,\!</math> namely, as the functions <math>f : [n] \to \coprod_{i=1}^n X_i\!</math> that obey the condition <math>f(i) \in i \widehat{~} X_i.\!</math>
   −
Definition.  An "indicated set" j^S is an ordered pair j^S = <j, S>, where j C J is the indicator of the set and S is the set indicated.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\prod_{i=1}^n X_i ~=~ X_1 \times \ldots \times X_n ~=~ \{ f : [n] \to \coprod_{i=1}^n X_i ~|~ f(i) \in X_i \}.\!</math>
 +
|}
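
Both constructions are easy to spell out for finite sets.  The following Python sketch realizes the disjoint union as a set of links <math>i \widehat{~} x\!</math> and the product both as a set of tuples and as the corresponding maps from <math>[n]\!</math> into the sum; the representation choices are assumptions of the example.

<pre>
# Sketch: n-fold sum (disjoint union) and n-fold product of finite sets,
# with tuples of the product also viewed as maps from [n] into the sum.

from itertools import product

X1, X2, X3 = {"a", "b"}, {0, 1}, {"x"}
Xs = [X1, X2, X3]

# Disjoint union: tag each element with the (1-based) index of its summand.
disjoint_union = {(i, x) for i, Xi in enumerate(Xs, start=1) for x in Xi}
assert (1, "a") in disjoint_union and (2, 0) in disjoint_union

# Cartesian product as a set of tuples:
prod = set(product(*Xs))
assert len(prod) == len(X1) * len(X2) * len(X3)

# Each tuple (x_1, ..., x_n) corresponds to the map i |-> (i, x_i), which
# sends [n] into the disjoint union and lands in the i-th summand.
def as_function(t):
    return {i: (i, x) for i, x in enumerate(t, start=1)}

for t in prod:
    f = as_function(t)
    assert all(f[i] in disjoint_union for i in f)
</pre>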
   −
Definition.  An "indited set" j^S extends the incidental and extraneous indication of a set into a constant indictment of its entire membership.
+
Viewing these functions as relations <math>f \subseteq J \times J \times X,\!</math> where <math>X = \bigcup_{i=1}^n X_i\!</math> &hellip;
   −
j^S  =  j^{j^s : s C S} j^{<j, s> : s C S}  = <j, {j} x S>.
+
Another way to view these elements is as triples <math>i \widehat{~} j \widehat{~} x\!</math> such that <math>i = j\!</math> &hellip;
   −
Notice the difference between these notions and the more familiar concepts of an "indexed set", "numbered set", and "enumerated set". In each of these cases the construct that results is one where each element has a distinctive index attached to it. In contrast, the above indications and indictments attach to the set S as a whole, and respectively to each element of it, the same index number j.
+
===6.35. Reducibility of Sign Relations===
   −
Definition.  An "indexed set" <S, L> is constructed from two components:  its "underlying set" S and its "indexing relation" L : S  > N, where L is total at S and tubular at N. It is defined as follows:
+
This Section introduces a topic of fundamental importance to the whole theory of sign relations, namely, the question of whether triadic relations are ''determined by'', ''reducible to'', or ''reconstructible from'' their dyadic projections.
   −
<S, L>  =  { {s} x L(s) : s C S }  =  {<s, j> : s C S, j C L(s)}.
+
Suppose <math>L \subseteq X \times Y \times Z\!</math> is an arbitrary triadic relation and consider the information about <math>L\!</math> that is provided by collecting its dyadic projections. To formalize this information define the ''projective triple'' of <math>L\!</math> as follows:
   −
L assigns a unique set of "local habitations" L(s) to each element s in the underlying set S.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\mathrm{Proj}^{(2)} L ~=~ (\mathrm{proj}_{12} L, ~ \mathrm{proj}_{13} L, ~ \mathrm{proj}_{23} L).\!</math>
 +
|}
   −
Definition.  A "numbered set" <S, f>, based on the set S and the injective function f : S  > N, is defined as follows. ???
+
If <math>L\!</math> is visualized as a solid body in the 3-dimensional space <math>X \times Y \times Z,\!</math> then <math>\mathrm{Proj}^{(2)} L\!</math> can be visualized as the arrangement or ordered collection of shadows it throws on the <math>XY, ~ XZ, ~ YZ\!</math> planes, respectively.
   −
Definition.  An "enumerated set" <S, f> is a numbered set with a bijective f. ???
+
Two more set-theoretic constructions are worth introducing at this point, in particular for describing the source and target domains of the projection operator <math>\mathrm{Proj}^{(2)}.\!</math>
   −
The "n fold sum" ("co product", "disjoint union") of the sets X1, ... , Xn is notated and defined as follows:
+
The set of subsets of a set <math>S\!</math> is called the ''power set'' of <math>S.\!</math>  This object is denoted by either of the forms <math>\mathrm{Pow}(S)\!</math> or <math>2^S\!</math> and defined as follows:
   −
Ui Xi  = X1 + ... + Xn  = 1^X1 U ... U n^Xn.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\mathrm{Pow}(S) ~=~ 2^S ~=~ \{ T : T \subseteq S \}.\!</math>
 +
|}
   −
The "n fold product" ("cartesian product") of the sets X1, ... , Xn is notated and defined as follows:
+
The power set notation can be used to provide an alternative description of relations.  In the case where <math>S\!</math> is a cartesian product, say <math>{S = X_1 \times \ldots \times X_n},\!</math> then each <math>n\!</math>-place relation <math>L\!</math> described as a subset of <math>S,\!</math> say <math>L \subseteq X_1 \times \ldots \times X_n,\!</math> is equally well described as an element of <math>\mathrm{Pow}(S),\!</math> in other words, as <math>L \in \mathrm{Pow}(X_1 \times \ldots \times X_n).\!</math>
   −
Xi Xi  =  X1 x ... x Xn  = {<x1, ... , xn> : xi C Xi for all i}.
+
The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets <math>(X, Y, Z),\!</math> is called the ''dyadic explosion'' of <math>X \times Y \times Z.\!</math> This object is denoted <math>\mathrm{Explo}(X, Y, Z ~|~ 2),\!</math> read as the ''explosion of <math>X \times Y \times Z\!</math> by twos'', or more simply as <math>X, Y, Z ~\mathrm{choose}~ 2,\!</math> and defined as follows:
   −
As an alternative definition, the n tuples of Xi Xi can be regarded as sequences of elements that come from the successive Xi, and thus as the various functions of a certain sort that map [n] into the sum of the Xi, namely, as the sort of functions f : [n]  > Ui Xi that obey the condition f(i) C i^Xi.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\mathrm{Explo}(X, Y, Z ~|~ 2) ~=~ \mathrm{Pow}(X \times Y) \times \mathrm{Pow}(X \times Z) \times \mathrm{Pow}(Y \times Z).\!</math>
 +
|}
   −
Xi Xi  =  X1 x ... x Xn  =  { f : [n]  > Ui Xi | f(i) C Xi for all i}.
+
This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.
   −
Viewing these functions as relations f c JxJxX, where X = Ui Xi,
+
By means of these constructions the operation that forms <math>\mathrm{Proj}^{(2)} L\!</math> for each triadic relation <math>L \subseteq X \times Y \times Z\!</math> can be expressed as a function:
   −
???
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\mathrm{Proj}^{(2)} : \mathrm{Pow}(X \times Y \times Z) \to \mathrm{Explo}(X, Y, Z ~|~ 2).\!</math>
 +
|}
   −
Another way to view these elements is as triples i^j^x such that i = j ???
+
In this setting the issue of whether triadic relations are ''reducible to'' or ''reconstructible from'' their dyadic projections, both in general and in specific cases, can be identified with the question of whether <math>\mathrm{Proj}^{(2)}\!</math> is injective.  The mapping <math>\mathrm{Proj}^{(2)}\!</math> is said to ''preserve information'' about the triadic relations <math>L \in \mathrm{Pow}(X \times Y \times Z)\!</math> if and only if it is injective, otherwise one says that some loss of information has occurred in taking the projections.  Given a specific instance of a triadic relation <math>L \in \mathrm{Pow}(X \times Y \times Z),\!</math> it can be said that <math>L\!</math> is ''determined by'' (''reducible to'' or ''reconstructible from'') its dyadic projections if and only if <math>(\mathrm{Proj}^{(2)})^{-1}(\mathrm{Proj}^{(2)}L)\!</math> is the singleton set <math>\{ L \}.\!</math>  Otherwise, there exists an <math>L'\!</math> such that <math>\mathrm{Proj}^{(2)}L = \mathrm{Proj}^{(2)}L',\!</math> and in this case <math>L\!</math> is said to be ''irreducibly triadic'' or ''genuinely triadic''.  Notice that irreducible or genuine triadic relations, when they exist, naturally occur in sets of two or more, the whole collection of them being equated or confounded with one another under <math>\mathrm{Proj}^{(2)}.\!</math>
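
For small finite examples the whole question can be settled by direct computation.  The following Python sketch computes the projective triple of a triadic relation and tests reducibility by brute force, exploiting the fact that any relation with the same projections must lie inside their natural join; the helper names and the brute-force strategy are choices of this example, not constructions from the text.  The test case uses the even- and odd-parity relations on <math>\mathbb{B}^3,\!</math> which the text later takes up as examples of irreducibly triadic relations.

<pre>
# Sketch: dyadic projections of a finite triadic relation L in X x Y x Z,
# and a brute-force test of whether L is determined by those projections.

from itertools import chain, combinations

def proj2(L):
    """The projective triple (proj_12 L, proj_13 L, proj_23 L)."""
    p12 = {(x, y) for (x, y, z) in L}
    p13 = {(x, z) for (x, y, z) in L}
    p23 = {(y, z) for (x, y, z) in L}
    return p12, p13, p23

def determined_by_projections(L):
    """True iff L is the only triadic relation with the projective triple of L."""
    p12, p13, p23 = proj2(L)
    # Any relation with these projections lies inside their natural join.
    join = {(x, y, z) for (x, y) in p12 for (u, z) in p13 if u == x if (y, z) in p23}
    candidates = chain.from_iterable(
        combinations(sorted(join), r) for r in range(len(join) + 1))
    same = [set(M) for M in candidates if proj2(set(M)) == (p12, p13, p23)]
    return same == [L]

# Even- and odd-parity relations on B^3 have identical dyadic projections,
# so neither is determined by its dyadic shadows:
B = (0, 1)
R0 = {(x, y, z) for x in B for y in B for z in B if (x + y + z) % 2 == 0}
R1 = {(x, y, z) for x in B for y in B for z in B if (x + y + z) % 2 == 1}
assert proj2(R0) == proj2(R1)
assert not determined_by_projections(R0)
</pre>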
</pre>
     −
===6.35. Reducibility of Sign Relations===
+
The next series of Tables illustrates the operation of <math>\mathrm{Proj}^{(2)}\!</math> by means of its actions on the sign relations <math>L_\text{A}\!</math> and <math>L_\text{B}.\!</math>  For ease of reference, Tables&nbsp;72.1 and 73.1 repeat the contents of Tables&nbsp;1 and 2, respectively, while the dyadic relations comprising <math>\mathrm{Proj}^{(2)}L_\text{A}\!</math> and <math>\mathrm{Proj}^{(2)}L_\text{B}\!</math> are shown in Tables&nbsp;72.2 to 72.4 and Tables&nbsp;73.2 to 73.4, respectively.
   −
<pre>
+
<br>
This section introduces a topic of fundamental importance to the whole theory of sign relations, namely, the question of whether triadic relations are "determined by", "reducible to", or "reconstructible from" their dyadic projections.
     −
Suppose R c XxYxZ is an arbitrary triadic relation and consider the information about R that is provided by collecting its dyadic projections.  To formalize this information define the "projective triple" of R as follows:
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
+
|+ style="height:30px" | <math>\text{Table 72.1} ~~ \text{Sign Relation of Interpreter A}~\!</math>
Proj (R)  = Pr2(R)  =  <Pr12(R), Pr13(R), Pr23(R)>.
+
|- style="height:40px; background:#f0f0ff"
 
+
| width="33%" | <math>\text{Object}\!</math>
If R is visualized as a solid body in the 3 dimensional space XxYxZ, then Proj (R) can be visualized as the arrangement or ordered collection of shadows it throws on the XY, XZ, and YZ planes, respectively.
+
| width="33%" | <math>\text{Sign}\!</math>
 
+
| width="33%" | <math>\text{Interpretant}\!</math>
There are a couple of set theoretic constructions that are useful here, in particular for describing the source and target domains of Proj.
+
|-
 
+
| valign="bottom" |
1. The set of subsets of a set S is called the "power set" of S.  This object is denoted by either of the forms "Pow (S)" or "2S" and defined as follows:
+
<math>\begin{matrix}
 
+
\text{A}
Pow (S)  =  2S  =  {T : T c S}.
+
\\
 
+
\text{A}
The power set notation can be used to provide an alternative description of relations.  In the case where S is a cartesian product, say S = X1x...xXn, then each n place relation described as a subset of S, say as R c X1x...xXn, is equally well described as an element of Pow (S), in other words, as R C Pow (X1x...xXn).
+
\\
 
+
\text{A}
2. The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre arranged order from a triple of three sets <X, Y, Z>, is called the "dyadic explosion" of XxYxZ.  This object is denoted by "Explo (X, Y, Z; 2)", read as the "explosion of XxYxZ by 2s", or more simply as "X, Y, Z, choose 2", and defined as follows:
+
\\
 
+
\text{A}
Explo (X, Y, Z; 2)  =  Pow (XxY) x Pow (XxZ) x Pow (YxZ).
+
\end{matrix}</math>
 
+
| valign="bottom" |
This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.
+
<math>\begin{matrix}
 
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
By means of these constructions the operation that forms Proj (R) for each triadic relation R c XxYxZ can be expressed as a function:
+
\\
 
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
Proj : Pow (XxYxZ)  > Explo (X, Y, Z; 2).
+
\\
 
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
In this setting, the issue of whether triadic relations are "reducible to" or "reconstructible from" their dyadic projections, both in general and in specific cases, can be identified with the question of whether Proj is injective.  The mapping Proj : Pow (XxYxZ)  > Explo (X, Y, Z; 2) is said to "preserve information" about the triadic relations R C Pow (XxYxZ) if and only if it is injective, otherwise one says that some loss of information has occurred in taking the projections.  Given a specific instance of a triadic relation R C Pow (XxYxZ), it can be said that R is "determined by" ("reducible to" or "reconstructible from") its dyadic projections if and only if Proj 1(Proj (R)) is the singleton set {R}.  Otherwise, there exists an R' such that Proj (R) = Proj (R'), and in this case R is said to be "irreducibly triadic" or "genuinely triadic".  Notice that irreducible or genuine triadic relations, when they exist, naturally occur in sets of two or more, the whole collection of them being equated or confounded with one another under Proj.
+
\\
 
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
The next series of Tables illustrates the operation of Proj by means of its actions on the sign relations A and B.  For ease of reference, Tables 69.1 and 70.1 repeat the contents of Tables 1 and 2, respectively, while the dyadic relations comprising Proj (A) and Proj (B) are shown in Tables 69.2 to 69.4 and Tables 70.2 to 70.4, respectively.
+
\end{matrix}</math>
 
+
| valign="bottom" |
Table 69.1 Sign Relation of Interpreter A
+
<math>\begin{matrix}
Object Sign Interpretant
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
A "A" "A"
+
\\
A "A" "i"
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
A "i" "A"
+
\\
A "i" "i"
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
B "B" "B"
+
\\
B "B" "u"
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
B "u" "B"
+
\end{matrix}</math>
B "u" "u"
+
|-
 
+
| valign="bottom" |
Table 69.2  Dyadic Projection AOS
+
<math>\begin{matrix}
Object Sign
+
\text{B}
A "A"
+
\\
A "i"
+
\text{B}
B "B"
+
\\
B "u"
+
\text{B}
 
+
\\
Table 69.3  Dyadic Projection AOI
+
\text{B}
Object Interpretant
+
\end{matrix}</math>
A "A"
+
| valign="bottom" |
A "i"
+
<math>\begin{matrix}
B "B"
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
B "u"
+
\\
 
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
Table 69.4  Dyadic Projection ASI
+
\\
Sign Interpretant
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
"A" "A"
+
\\
"A" "i"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
"i" "A"
+
\end{matrix}</math>
"i" "i"
+
| valign="bottom" |
"B" "B"
+
<math>\begin{matrix}
"B" "u"
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
"u" "B"
+
\\
"u" "u"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 
+
\\
Table 70.1  Sign Relation of Interpreter B
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
Object Sign Interpretant
+
\\
A "A" "A"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
A "A" "u"
+
\end{matrix}</math>
A "u" "A"
+
|}
A "u" "u"
  −
B "B" "B"
  −
B "B" "i"
  −
B "i" "B"
  −
B "i" "i"
  −
 
  −
Table 70.2  Dyadic Projection BOS
  −
Object Sign
  −
A "A"
  −
A "u"
  −
B "B"
  −
B "i"
  −
 
  −
Table 70.3  Dyadic Projection BOI
  −
Object Interpretant
  −
A "A"
  −
A "u"
  −
B "B"
  −
B "i"
  −
 
  −
Table 70.4  Dyadic Projection BSI
  −
Sign Interpretant
  −
"A" "A"
  −
"A" "u"
  −
"u" "A"
  −
"u" "u"
  −
"B" "B"
  −
"B" "i"
  −
"i" "B"
  −
"i" "i"
  −
 
  −
A comparison of the corresponding projections in Proj (A) and Proj (B) shows that the distinction between the triadic relations A and B is preserved by Proj, and this circumstance allows one to say that this much information, at least, can be derived from the dyadic projections.  However, to say that a triadic relation R C Pow (OxSxI) is reducible in this sense it is necessary to show that no distinct R' C Pow (OxSxI) exists such that Proj (R) = Proj (R'), and this can take a rather more exhaustive or comprehensive investigation of the space Pow (OxSxI).
  −
 
  −
As it happens, each of the relations R C {A, B} is uniquely determined by its projective triple Proj (R).  This can be seen as follows.  Consider any coordinate position <s, i> in the plane SxI.  If <s, i> is not in RSI then there can be no element <o, s, i> in R, therefore we may restrict our attention to positions <s, i> in RSI, knowing that there exist at least |RSI| = 8 elements in R, and seeking only to determine what objects o exist such that <o, s, i> is an element in the objective "fiber" of <s, i>.  In other words, for what o C O is <o, s, i> C PrSI 1(<s, i>)?  The fact that ROS has exactly one element <o, s> for each coordinate s C S and that ROI has exactly one element <o, i> for each coordinate i C I, plus the "coincidence" of it being the same o at any one choice for <s, i>, tells us that R has just the one element <o, s, i> over each point of SxI.  This proves that both A and B are reducible in an informational sense to triples of dyadic relations, that is, they are "dyadically reducible".
  −
</pre>
  −
 
  −
===6.36. Irreducibly Triadic Relations===
     −
<pre>
+
<br>
Most likely, any triadic relation R c XxYxZ that is imposed on arbitrary domains X, Y, Z could find use as a sign relation, provided that it embodies any constraint at all, in other words, so long as it forms a proper subset of its total space, R c XxYxZ.  However, these sorts of uses of triadic relations are not guaranteed to capture the most natural examples of sign relations.
     −
In order to show what an irreducibly triadic relation looks like, this section presents a pair of triadic relations that have the same dyadic projections, and thus cannot be distinguished from each other on this basis alone.  As it happens, these examples of triadic relations can be discussed independently of sign relational concerns, but structures of their general ilk are frequently found arising in signal theoretic applications, and they are undoubtedly closely associated with problems of reliable coding and communication.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 
+
|+ style="height:30px" | <math>\text{Table 72.2} ~~ \text{Dyadic Projection} ~ L(\text{A})_{OS}\!</math>
Tables 71.1 and 72.1 show a pair of irreducibly triadic relations R0 and R1, respectively.  Tables 71.2 to 71.4 and Tables 72.2 to 72.4 show the dyadic relations comprising Proj (R0) and Proj (R1), respectively.
+
|- style="height:40px; background:#f0f0ff"
 
+
| width="50%" | <math>\text{Object}\!</math>
Table 71.1  Relation R0 = {<x, y, z> C B3 : x + y + z = 0}
+
| width="50%" | <math>\text{Sign}\!</math>
x y z
+
|-
0 0 0
+
| valign="bottom" |
0 1 1
+
<math>\begin{matrix}
1 0 1
+
\text{A}
1 1 0
+
\\
 
+
\text{A}
Table 71.2  Dyadic Projection R012
+
\end{matrix}</math>
x y
+
| valign="bottom" |
0 0
+
<math>\begin{matrix}
0 1
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
1 0
+
\\
1 1
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 
+
\end{matrix}</math>
Table 71.3  Dyadic Projection R013
+
|-
x z
+
| valign="bottom" |
0 0
+
<math>\begin{matrix}
0 1
+
\text{B}
1 1
+
\\
1 0
+
\text{B}
 
+
\end{matrix}</math>
Table 71.4  Dyadic Projection R023
+
| valign="bottom" |
y z
+
<math>\begin{matrix}
0 0
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
1 1
+
\\
0 1
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
1 0
+
\end{matrix}</math>
 
+
|}
Table 72.1  Relation R1 = {<x, y, z> C B3 : x + y + z = 1}
  −
x y z
  −
0 0 1
  −
0 1 0
  −
1 0 0
  −
1 1 1
  −
 
  −
Table 72.2  Dyadic Projection R112
  −
x y
  −
0 0
  −
0 1
  −
1 0
  −
1 1
  −
 
  −
Table 72.3  Dyadic Projection R113
  −
x z
  −
0 1
  −
0 0
  −
1 0
  −
1 1
     −
Table 72.4  Dyadic Projection R123
+
<br>
y z
  −
0 1
  −
1 0
  −
0 0
  −
1 1
     −
The relations R0, R1 c B3 are defined by the following equations, with algebraic operations taking place as in GF(2), that is, with 1 + 1 = 0.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 72.3} ~~ \text{Dyadic Projection} ~ L(\text{A})_{OI}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="50%" | <math>\text{Object}\!</math>
 +
| width="50%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
   −
1. The triple <x, y, z> in B3 belongs to R0 iff x + y + z = 0.  Thus, R0 is the set of even parity bit vectors, with x + y = z.
+
<br>
   −
2. The triple <x, y, z> in B3 belongs to R1 iff x + y + z = 1.  Thus, R1 is the set of odd parity bit vectors, with x + y = z + 1.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 
+
|+ style="height:30px" | <math>\text{Table 72.4} ~~ \text{Dyadic Projection} ~ L(\text{A})_{SI}\!</math>
The corresponding projections of Proj (R0) and Proj (R1) are identical.  In fact, all six projections, taken at the level of logical abstraction, constitute precisely the same dyadic relation, isomorphic to the whole of BxB and expressed by the universal constant proposition 1 : BxB  > B.  In summary:
+
|- style="height:40px; background:#f0f0ff"
 
+
| width="50%" | <math>\text{Sign}\!</math>
R012  = R112  = 112  = B2,
+
| width="50%" | <math>\text{Interpretant}\!</math>
R013  = R113  = 113  =  B2,
+
|-
R023  =  R123  =  123  =  B2.
+
| valign="bottom" |
 
+
<math>\begin{matrix}
Thus, R0 and R1 are both examples of irreducibly triadic relations.
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
</pre>
+
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
   −
===6.37. Propositional Types===
+
<br>
   −
<pre>
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
In this section, I describe a formal system of "type expressions" that are analogous to formulas of propositional logic, and I discuss their use as a calculus of predicates for classifying, analyzing, and drawing typical inferences about n place relations, in particular, for reasoning about the results of operations indicated or performed on relations and about the properties of their transformations and combinations.
+
|+ style="height:30px" | <math>\text{Table 73.1} ~~ \text{Sign Relation of Interpreter B}~\!</math>
 
+
|- style="height:40px; background:#f0f0ff"
Definition.  Given a cartesian product XxY, an ordered pair <x, y> C XxY has the type S.T, written <x, y> : S.T, iff x C S c X and y C T c Y.  Notice that an ordered pair can have many types.
+
| width="33%" | <math>\text{Object}\!</math>
 
+
| width="33%" | <math>\text{Sign}\!</math>
Definition.  A relation R c XxY has type S.T, written R : S.T, iff every <x, y> C R has type S.T, that is, iff R c SxT for some S c X and T c Y.
+
| width="33%" | <math>\text{Interpretant}\!</math>
 
+
|-
Notation.  "Barred parentheses", like "(" and ")", will be used in pairs to indicate the negations of propositions and the complements of sets.  When an n place relation R is initially given relative to the domains X1, ... , Xn and a set S is being mentioned as a subset of one of them, say S c Xi, then the "relevant complement" of S in such a context is the one taken relative to Xi, that is:
+
| valign="bottom" |
 
+
<math>\begin{matrix}
(S)  =  S  = (Xi  S).
+
\text{A}
 
+
\\
When there is occasion for ambiguities that are not resolved by context then one must resort to indices on the bars, as "(S)i", or revert to writing out the intended term in full, as "(Xi  S)".
+
\text{A}
 
+
\\
R : (S(T)).
+
\text{A}
+
\\
Fragments
+
\text{A}
 
+
\end{matrix}</math>
Finally, the set of triples of dyadic relations, with pairwise cartesian products chosen in a pre arranged order from a collection of three sets {X, Y, Z}, is called the "dyadic explosion" of {X, Y, Z}.  This object is denoted as "Explo (X, Y, Z; 2)", read as the "explosion of XxYxZ by 2s" or simply as "X, Y, Z, choose 2", and is defined as follows:
+
| valign="bottom" |
 
+
<math>\begin{matrix}
Explo (X, Y, Z; 2)  =  Pow (XxY) x Pow (XxZ) x Pow (YxZ).
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 
+
\\
This domain is defined well enough for now to serve the immediate purposes of this section, but later it will be necessary to examine its construction more closely.
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 
+
\\
Just to provide a hint of what's at stake, consider the suggestive identity,
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 
+
\\
2XY x 2XZ x 2YZ  = 2(XY + XZ + YZ),
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 
+
\end{matrix}</math>
and ask what sense would have to be found for the sums on the right in order to interpret this equation as a set theoretic isomorphism.  Answering this question requires the concept of a "co product", roughly speaking, a "disjointed union" of sets.  By the time this discussion has detailed the forms of indexing necessary to maintain these constructions, it should have become patently obvious that the forms of analysis and synthesis that are called on to achieve the putative "reductions to" and "reconstructions from" dyadic relations in actual fact never really leave the realm of genuinely triadic relations, but merely reshuffle its contents in various convenient fashions.
+
| valign="bottom" |
</pre>
+
<math>\begin{matrix}
 
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
===6.38. Considering the Source===
+
\\
 
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
<pre>
+
\\
There are several ways to contemplate the supplementation of signs, the sorts of augmentation that are crucial to meaning in the case of indices.  Some approaches are analytic, in the sense that they regard signs as derivative compounds and try to break up the unitary concept of an individual sign into a congeries of seemingly more real, more actual, or more determinate sign instances.  Other approaches are synthetic, in the sense that they accept a given collection of signs at face value and try to reconstruct more objective realities through the formation of abstract categories on this basis.
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 
+
\\
1. One analytic method takes it as a maxim for the logic of context that:  "Every sign or text is indexed by the context in which it occurs."  This means that all signs, including indices, are themselves indexed, though initially only tacitly, by the objective situation, the syntactic context, and the actual interpreter that makes use of them.
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 
+
\end{matrix}</math>
To begin formalizing this brand of supplementation, it is necessary to mark salient aspects of the situational, contextual, and inclusively interpretive features of sign usage that were previously held tacit.  In effect, signs once regarded as primitive objects need to be newly analyzed as categorical abstractions that cover multitudes of existential sign instances or "signs in use" (SIUs).
+
|-
 
+
| valign="bottom" |
To develop these dimensions of the A and B dialogue, I will attempt to articulate these interpretive parameters of signs by means of subscripts or superscripts attached to the signs or their quotations, in this way constituting a brand of "situated signs" or "attributed remarks".
+
<math>\begin{matrix}
 
+
\text{B}
The attribution of signs to their activating interpreters preserves the original object domain but produces an expanded syntactic domain, the set of "attributed signs":
+
\\
 
+
\text{B}
O =    { A, B },
+
\\
 
+
\text{B}
S = I = {"A"A, "B"A, "i"A, "u"A, "A"B, "B"B, "i"B, "u"B}.
+
\\
 
+
\text{B}
Table 73 displays the results of indexing every sign of the dialogue between A and B with a superscript indicating its source or "exponent", namely, the interpreter who actively communicates or transmits the sign.  Ostensibly, the operation of attribution produces two new sign relations for A and B, but it turns out that both sign relations have the same form and content, so a single Table will do.  The new sign relation generated by this operation will be denoted as "At (A, B)" and called the "attributed sign relation" for A and B.
+
\end{matrix}</math>
 
+
| valign="bottom" |
Table 73.  Attributed Sign Relation for Interpreters A & B
+
<math>\begin{matrix}
Object Sign Interpretant
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
A "A"A "A"A
+
\\
A "A"A "A"B
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
A "A"A "i"A
+
\\
A "A"A "u"B
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
A "A"B "A"A
+
\\
A "A"B "A"B
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
A "A"B "i"A
+
\end{matrix}</math>
A "A"B "u"B
+
| valign="bottom" |
A "i"A "A"A
+
<math>\begin{matrix}
A "i"A "A"B
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
A "i"A "i"A
+
\\
A "i"A "u"B
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
A "u"B "A"A
+
\\
A "u"B "A"B
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
A "u"B "i"A
+
\\
A "u"B "u"B
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
B "B"A "B"A
+
\end{matrix}</math>
B "B"A "B"B
+
|}
B "B"A "i"B
  −
B "B"A "u"A
  −
B "B"B "B"A
  −
B "B"B "B"B
  −
B "B"B "i"B
  −
B "B"B "u"A
  −
B "i"B "B"A
  −
B "i"B "B"B
  −
B "i"B "i"B
  −
B "i"B "u"A
  −
B "u"A "B"A
  −
B "u"A "B"B
  −
B "u"A "i"B
  −
B "u"A "u"A
  −
  −
Thus informed, the SER for interpreter A yields the semantic equations:
  −
 
  −
["A"A]A  =   ["A"B]A  =  ["i"A]A  =  ["u"B]A,
  −
 
  −
or "A"A    =A  "A"B     =A  "i"A    =A  "u"B.
  −
 
  −
In comparison, the SER for interpreter B yields the semantic equations:
  −
 
  −
["A"A]B  =  ["A"B]B  =  ["i"A]B  =  ["u"B]B,
  −
 
  −
or "A"A    =B  "A"B    =B  "i"A    ="u"B.
  −
 
  −
Consequently, both SERs now induce the same semantic partition on S:
  −
 
  −
{{ "A"A, "A"B, "i"A, "u"B }, { "B"A, "B"B, "i"B, "u"A }}.
  −
 
  −
By means of a simple attribution step a certain level of congruity has been reached in the community of interpretation constituted by A and B.  This newfound agreement on what is abstractly a single SER means that its equivalence classes reconstruct the structure of the object domain within the parts of the SEP.  This allows a measure of objectivity or inter subjectivity to be predicated of the sign relation's representation. 
  −
 
  −
An instance of Y using "X" is considered to be an objective event, the kind of happening to which all suitably placed observers can point, and adverting to an occurrence of "X"Y is more specific and less vague than resorting to instances of "X" as if being issued by anonymous sources.  The situated sign "X"Y comprises a "wider sign" than "X" in the sense that it takes in a broader field of view on the interpretive situation and provides more information about the context of use.  As to the reception of attributed remarks, the interpreter that can recognize signs of the form "X"Y is one that knows what it means to "consider the source".
  −
 
  −
It is best to read the superscripts on attributed signs as accentuations and integral parts of the quotation marks, taking ("..."A) and ("..."B) as variant inflections of ("...").  Thus, I can refer to the sign "X"Y just as I would refer to the sign "X" in the present informal context (PIC), without any additional marks of quotation.
  −
 
  −
Taking a cue from this usage, the ordinary quotes that I use to mark salient relationships of signs and expressions with respect to the informal context can now be regarded as quotes that I myself, operating as a casual interpreter, tacitly index.  Even without knowing the complete sign relation that I have in mind, the one that I presumably use to conduct this discussion, the sign relation that "I" represents can nevertheless be partially formalized by means of a certain functional equation, namely, the equation between semantic functions:  "..." = "..."I.
  −
 
  −
By way of vocal expression, the attributed sign "X"Y can be pronounced as '"X" quoth Y ' or '"X" used by Y '.  To facilitate visual imagery, each token of the type "X"Y can be pictured as a specific occasion where the sign "X" is being used or issued by the interpreter Y.
     −
There is one remaining form of useful continuity that can be established between these newly formalized inventions and the ordinary conventions of common practice that are customary to apply in the informal context.  Conforming to the ascriptions made above, I revive an old usage for framing interjections and enunciate the quotation "X"I as '"X" quotha '.  Readers who find this custom too curious for words might consider the twofold origins of inquiry and interpretation, one in the virtue of addressing uncertainty and another in the acknowledgment of surprise.
+
<br>
   −
The construal of objects as classes of attributed signs leads to a measure of intersubjective agreement between the interpreters A and B.  Something like this must be the goal of any system of communication, and analogous forms of congruity and gregarity are likely to be found in any system for establishing mutually intelligible responses and maintaining socially coordinated practices.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 
+
|+ style="height:30px" | <math>\text{Table 73.2} ~~ \text{Dyadic Projection} ~ L(\text{B})_{OS}\!</math>
Nevertheless, the particular types of "analytic" solutions that were proposed for resolving the conflict of interpretations between A and B are conceptually unsatisfactory in several ways.  The constructions instituted retain the quality of hypotheses, especially due to the level of speculation about fundamental objects that is required to support them.  There remains something fictional and imaginary about the nature of the object instances that are posited to form the ontological infrastructure, the supposedly more determinate strata of being that are presumed to anchor the initial objects of discussion.
+
|- style="height:40px; background:#f0f0ff"
 
+
| width="50%" | <math>\text{Object}\!</math>
Founding objects on a particular selection of object instances is always initially an arbitrary choice, a meet response to a judgment call and a responsibility that cannot be avoided, but still a bit of guesswork that needs to be tested for its reality in practice (RIP).
+
| width="50%" | <math>\text{Sign}\!</math>
 
+
|-
This means that the postulated objects of objects cannot have their reality probed and proved in detail but can be evaluated only in terms of their conceivable practical effects.
+
| valign="bottom" |
 
+
<math>\begin{matrix}
2. One synthetic method ...
+
\text{A}
 
+
\\
Suppose now that each of the agents A and B reflects on the situational context of their discussion and observes on every occasion of utterance exactly who is saying what.  By this critically reflective operation of "considering the source" each interpreter is empowered to create, in effect, an "extended token" or "situated sign" out of each utterance by indexing it with the proper name of its utterer.  Though it arises by reflection, the augmented sign is not a higher order of abstraction so much as a restoration or reconstitution of what was lost by abstracting the sign from the signer in the first instance.
+
\text{A}
 
+
\end{matrix}</math>
In order to continue the development of this example, I need to employ a more precise system of marking quotations in order to keep track of who says what, and in what kinds of context.  To help with this, I use pairs of raised angle brackets (<...>) on a par with ordinary quotation marks ("...") to call attention to pieces of text as signs or expressions.  The angle quotes are especially useful for embedded quotations and for text regarded as used or mentioned by interpreters other than myself, for instance, by the fictional characters A and B.  Whenever possible, I save ordinary quotes for the outermost level, the one that interfaces with the context of informal discussion.
+
| valign="bottom" |
 
+
<math>\begin{matrix}
A notation like "<<A>, B, C>" is intended to indicate the construction of an extended (attributed, indexed, or situated) sign, in this case, by enclosing an initial sign "A" in a contextual envelope "<< >, , >" and inscribing it with relevant items of situational data, as represented by the signs "B" and "C".  When a salient component of the situational data represents an observation of the agent B communicating the sign "A", then the compressed form "<<A>B, C>" can be used to mark this fact.  If there is no addiutional contextual information beyond the marking of its source, then the form "<<A>B>" suffices to say that B said "A".
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 
+
\\
With this last modification, "angle quotes" become like "ascribed quotes" or "attributed remarks", indexed with the name of the interpretive agent that issued the message in question.  In sum, the notation "<<A>B>" is intended to situate the sign "A" in the context of its contemplated use, and to index the sign "A" with the name of the interpreter that is considered to be using it on a given occasion.
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 
+
\end{matrix}</math>
The notation "<<A>B>", read "<A> quoth B" or "<A> used by B", is an expression that indicates the use of the sign "A" by the interpreter B.  The expression inside the outer quotes is refered to as an "indexed quotation", since it is indexed by the name of the interpreter to which it is referred.  Since angle quotes with a blank index are equivalent to ordinary quotes, "<A>B" = <<A>B>?
+
|-
 
+
| valign="bottom" |
Enclosing a piece of text with raised angle brackets and following it with the name of an interpreter is intended to call to mind ...
+
<math>\begin{matrix}
 
+
\text{B}
Object domain:
+
\\
 
+
\text{B}
O = { A, B }
+
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
   −
Indexed syntactic domain or extended sign system:
+
<br>
   −
S = { "[<A>]A", "[<B>]A", "[<i>]A", "[<u>]A",
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
  "[<A>]B", "[<B>]B", "[<i>]B", "[<u>]B" }
+
|+ style="height:30px" | <math>\text{Table 73.3} ~~ \text{Dyadic Projection} ~ L(\text{B})_{OI}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="50%" | <math>\text{Object}\!</math>
 +
| width="50%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
   −
The situated sign or indexed expression "[<A>]B" presents the sign or expression "A" as used by the interpreter B.  In other words, the sign is indexed by the name of an interpreter to indicate a use of that sign by that interpreter.  Thus, "[<A>]B" augments "A" into a new and more complete sign by including additional information about the context of its transmission, in particular, by the consideration of its source.
+
<br>
   −
Table 74.  Adequated Sign Relation for Interpreters A & B
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
Object Sign Interpretant
+
|+ style="height:30px" | <math>\text{Table 73.4} ~~ \text{Dyadic Projection} ~ L(\text{B})_{SI}\!</math>
A "[<A>]A" "[<A>]A"
+
|- style="height:40px; background:#f0f0ff"
A "[<A>]A" "[<A>]B"
+
| width="50%" | <math>\text{Sign}\!</math>
A "[<A>]A" "[<i>]A"
+
| width="50%" | <math>\text{Interpretant}\!</math>
A "[<A>]A" "[<u>]B"
+
|-
A "[<A>]B" "[<A>]A"
+
| valign="bottom" |
A "[<A>]B" "[<A>]B"
+
<math>\begin{matrix}
A "[<A>]B" "[<i>]A"
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
A "[<A>]B" "[<u>]B"
+
\\
A "[<i>]A" "[<A>]A"
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
A "[<i>]A" "[<A>]B"
+
\\
A "[<i>]A" "[<i>]A"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
A "[<i>]A" "[<u>]B"
+
\\
A "[<u>]B" "[<A>]A"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
A "[<u>]B" "[<A>]B"
+
\end{matrix}</math>
A "[<u>]B" "[<i>]A"
+
| valign="bottom" |
A "[<u>]B" "[<u>]B"
+
<math>\begin{matrix}
B "[<B>]A" "[<B>]A"
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
B "[<B>]A" "[<B>]B"
+
\\
B "[<B>]A" "[<i>]B"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
B "[<B>]A" "[<u>]A"
+
\\
B "[<B>]B" "[<B>]A"
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
B "[<B>]B" "[<B>]B"
+
\\
B "[<B>]B" "[<i>]B"
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
B "[<B>]B" "[<u>]A"
+
\end{matrix}</math>
B "[<i>]B" "[<B>]A"
+
|-
B "[<i>]B" "[<B>]B"
+
| valign="bottom" |
B "[<i>]B" "[<i>]B"
+
<math>\begin{matrix}
B "[<i>]B" "[<u>]A"
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
B "[<u>]A" "[<B>]A"
+
\\
B "[<u>]A" "[<B>]B"
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
B "[<u>]A" "[<i>]B"
+
\\
B "[<u>]A" "[<u>]A"
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
</pre>
+
\\
 
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
===6.39. Prospective Indices : Pointers to Future Work===
+
\end{matrix}</math>
 
+
| valign="bottom" |
In the effort to unify dynamical, connectionist, and symbolic approaches to intelligent systems, indices supply important stepping stones between the sorts of signs that remain bound to circumscribed theaters of action and the kinds of signs that can function globally as generic symbols.  Current technology presents an array of largely accidental discoveries that have been brought into being for implementing indexical systems.  Bringing systematic study to bear on this variety of accessory devices and trying to discern within the wealth of incidental features their essential principles and effective ingredients could help to improve the traction this form of bridge affords.
+
<math>\begin{matrix}
 
+
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
In the points where this project addresses work on the indexical front, a primary task is to show how the ''actual connections'' promised by the definition of indexical signs can be translated into system-theoretic terms and implemented by means of the class of ''dynamic connections'' that can persist in realistic systems.
+
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
   −
An offshoot of this investigation would be to explore how indices like pointer variables could be realized within &ldquo;connectionist&rdquo; systems.  There is no reason in principle why this cannot be done, but I think that pragmatic reasons and practical success will force the contemplation of higher orders of connectivity than those currently fashioned in two-dimensional arrays of connections.  To be specific, further advances will require the generative power of genuinely triadic relations to be exploited to the fullest possible degree.
+
<br>
   −
To avert one potential misunderstanding of what this entails, computing with triadic relations is not really a live option unless the algebraic tools and logical calculi needed to do so are developed to greater levels of facility than they are at presentMerely officiating over the storage of &ldquo;dead letters&rdquo; in higher dimensional arrays will not do the trick.  Turning static sign relations into the orders of dynamic sign processes that can support live inquiries will demand new means of representation and new methods of computation.
+
A comparison of the corresponding projections in <math>\mathrm{Proj}^{(2)} L(\text{A})\!</math> and <math>\mathrm{Proj}^{(2)} L(\text{B})\!</math> shows that the distinction between the triadic relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> is preserved by <math>\mathrm{Proj}^{(2)},\!</math> and this circumstance allows one to say that this much information, at least, can be derived from the dyadic projections.  However, to say that a triadic relation <math>L \in \mathrm{Pow} (O \times S \times I)\!</math> is reducible in this sense, it is necessary to show that no distinct <math>L' \in \mathrm{Pow} (O \times S \times I)\!</math> exists such that <math>\mathrm{Proj}^{(2)} L = \mathrm{Proj}^{(2)} L',\!</math> and this can require a rather more exhaustive or comprehensive investigation of the space <math>\mathrm{Pow} (O \times S \times I).\!</math>
   −
To fulfill their intended roles, a formal calculus for sign relations and the associated implementation must be able to address and restore the full dimensionalities of the existential and social matrices in which inquiry takes placeInformational constraints that define objective situations of interest need to be freed from the locally linear confines of the &ldquo;dia-matrix&rdquo; and reposted within the realm of the &ldquo;tri-matrix&rdquo;, that is, reconstituted in a manner that allows critical reflection on their form and content.
+
As it happens, each of the relations <math>L = L(\text{A})\!</math> or <math>L = L(\text{B})\!</math> is uniquely determined by its projective triple <math>\mathrm{Proj}^{(2)} L.\!</math> This can be seen as follows.
   −
The descriptive and conceptual architectures needed to frame this task must allow space for interlacing forms of &ldquo;open work&rdquo;, projects that anticipate the desirability of higher order relations and build in the capability for higher order reflections at the very beginning, and do not merely hope against hope to arrange these capacities as afterthoughts.
+
Consider any coordinate position <math>(s, i)\!</math> in the plane <math>S \times I.\!</math>  If <math>(s, i)\!</math> is not in <math>L_{SI}\!</math> then there can be no element <math>(o, s, i)\!</math> in <math>L,\!</math> therefore we may restrict our attention to positions <math>(s, i)\!</math> in <math>L_{SI},\!</math> knowing that there exist at least <math>|L_{SI}| = 8\!</math> elements in <math>L,\!</math> and seeking only to determine what objects <math>o\!</math> exist such that <math>(o, s, i)\!</math> is an element in the objective ''fiber'' of <math>(s, i).\!</math>  In other words, for what <math>{o \in O}\!</math> is <math>(o, s, i) \in \mathrm{proj}_{SI}^{-1}((s, i))?\!</math>  The fact that <math>L_{OS}\!</math> has exactly one element <math>(o, s)\!</math> for each coordinate <math>s \in S\!</math> and that <math>L_{OI}\!</math> has exactly one element <math>(o, i)\!</math> for each coordinate <math>i \in I,\!</math> plus the &ldquo;coincidence&rdquo; of it being the same <math>o\!</math> at any one choice for <math>(s, i),\!</math> tells us that <math>L\!</math> has just the one element <math>(o, s, i)\!</math> over each point of <math>S \times I.\!</math>  This proves that both <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> are reducible in an informational sense to triples of dyadic relations, that is, they are ''dyadically reducible''.
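To make the check concrete, here is a minimal Python sketch (illustrative only, not part of the original text) that rebuilds each of <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> from its three dyadic projections, assuming the eight-triple listings of those sign relations used throughout this example.

<pre>
# Illustrative sketch: rebuild L(A) and L(B) from their dyadic projections.
# The eight-triple sign relations are assumed as listed earlier in the text.

def sign_relation(interp):
    if interp == 'A':
        den = {'A': ['"A"', '"i"'], 'B': ['"B"', '"u"']}
    else:
        den = {'A': ['"A"', '"u"'], 'B': ['"B"', '"i"']}
    return {(o, s, i) for o, signs in den.items() for s in signs for i in signs}

for interp in ('A', 'B'):
    L3 = sign_relation(interp)
    L_OS = {(o, s) for (o, s, i) in L3}
    L_OI = {(o, i) for (o, s, i) in L3}
    L_SI = {(s, i) for (o, s, i) in L3}
    objects = {o for (o, s, i) in L3}
    rebuilt = set()
    for (s, i) in L_SI:
        # Objective fiber of (s, i): objects consistent with both L_OS and L_OI.
        fiber = {o for o in objects if (o, s) in L_OS and (o, i) in L_OI}
        assert len(fiber) == 1          # exactly one object over each (s, i)
        rebuilt.add((fiber.pop(), s, i))
    assert rebuilt == L3                # the projections determine L uniquely
    print('L(' + interp + ') is dyadically reducible')
</pre>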
   −
===6.40. Dynamic and Evaluative Frameworks===
+
===6.36. Irreducibly Triadic Relations===
   −
<pre>
+
Most likely, any triadic relation <math>L \subseteq X \times Y \times Z\!</math> imposed on arbitrary domains <math>X, Y, Z\!</math> could find use as a sign relation, provided it embodies any constraint at all, in other words, so long as it forms a proper subset of its total space, a relationship symbolized by writing <math>L \subset X \times Y \times Z.\!</math> However, triadic relations of this sort are not guaranteed to form the most natural examples of sign relations.
The sign relations A and B are lacking in several dimensions of realistic properties that would ordinarily be more fully developed in the kinds of sign relations that are found to be involved in inquiryThis section initiates a discussion of two such dimensions, the "dynamic" and the "evaluative" aspects of sign relations, and it treats the materials that are organized along these lines at two broad levels, either "within" or "between" particular examples of sign relations.
     −
1. The "dynamic dimension" deals with change.  Thus, it details the forms of diversity that sign relations distribute in a temporal process.  It is concerned with the transitions that take place from element to element within a sign relation and also with the changes that take place from one whole sign relation to another, thereby generating various types and levels of "sign process".
+
In order to show what an irreducibly triadic relation looks like, this Section presents a pair of triadic relations that have the same dyadic projections, and thus cannot be distinguished from each other on this basis alone.  As it happens, these examples of triadic relations can be discussed independently of sign relational concerns, but structures of their general ilk are frequently found arising in signal-theoretic applications, and they are undoubtedly closely associated with problems of reliable coding and communication.
   −
2. The "evaluative dimension" deals with goals. Thus, it details the forms of diversity that sign relations contribute to a definite purpose. It is concerned with the comparisons that can be made on a scale of values between the elements within a sign relation and also between whole sign relations themselves, with a view toward deciding which is better for a "designated purpose".
+
Tables&nbsp;74.1 and 75.1 show a pair of irreducibly triadic relations <math>L_0\!</math> and <math>L_1,\!</math> respectively.  Tables&nbsp;74.2 to 74.4 and Tables&nbsp;75.2 to 75.4 show the dyadic relations comprising <math>\mathrm{Proj}^{(2)} L_0\!</math> and <math>\mathrm{Proj}^{(2)} L_1,\!</math> respectively.
   −
At the primary level of analysis, one is concerned with the application of these two dimensions "within" particular sign relations.  At every subsequent level of analysis, one deals with the dynamic transitions and evaluative comparisons that can be contemplated "between" particular sign relations.  In order to cover all these dimensions, types, and levels of diversity in a unified way, there is need for a substantive term that can allow one to indicate any of the above objects of discussion and thought — including elements of sign relations, particular sign relations, and states of systems — and to regard it as an "object, sign, or state in a certain stage of construction".  I will use the word "station" for this purpose.
+
<br>
   −
In order to organize the discussion of these two dimensions, both within and between particular sign relations, and to coordinate their ordinary relation to each other in practical situations, it pays to develop a combined form of "dynamic evaluative framework" (DEF), similar in design and utility to the objective frameworks set up earlier.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
 +
|+ style="height:30px" | <math>\text{Table 74.1} ~~ \text{Relation} ~ L_0 =\{ (x, y, z) \in \mathbb{B}^3 : x + y + z = 0 \}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>x\!</math>
 +
| width="33%" | <math>y\!</math>
 +
| width="33%" | <math>z\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\0\\1\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\0\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\1\\0\end{matrix}</math>
 +
|}
   −
A DEF encompasses two dimensions of comparison between stations:
+
<br>
   −
1. A "dynamic" dimension, as swept out by a process of changing stations, permits comparison between stations in terms of before and after on a scale of temporal order.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 74.2} ~~ \text{Dyadic Projection} ~ (L_0)_{12}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>x\!</math>
 +
| width="33%" | <math>y\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\0\\1\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\0\\1\end{matrix}</math>
 +
|}
   −
A terminal station on a dynamic dimension is called a "stable" station.
+
<br>
   −
2. An "evaluative" dimension permits comparison between stations on a scale of values.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 74.3} ~~ \text{Dyadic Projection} ~ (L_0)_{13}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>x\!</math>
 +
| width="33%" | <math>z\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\0\\1\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\1\\0\end{matrix}</math>
 +
|}
   −
A terminal station on an evaluative dimension is called a "standard" or "canonical" station.
+
<br>
   −
A station that is both stable and standard is called a "normal" station.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 74.4} ~~ \text{Dyadic Projection} ~ (L_0)_{23}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>y\!</math>
 +
| width="33%" | <math>z\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\0\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\1\\0\end{matrix}</math>
 +
|}
   −
Consider the following analogies or correspondences that exist between different orders of sign relational structure:
+
<br>
   −
1. Just as a sign represents its object and becomes associated with more or less equivalent signs in the minds of interpretive agents, the corpus of signs that embodies a SOI represents in a collective way its own proper object, intended objective, or "try at objectivity" (TAO).
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:75%"
 +
|+ style="height:30px" | <math>\text{Table 75.1} ~~ \text{Relation} ~ L_1 =\{ (x, y, z) \in \mathbb{B}^3 : x + y + z = 1 \}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>x\!</math>
 +
| width="33%" | <math>y\!</math>
 +
| width="33%" | <math>z\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\0\\1\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\0\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}1\\0\\0\\1\end{matrix}</math>
 +
|}
   −
2. Just as the relationship of a sign to its semantic objects and interpretive associates can be formalized within a single sign relation, the relation of a dynamically changing SOI to its reference environment, developmental goals, and desired characteristics of interpretive performance can be formalized by means of a higher order sign relation, one that further establishes a grounds of comparison for relating the growing SOI, not only to its former and future selves, but to a diverse company of other SOIs.
+
<br>
   −
From an outside perspective the distinction between a sign and its object is usually regarded as obvious, though agents operating in the thick of a SOI often act as though they cannot see the difference.  Nevertheless, as a rule in practice, a sign is not a good thing to be confused with its object.  Even in the rare and usually controversial cases where an identity of substance is contemplated, usually only for the sake of argument, there is still a distinction of roles to be maintained between the sign and its object.  Just so, ...
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 75.2} ~~ \text{Dyadic Projection} ~ (L_1)_{12}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>x\!</math>
 +
| width="33%" | <math>y\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\0\\1\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\0\\1\end{matrix}</math>
 +
|}
   −
Although there are aspects of inquiry processes that operate within a single sign relation, the characteristic features of inquiry do not come into full bloom until one considers the whole diversity of dynamically developing sign relations.  Because it will be some time before this discussion acquires the formal power it needs to deal with higher order sign relations, these issues will need to be treated on an informal basis as they arise, and often in a cursory and ad hoc manner.
+
<br>
</pre>
     −
===6.41. Elective and Motive Forces===
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 75.3} ~~ \text{Dyadic Projection} ~ (L_1)_{13}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>x\!</math>
 +
| width="33%" | <math>z\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\0\\1\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}1\\0\\0\\1\end{matrix}</math>
 +
|}
   −
<pre>
+
<br>
In other ways the example of A and B, in the fragmentary aspects of their dialogue presented so far, is unrealistic in its simplification of semantic issues, lacking a full development of many kinds of attributes that almost always become significant in situations of practical interest.  Just to mention two related features of importance to inquiry that are missing from this example, there is no sense of directional process and no dimension of differential value defined either within or between the semantic equivalence classes.
     −
When there is a clear sense of dynamic tendency or purposeful direction driving the passage from signs to interpretants in the connotative project of a sign relation, then the study moves from sign relations, statically viewed, to genuine sign processes. In the pragmatic theory of signs, such processes are usually dignified with the name "semiosis", and their systematic investigation is called "semiotics".
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:50%"
 +
|+ style="height:30px" | <math>\text{Table 75.4} ~~ \text{Dyadic Projection} ~ (L_1)_{23}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>y\!</math>
 +
| width="33%" | <math>z\!</math>
 +
|-
 +
| valign="bottom" | <math>\begin{matrix}0\\1\\0\\1\end{matrix}</math>
 +
| valign="bottom" | <math>\begin{matrix}1\\0\\0\\1\end{matrix}</math>
 +
|}
   −
Further, when this dynamism or purpose is consistent and confluent with a differential value system defined on the syntactic domain, then the sign process in question becomes a candidate for the kind of clarity gaining, canon seeking process, capable of supporting learning and reasoning, that I classify as an "inquiry driven system".
+
<br>
   −
There is a mathematical "turn of thought" that I will often take in discussing these kinds of issues.  Instead of saying that a system has no attribute of a particular type, I will say that it has the attribute, but in a trivial sense or degenerate manner.  This is merely a strategy of classification that allows one to include null cases in a taxonomy and to make use of continuity arguments in passing from case to case in a class of examples.  Viewed in this way, each of the sign relations A or B can be taken to exhibit a trivial dynamic process and a trivial standard of value defined on the syntactic domain.
+
The relations <math>L_0, L_1 \subseteq \mathbb{B}^3\!</math> are defined by the following equations, with algebraic operations taking place as in <math>\text{GF}(2),\!</math> that is, with <math>1 + 1 = 0.\!</math>
</pre>
     −
===6.42. Sign Processes : A Start===
+
# The triple <math>(x, y, z)\!</math> in <math>\mathbb{B}^3\!</math> belongs to <math>L_0\!</math> if and only if <math>{x + y + z = 0}.\!</math>  Thus, <math>L_0\!</math> is the set of even-parity bit vectors, with <math>x + y = z.\!</math>
 +
# The triple <math>(x, y, z)\!</math> in <math>\mathbb{B}^3\!</math> belongs to <math>L_1\!</math> if and only if <math>{x + y + z = 1}.\!</math>  Thus, <math>L_1\!</math> is the set of odd-parity bit vectors, with <math>x + y = z + 1.\!</math>
   −
To articulate the dynamic aspects of a sign relation, one can interpret it as determining a discrete or finite state transition system.  In the usual ways of doing this, the states of the system are given by the elements of the syntactic domain, while the elements of the object domain correspond to input data or control parameters that affect transitions from signs to interpretant signs in the syntactic state space.
+
The corresponding projections of <math>\mathrm{Proj}^{(2)} L_0\!</math> and <math>\mathrm{Proj}^{(2)} L_1\!</math> are identical.  In fact, all six projections, taken at the level of logical abstraction, constitute precisely the same dyadic relation, isomorphic to the whole of <math>\mathbb{B} \times \mathbb{B}\!</math> and expressed by the universal constant proposition <math>1 : \mathbb{B} \times \mathbb{B} \to \mathbb{B}.\!</math>  In summary:
   −
Working from these principles alone, there are numerous ways that a plausible dynamics can be invented for a given sign relation.  I will concentrate on two principal forms of dynamic realization, or two ways of interpreting and augmenting sign relations as sign processes.
+
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{array}{lllll}
 +
(L_0)_{12} & = & (L_1)_{12} & \cong & \mathbb{B}^2
 +
\\[4pt]
 +
(L_0)_{13} & = & (L_1)_{13} & \cong & \mathbb{B}^2
 +
\\[4pt]
 +
(L_0)_{23} & = & (L_1)_{23} & \cong & \mathbb{B}^2
 +
\end{array}</math>
 +
|}
   −
<pre>
+
Thus, <math>L_0\!</math> and <math>L_1\!</math> are both examples of irreducibly triadic relations.
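A short Python check (illustrative only) of this claim follows directly from the defining parity conditions above: the two relations differ, yet every one of their dyadic projections is the whole of <math>\mathbb{B} \times \mathbb{B}.\!</math>

<pre>
# Illustrative sketch: L0 (even parity) and L1 (odd parity) in B^3 have
# identical dyadic projections, so the projections cannot distinguish them.

from itertools import product

B = (0, 1)
L0 = {t for t in product(B, repeat=3) if sum(t) % 2 == 0}
L1 = {t for t in product(B, repeat=3) if sum(t) % 2 == 1}

def proj(L3, j, k):
    """Dyadic projection of a triadic relation onto coordinates j and k."""
    return {(t[j], t[k]) for t in L3}

full = set(product(B, repeat=2))        # the whole of B x B
pairs = [(0, 1), (0, 2), (1, 2)]
assert L0 != L1
assert all(proj(L0, j, k) == proj(L1, j, k) == full for j, k in pairs)
print('All six projections coincide with B x B.')
</pre>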
One form of realization lets each element of the object domain O correspond to the observed presence of an object in the environment of the systematic agent.  In this interpretation, the object X acts as an input datum that causes the system Y to shift from whatever sign state it happens to occupy at a given moment to a random sign state in [X]Y.  Expressed in a cognitive vein, "Y notes X".
     −
Another form of realization lets each element of the object domain O correspond to the autonomous intention of the systematic agent to denote an object, achieve an objective, or broadly speaking to accomplish any other purpose with respect to an object in its domain. In this interpretation, the object X is a control parameter that brings the system Y into line with realizing a target set [X]Y.
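A minimal Python sketch (illustrative only) of the first, cognitive form of realization may help: on observing an object X, the system jumps from whatever sign state it occupies to a randomly chosen sign state in [X]Y.  The class listings assumed below are the semantic equivalence classes of this example.

<pre>
# Illustrative sketch of the cognitive realization: "Y notes X" sends the
# system to a random sign state in the class [X]_Y, ignoring its prior state.

import random

classes = {
    'A': {'A': ['"A"', '"i"'], 'B': ['"B"', '"u"']},
    'B': {'A': ['"A"', '"u"'], 'B': ['"B"', '"i"']},
}

def note(interpreter, obj, state):
    # The prior state is deliberately ignored: the transition is
    # non-deterministic only within the semantic equivalence class of obj.
    return random.choice(classes[interpreter][obj])

state = '"B"'
for obj in ['A', 'A', 'B', 'A']:        # a stream of observed objects
    state = note('A', obj, state)
    print(obj, '->', state)             # the state always lands in [obj]_A
</pre>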
+
===6.37. Propositional Types===
   −
Tables 75 and 76 show how the sign relations for A and B can be filled out as finite state processes in conformity with the interpretive principles just described.  Rather than letting the actions go undefined for some combinations of inputs in O and states in S, transitions have been added that take the interpreters from whatever else they might have been thinking about to the SECs of their objects.  In either modality of realization, cognitive or control oriented, the abstract structure of the resulting sign process is exactly the same.
+
This Section describes a formal system of ''type expressions'' that are analogous to formulas of propositional logic and discusses their use as a calculus of predicates for classifying, analyzing, and drawing typical inferences about <math>k\!</math>-place relations, in particular, for reasoning about the results of operations on relations and about the properties of their transformations and combinations.
   −
Table 75Sign Process of Interpreter A
+
'''Definition.''' Given a cartesian product <math>X \times Y,\!</math> an ordered pair <math>(x, y) \in X \times Y\!</math> has the type <math>S \cdot T,\!</math> written <math>(x, y) : S \cdot T,\!</math> if and only if <math>x \in S \subseteq X\!</math> and <math>y \in T \subseteq Y.\!</math>  Notice that an ordered pair may have many types.
Object Sign Interpretant
  −
A "A" "A"
  −
A "A" "i"
  −
A "i" "A"
  −
A "i" "i"
  −
A "B" "A"
  −
A "B" "i"
  −
A "u" "A"
  −
A "u" "i"
  −
B "A" "B"
  −
B "A" "u"
  −
B "i" "B"
  −
B "i" "u"
  −
B "B" "B"
  −
B "B" "u"
  −
B "u" "B"
  −
B "u" "u"
     −
Table 76Sign Process of Interpreter B
+
'''Definition.''' A relation <math>L \subseteq X \times Y\!</math> has type <math>S \cdot T,\!</math> written <math>L : S \cdot T,\!</math> if and only if every <math>(x, y) \in L\!</math> has type <math>S \cdot T,\!</math> that is, if and only if <math>L \subseteq S \times T\!</math> for some <math>S \subseteq X\!</math> and <math>T \subseteq Y.\!</math>
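A small Python sketch (illustrative only; the sets below are made up for the occasion) of the two definitions: a pair or a relation has type <math>S \cdot T\!</math> exactly when it fits inside <math>S \times T,\!</math> and the same relation can have many types.

<pre>
# Illustrative sketch: checking the type  L : S.T  of a dyadic relation,
# that is, whether L is a subset of S x T.

def has_type(L2, S, T):
    return all(x in S and y in T for (x, y) in L2)

X, Y = {1, 2, 3, 4}, {'a', 'b', 'c'}
L2 = {(1, 'a'), (2, 'a'), (2, 'b')}

print(has_type(L2, {1, 2}, {'a', 'b'}))   # True
print(has_type(L2, {1, 2}, {'a'}))        # False: (2, 'b') falls outside S x T
print(has_type(L2, X, Y))                 # True: a relation may have many types
</pre>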
Object Sign Interpretant
  −
A "A" "A"
  −
A "A" "u"
  −
A "u" "A"
  −
A "u" "u"
  −
A "B" "A"
  −
A "B" "u"
  −
A "i" "A"
  −
A "i" "u"
  −
B "A" "B"
  −
B "A" "i"
  −
B "u" "B"
  −
B "u" "i"
  −
B "B" "B"
  −
B "B" "i"
  −
B "i" "B"
  −
B "i" "i"
     −
Treated in accord with these interpretations, the sign relations A and B constitute partially degenerate cases of dynamic processes, in which the transitions are totally non deterministic up to semantic equivalence classes but still manage to preserve those classesWhether construed as present observation or projective speculation, the most significant feature to note about a sign process is how the contemplation of an object or objective leads the system from a less determined to a more determined condition.
+
'''Notation.'''  Parentheses in the Courier or Teletype font, <math>\texttt{( ... )},\!</math> are used to indicate the negations of propositions and the complements of setsWhen a <math>k\!</math>-place relation <math>L\!</math> is initially given relative to the domains <math>X_1, \ldots, X_k\!</math> and a set <math>S\!</math> is mentioned as a subset of one of them, say <math>S \subseteq X_j,\!</math> then the ''relevant complement'' of <math>S\!</math> in such a context is the one taken relative to <math>X_j.\!</math>  Thus we have the following equivalents.
   −
On reflection, one observes that these processes are not completely trivial since they preserve the structure of their semantic partitions.  In fact, each sign process preserves the entire topology ("family of sets closed under finite intersections and arbitrary unions") generated by its semantic equivalence classes.  These topologies, Top(A) and Top(B), can be viewed as partially ordered sets, Pos(A) and Pos(B), by taking the inclusion ordering (c) as (<).  For each of the interpreters A and B, as things stand in their respective orderings Pos(A) and Pos(B), the semantic equivalence classes of "A" and "B" are situated as intermediate elements that are incomparable to each other.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\texttt{(} S \texttt{)} ~=~ -\!S ~=~ X_j - S\!</math>
 +
|}
   −
Top(A) = Pos(A)  =  { {}, {"A", "i"}, {"B", "u"}, S }.
+
In case of ambiguities that are not resolved by context, indices may be used as follows.
Top(B) = Pos(B)  =  { {}, {"A", "u"}, {"B", "i"}, S }.
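The generation of these topologies can be spelled out in a short Python sketch (illustrative only): starting from the semantic equivalence classes, close under finite intersections and arbitrary unions, and the four open sets of Top(A) listed above come out.

<pre>
# Illustrative sketch: generate the topology spanned by the semantic
# equivalence classes of interpreter A and recover Top(A).

from itertools import combinations

S = ['"A"', '"i"', '"B"', '"u"']
classes_A = [frozenset(['"A"', '"i"']), frozenset(['"B"', '"u"'])]

def topology(generators, space):
    # Pairwise intersections of the generators (enough for this small case),
    # then all unions, plus the empty set and the whole space.
    basis = set(generators) | {a & b for a, b in combinations(generators, 2)}
    opens = {frozenset(), frozenset(space)}
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            opens.add(frozenset().union(*combo))
    return opens

Top_A = topology(classes_A, S)
print(len(Top_A))                 # 4 open sets: {}, {"A","i"}, {"B","u"}, and S
</pre>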
     −
In anticipation of things to come, these orderings are germinal versions of the kinds of semantic hierarchies that will be used in this project to define the "ontologies", "world views", or "perspectives" corresponding to individual interpreters.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>\texttt{(} S \texttt{)}_j ~=~ X_j - S\!</math>
 +
|}
   −
When it comes to discussing the stability properties of dynamic systems, the sets that remain invariant under iterated applications of a process are called its "attractors" or "basins of attraction".
+
In any case, the intended term can always be written out in full, as <math>X_j - S.\!</math>
   −
More care needed here.  Strongly and weakly connected components of a digraph?
+
<br>
   −
The dynamic realizations of the sign relations A and B augment their semantic equivalence relations in an "attractive" way.  To describe this additional structure, I introduce a set of graph theoretical concepts and notations.
+
<center>'''Fragments'''</center>
   −
The "attractor" of X in Y.
+
Consider a relation <math>L\!</math> of the following type.
   −
Y@X  = "Y at X" = @[X]Y  = [X]Y  U  { Arcs into [X]Y }.
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>L : \texttt{(} S \texttt{(} T \texttt{))}\!</math>
 +
|}
   −
In effect, this discussion of dynamic realizations of sign relations has advanced from considering SEPs as partitioning the set of points in S to considering attractors as partitioning the set of arcs in SxI = SxS.
+
[The following piece occurs in &sect; 6.35.]
</pre>
     −
===6.43. Reflective Extensions===
+
The set of triples of dyadic relations, with pairwise cartesian products chosen in a pre-arranged order from a triple of three sets <math>(X, Y, Z),\!</math> is called the ''dyadic explosion'' of <math>X \times Y \times Z.\!</math>  This object is denoted <math>\mathrm{Explo}(X, Y, Z ~|~ 2),\!</math> read as the ''explosion of <math>X \times Y \times Z\!</math> by twos'', or more simply as <math>X, Y, Z ~\mathrm{choose}~ 2,\!</math> and defined as follows:
   −
<pre>
+
{| align="center" cellspacing="8" width="90%"
This section takes up the topic of reflective extensions in a more systematic fashion, starting from the sign relations A and B once again and keeping its focus within their vicinity, but exploring the space of nearby extensions in greater detail.
+
| <math>\mathrm{Explo}(X, Y, Z ~|~ 2) ~=~ \mathrm{Pow}(X \times Y) \times \mathrm{Pow}(X \times Z) \times \mathrm{Pow}(Y \times Z)\!</math>
 +
|}
   −
Tables 77 and 78 show one way that the sign relations A and B can be extended in a reflective sense through the use of quotational devices, yielding the "first order reflective extensions", Ref1(A) and Ref1(B).
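Before the tables, a Python sketch (illustrative only) of the kind of construction they tabulate: quote each sign of the original relation once, and give every first order sign a reflective triple whose sign and interpretant are its second order quotation.  The plain letter names below stand in for the quoted signs of L(A).

<pre>
# Illustrative sketch of the construction behind Table 77: the first order
# reflective extension Ref1(A) of the sign relation L(A).

def quote(s):
    return '<' + s + '>'

L_A = {(o, s, i)
       for o, signs in {'A': ['A', 'i'], 'B': ['B', 'u']}.items()
       for s in signs for i in signs}

first_order = {quote(x) for (o, s, i) in L_A for x in (s, i)}

Ref1_A  = {(o, quote(s), quote(i)) for (o, s, i) in L_A}
Ref1_A |= {(s1, quote(s1), quote(s1)) for s1 in first_order}

assert len(Ref1_A) == 12     # 8 quoted triples from L(A) + 4 reflective triples
for triple in sorted(Ref1_A):
    print(triple)
</pre>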
+
This domain is defined well enough to serve the immediate purposes of this section, but later it will become necessary to examine its construction more closely.
   −
Table 77. Reflective Extension Ref1(A)
+
[Maybe the following piece belongs there, too.]
Object Sign Interpretant
  −
A <A> <A>
  −
A <A> <i>
  −
A <i> <A>
  −
A <i> <i>
  −
B <B> <B>
  −
B <B> <u>
  −
B <u> <B>
  −
B <u> <u>
  −
<A> <<A>> <<A>>
  −
<B> <<B>> <<B>>
  −
<i> <<i>> <<i>>
  −
<u> <<u>> <<u>>
     −
Table 78.  Reflective Extension Ref1(B)
+
Just to provide a hint of what's at stake, consider the following suggestive identity:
Object Sign Interpretant
  −
A <A> <A>
  −
A <A> <u>
  −
A <u> <A>
  −
A <u> <u>
  −
B <B> <B>
  −
B <B> <i>
  −
B <i> <B>
  −
B <i> <i>
  −
<A> <<A>> <<A>>
  −
<B> <<B>> <<B>>
  −
<i> <<i>> <<i>>
  −
<u> <<u>> <<u>>
     −
The common "world" = {objects} U {signs} of the reflective extensions Ref1 (A) and Ref1 (B) is the set of 10 elements:
+
{| align="center" cellspacing="8" width="90%"
 +
| <math>2^{XY} \times 2^{XZ} \times 2^{YZ} ~=~ 2^{(XY + XZ + YZ)}\!</math>
 +
|}
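As a sanity check on the counting behind this identity (illustrative only; any small finite sets will do), the two sides can be compared directly in Python.

<pre>
# Illustrative sketch: count both sides of
#   |Pow(X x Y)| * |Pow(X x Z)| * |Pow(Y x Z)|  =  2**(|X||Y| + |X||Z| + |Y||Z|)

X, Y, Z = {1, 2}, {'a', 'b', 'c'}, {True, False}

def pow_size(A, B):
    """Number of subsets of A x B, i.e. 2 ** (|A| * |B|)."""
    return 2 ** (len(A) * len(B))

lhs = pow_size(X, Y) * pow_size(X, Z) * pow_size(Y, Z)
rhs = 2 ** (len(X)*len(Y) + len(X)*len(Z) + len(Y)*len(Z))
assert lhs == rhs
print(lhs)                              # 65536 = 2**16 for these choices
</pre>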
   −
W = { A, B, <A>, <B>, <i>, <u>, <<A>>, <<B>>, <<i>>, <<u>>}.
+
What sense would have to be found for the sums on the right in order to interpret this equation as a set theoretic isomorphism?  Answering this question requires the concept of a ''co-product'', roughly speaking, a &ldquo;disjointed union&rdquo; of sets.  By the time this discussion has detailed the forms of indexing necessary to maintain these constructions, it should have become patently obvious that the forms of analysis and synthesis that are called on to achieve the putative reductions to and reconstructions from dyadic relations in actual fact never really leave the realm of genuinely triadic relations, but merely reshuffle its contents in various convenient fashions.
   −
Here, I employ raised angle brackets or "supercilia" (<...>) on a par with ordinary quotation marks ("..."), using them in the context of informal discussion to configure a new sign whose object is precisely the sign they enclose.
+
===6.38. Considering the Source===
   −
Regarded as new sign relations in their own right, the domains of both Ref1 (A) and Ref1 (B) are constituted as follows:
+
There are several ways to contemplate the supplementation of signs, the sorts of augmentation that are crucial to meaning in the case of indices.  Some approaches are analytic, in the sense that they regard signs as derivative compounds and try to break up the unitary concept of an individual sign into a congeries of seemingly more real, more actual, or more determinate sign instances.  Other approaches are synthetic, in the sense that they accept a given collection of signs at face value and try to reconstruct more objective realities through the formation of abstract categories on this basis.
   −
O = O<1> U O<2> = { A, B }  U  {<A>, <B>, <i>, <u>}.
+
====6.38.1. Attributed Signs====
   −
S = S<1> U S<2> = {<A>, <B>, <i>, <u>}  U  {<<A>>, <<B>>, <<i>>, <<u>>}.
+
One type of analytic method takes it as a maxim for the logic of context that &ldquo;Every sign or text is indexed by the context in which it occurs&rdquo;.  This means that all signs, including indices, are themselves indexed, though initially only tacitly, by the objective situation, the syntactic context, and the actual interpreter that makes use of them.
   −
Thus, S overlaps with O in the set of first order signs or second order objects S<1> = O<2>, exemplifying the extent to which signs have become objects in the new sign relations.
+
To begin formalizing this brand of supplementation, it is necessary to mark salient aspects of the situational, contextual, and inclusively interpretive features of sign usage that were previously held tacit.  In effect, signs once regarded as primitive objects need to be newly analyzed as categorical abstractions that cover multitudes of existential sign instances or ''signs in use''.
   −
To discuss how the denotative and connotative aspects of sign relations are affected by their reflective extensions it is helpful to introduce a few abbreviations. For each sign relation R C {A, B}, define:
+
One way to develop these dimensions of the <math>\text{A}\!</math> and <math>\text{B}\!</math> example is to articulate the interpretive parameters of signs by means of subscripts or superscripts attached to the signs or their quotations, in this way forming a corresponding set of ''situated signs'' or ''attributed remarks''.
   −
Den1 (R) = (Ref1 (R))SO = PrOS (Ref1 (R)),
+
The attribution of signs to their interpreters preserves the original object domain but produces an expanded syntactic domain, a corresponding set of ''attributed signs''.  In our <math>\text{A}\!</math> and <math>\text{B}\!</math> example this gives the following domains.
Con1 (R) = (Ref1 (R))SI = PrSI (Ref1 (R)).
     −
The dyadic components of sign relations can be given graph theoretic representations, namely, as "digraphs" (directed graphs), that provide concise pictures of their structural and potential dynamic properties.  By way of terminology, a directed edge <x, y> is called an "arc" from point x to point y, and a self loop <x, x> is called a "sling" at x.
+
{| align="center" cellspacing="6" width="90%"
 +
|
 +
<math>\begin{array}{ccl}
 +
O & = &
 +
\{ \text{A}, \text{B} \}
 +
\\[6pt]
 +
S & = &
 +
\{
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\}
 +
\\[6pt]
 +
I & = &
 +
\{
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\}
 +
\end{array}</math>
 +
|}
   −
The denotative components Den1 (A) and Den1 (B) can be viewed as digraphs on the 10 points of the world set W.  The arcs of these digraphs are given as follows:
+
Table&nbsp;76 displays the results of indexing every sign of the <math>\text{A}\!</math> and <math>\text{B}\!</math> example with a superscript indicating its source or ''exponent'', namely, the interpreter who actively communicates or transmits the sign.  The operation of attribution produces two new sign relations, but it turns out that both sign relations have the same form and content, so a single Table will do.  The new sign relation generated by this operation will be denoted <math>\mathrm{At} (\text{A}, \text{B})\!</math> and called the ''attributed sign relation'' for the <math>\text{A}\!</math> and <math>\text{B}\!</math> example.
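As a concrete gloss on the construction (illustrative only; the reading assumed here is that an attributed sign denotes whatever its plain sign denotes for the interpreter who issues it), the following Python sketch generates the thirty-two rows tabulated below from the denotative components of <math>L(\text{A})\!</math> and <math>L(\text{B}).\!</math>

<pre>
# Illustrative sketch: generate the attributed sign relation At(A, B) of
# Table 76 by tagging each sign with its issuer and letting co-denoting
# attributed signs serve as signs and interpretants of their common object.

den = {
    'A': {('A', '"A"'), ('A', '"i"'), ('B', '"B"'), ('B', '"u"')},
    'B': {('A', '"A"'), ('A', '"u"'), ('B', '"B"'), ('B', '"i"')},
}

attributed = {(o, s + '^' + who) for who, pairs in den.items() for (o, s) in pairs}

classes = {}                            # attributed signs denoting each object
for (o, s) in attributed:
    classes.setdefault(o, set()).add(s)

At_AB = {(o, s, i) for o, c in classes.items() for s in c for i in c}
assert len(At_AB) == 32                 # 16 triples for each of the two objects
print(sorted(classes['A']))             # ['"A"^A', '"A"^B', '"i"^A', '"u"^B']
</pre>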
   −
1. Den1 (A) has an arc from each point of [A]A = {<A>, <i>} to A and from each point of [B]A = {<B>, <u>} to B.
+
<br>
   −
2. Den1 (B) has an arc from each point of [A]B = {<A>, <u>} to A and from each point of [B]B = {<B>, <i>} to B.
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 
+
|+ style="height:30px" | <math>\text{Table 76.} ~~ \text{Attributed Sign Relation for Interpreters A and B}\!</math>
3. In the parts added by reflective extension, Den1 (A) and Den1 (B) both have arcs from <s> to s, for each s C S<1>.
+
|- style="height:40px; background:#f0f0ff"
 
+
| width="33%" | <math>\text{Object}\!</math>
Taken as transition digraphs, Den1 (A) and Den1 (B) summarize the upshots, end results, or effective steps of computation that are involved in the respective evaluations of signs in S by Ref1 (A) and Ref1 (B).
+
| width="33%" | <math>\text{Sign}\!</math>
 
+
| width="33%" | <math>\text{Interpretant}\!</math>
The connotative components Con1 (A) and Con1 (B) can be pictured as digraphs on the eight points of the syntactic domain S.  The arcs are given as follows:
+
|-
 
+
| valign="bottom" |
1. Con1 (A) inherits from A the structure of a SER on S<1>, having a sling on each of the points in S<1> and two way arcs on the pairs {<A>, <i>} and {<B>, <u>}.  The reflective extension Ref1(A) adds a sling on each point of S<2>, creating a SER on S.
+
<math>\begin{matrix}
 
+
\text{A}
2. Con1 (B) inherits from B the structure of a SER on S<1>, having a sling on each of the points in S<1> and two way arcs on the pairs {<A>, <u>} and {<B>, <i>}.  The reflective extension Ref1(B) adds a sling on each point of S<2>, creating a SER on S.
+
\\
 
+
\text{A}
Taken as transition digraphs, Con1 (A) and Con1 (B) highlight the associations between signs in Ref1 (A) and Ref1 (B), respectively.
+
\\
 
+
\text{A}
The SER given by Con1 (A) for interpreter A has the semantic equations:
+
\\
 
+
\text{A}
[<A>]A  = [<i>]A,
+
\end{matrix}</math>
[<B>]A  = [<u>]A,
+
| valign="bottom" |
 
+
<math>\begin{matrix}
and the semantic partition:
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
 
+
\\
{{ <A>, <i> }, { <<A>> }, { <<i>> },
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
{ <B>, <u> }, { <<B>> }, { <<u>> }}.
+
\\
 
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
The SER given by Con1 (B) for interpreter B has the semantic equations:
+
\\
 
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
[<A>]B  = [<u>]B,
+
\end{matrix}</math>
[<B>]B  = [<i>]B,
+
| valign="bottom" |
 
+
<math>\begin{matrix}
and the semantic partition:
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
 
+
\\
{{ <A>, <u> }, { <<A>> }, { <<u>> },
+
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
{ <B>, <i> }, { <<B>> }, { <<i>> }}.
+
\\
 
+
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
Notice that the semantic equivalences of nouns and pronouns for each interpreter do not extend to equivalences of their second order signs, exactly as demanded by the literal character of quotations.  Moreover, the new sign relations for A and B coincide in their reflective parts, since exactly the same triples were added to each set.
+
\\
 
+
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
There are many ways to extend sign relations in an effort to increase their reflective capacities.  The implicit goal of a reflective project is to achieve "reflective closure", S c O, where every sign is an object.
+
\end{matrix}</math>
 
+
|-
Considered as reflective extensions, there is nothing unique about the constructions of Ref1 (A) and Ref1 (B), but their common pattern of development illustrates a typical approach toward reflective closure.  In a sense it epitomizes the project of "free", "naive", or "uncritical" reflection, since continuing this mode of production to its closure would generate an infinite sign relation, passing through infinitely many higher orders of signs, but without examining critically to what purpose the effort is directed or evaluating alternative constraints that might be imposed on the initial generators toward this end.
+
| valign="bottom" |
 
+
<math>\begin{matrix}
At first sight it seems as though the imposition of reflective closure has multiplied a finite sign relation into an infinite profusion of highly distracting and largely redundant signs, all by itself and all in one step.  But this explosion of orders happens only with the complicity of another requirement, that of deterministic interpretation.
+
\text{A}
 
+
\\
There are two types of non determinism that can affect a sign relation, denotative and connotative.
+
\text{A}
 
+
\\
1. A sign relation R has a non deterministic denotation if its dyadic component RSO (the converse of ROS) is not a function RSO: S >O, that is, if there are signs in S with missing or multiple objects in O.
+
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\end{matrix}</math>
 +
|}
   −
2. A sign relation R has a non deterministic connotation if its dyadic component RSI is not a function RSI : S -> I, in other words, if there are signs in S with missing or multiple interpretants in I.  As a rule, sign relations are rife with this variety of non determinism, but it is usually felt to be under control so long as RSI remains close to being an equivalence relation.
+
<br>
 +
 
 +
Thus informed, the semiotic equivalence relation for interpreter <math>\text{A}\!</math> yields the following semiotic equations.
   −
Thus, it is really the denotative type of indeterminacy that is felt to be a problem in this context.
+
{| cellpadding="10"
 +
| width="10%" | &nbsp;
 +
| <math>[{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}]_\text{A}\!</math>
 +
| <math>=\!</math>
 +
| <math>[{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}]_\text{A}\!</math>
 +
| <math>=\!</math>
 +
| <math>[{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}]_\text{A}\!</math>
 +
| <math>=\!</math>
 +
| <math>[{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}]_\text{A}\!</math>
 +
|-
 +
| width="10%" | or
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}\!</math>
 +
| valign="bottom" | <math>=_\text{A}\!</math>
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}\!</math>
 +
| valign="bottom" | <math>=_\text{A}\!</math>
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}\!</math>
 +
| valign="bottom" | <math>=_\text{A}\!</math>
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}\!</math>
 +
|}
 +
 
 +
In comparison, the semiotic equivalence relation for interpreter <math>\text{B}\!</math> yields the following semiotic equations.
 +
 
 +
{| cellpadding="10"
 +
| width="10%" | &nbsp;
 +
| <math>[{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}]_\text{B}\!</math>
 +
| <math>=\!</math>
 +
| <math>[{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}]_\text{B}\!</math>
 +
| <math>=\!</math>
 +
| <math>[{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}]_\text{B}\!</math>
 +
| <math>=\!</math>
 +
| <math>[{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}]_\text{B}\!</math>
 +
|-
 +
| width="10%" | or
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}}\!</math>
 +
| valign="bottom" | <math>=_\text{B}\!</math>
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}}\!</math>
 +
| valign="bottom" | <math>=_\text{B}\!</math>
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}}\!</math>
 +
| valign="bottom" | <math>=_\text{B}\!</math>
 +
| &nbsp;<math>{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}\!</math>
 +
|}
 +
 
 +
Consequently, the semiotic equivalence relations for <math>\text{A}\!</math> and <math>\text{B}\!</math> both induce the same semiotic partition on <math>S,\!</math> namely, the following.
 +
 
 +
{| align="center" cellspacing="6" width="90%"
 +
|
 +
<math>
 +
\{ \{
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{B}}
 +
\}~,~\{
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{A}},
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime\text{B}},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime\text{A}}
 +
\} \}.\!
 +
</math>
 +
|}
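
Viewed computationally, the same comparison can be carried out in a few lines.  The following Python sketch is merely illustrative and is not part of the formal apparatus:  it encodes the denotation of each attributed sign in plain ASCII, groups the signs by the objects they denote, and recovers the two-block partition displayed above.

<pre>
# Illustrative sketch only: an ASCII pair like ('"u"', 'A') stands in for the
# attributed sign consisting of the sign "u" as used by interpreter A.
denotes = {
    ('"A"', 'A'): 'A',  ('"A"', 'B'): 'A',
    ('"B"', 'A'): 'B',  ('"B"', 'B'): 'B',
    ('"i"', 'A'): 'A',  ('"i"', 'B'): 'B',
    ('"u"', 'A'): 'B',  ('"u"', 'B'): 'A',
}

def semiotic_partition(denotes):
    """Group attributed signs by the object each one denotes; every group is
    one semiotic equivalence class."""
    classes = {}
    for sign, obj in denotes.items():
        classes.setdefault(obj, set()).add(sign)
    return {frozenset(block) for block in classes.values()}

# Interpreters A and B read the attributed signs the same way, so one call
# reproduces the two-block partition of S shown above.
for block in semiotic_partition(denotes):
    print(sorted(block))
</pre>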
 +
 
 +
By means of a simple attribution step a certain level of congruity has been reached in the community of interpretation composed of <math>\text{A}\!</math> and <math>\text{B}.\!</math>  This new-found agreement on what is abstractly a single semiotic equivalence relation means that its equivalence classes reconstruct the structure of the object domain within the parts of the corresponding semiotic partition.  This allows a measure of objectivity or inter-subjectivity to be predicated of the sign relation's representation.
 +
 
 +
An instance of <math>\text{Y}\!</math> using <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!</math> is considered to be an objective event, the kind of happening to which all suitably placed observers can point, and adverting to an occurrence of <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!</math> is more specific and less vague than resorting to instances of <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!</math> as if being issued by anonymous sources.  The situated sign <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!</math> is a ''wider sign'' than <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!</math> in the sense that it takes in a broader field of view on the interpretive situation and provides more information about the context of use.  As to the reception of attributed remarks, the interpreter that can recognize signs of the form <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!</math> is one that knows what it means to ''consider the source''.
 +
 
 +
It is best to read the superscripts on attributed signs as accentuations and integral parts of the quotation marks, taking <math>{}^{\backprime\backprime} \ldots {}^{\prime\prime\text{A}}\!</math> and <math>{}^{\backprime\backprime} \ldots {}^{\prime\prime\text{B}}\!</math> as variant inflections of <math>{}^{\backprime\backprime} \ldots {}^{\prime\prime}.\!</math>  Thus, I can refer to the sign <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!</math> just as I would refer to the sign <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!</math> in the present informal context, without any additional marks of quotation.
 +
 
 +
Taking a cue from this usage, the ordinary quotes that I use to mark salient relationships of signs and expressions with respect to the informal context can now be regarded as quotes that I myself, operating as a casual interpreter, tacitly index.  Even without knowing the complete sign relation that I have in mind, the one that I presumably use to conduct this discussion, the sign relation that <math>{}^{\backprime\backprime} \text{I} {}^{\prime\prime}\!</math> represents can nevertheless be partially formalized by means of a certain functional equation, namely, the following equation between semantic functions:
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
| <math>{}^{\backprime\backprime} \ldots {}^{\prime\prime} ~=~ {}^{\backprime\backprime} \ldots {}^{\prime\prime\text{I}}\!</math>
 +
|}
 +
 
 +
By way of vocal expression, the attributed sign <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!</math> can be pronounced in any of the following ways.
   −
The next two pairs of reflective extensions demonstrate that there are ways of achieving reflective closure that do not generate infinite sign relations.
+
{| align="center" cellspacing="8" width="90%"
 
+
|
As a flexible and fairly general strategy for describing reflective extensions it is convenient to take the following tack.  Given a syntactic domain S, there is an independent formal language F = F(S) = S<<>>, to be called "the free quotational extension of S", that can be generated from S by embedding each of its signs to any depth of quotation marks.  In F, the quoting operation can be regarded as a syntactic generator that is inherently free of constraining relations.  In other words, for every sign s in S, the sequence s, <s>, <<s>>, ... contains nothing but pairwise distinct elements in F no matter how far it is produced.  The set F(s) = s<<>>, a subset of F, that collects the elements of this sequence is called "the subset of F generated from s by quotation".
+
<math>\begin{array}{l}
 
+
{}^{\backprime\backprime} \text{X} {}^{\prime\prime} ~\text{quoth}~ \text{Y}
Against this background, other varieties of reflective extension can be specified by means of semantic equations (SEQs) that are considered to be imposed on the elements of F.  Taking the reflective extensions Ref1 (A) and Ref1 (B) as the first orders of a "free" project toward reflective closure, variant extensions can be described by relating their entries with those of comparable members in the standard sequences Refn (A) and Refn (B).
+
\\[4pt]
 
+
{}^{\backprime\backprime} \text{X} {}^{\prime\prime} ~\text{said by}~ \text{Y}
A variant pair of reflective extensions, Ref1(A|E1) and Ref1(B|E1), are presented in Tables 79 and 80, respectively. These are identical to the corresponding "free" variants, Ref1(A) and Ref1(B), with the exception of those entries that are constrained by the system of semantic equations:
+
\\[4pt]
 
+
{}^{\backprime\backprime} \text{X} {}^{\prime\prime} ~\text{used by}~ \text{Y}
E1: <<A>> = <A>, <<B>> = <B>, <<i>> = <i>, <<u>> = <u>.
+
\end{array}</math>
 
+
|}
This has the effect of making all levels of quotation equivalent.
+
 
 
+
To facilitate visual imagery, each token of the type <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime\text{Y}}\!</math> can be pictured as a specific occasion where the sign <math>{}^{\backprime\backprime} \text{X} {}^{\prime\prime}\!</math> is being used or issued by the interpreter <math>\text{Y}.\!</math>
By calling attention to their intended status as "semantic" equations, meaning that signs are being set equal in the SECs they inhabit or the objects they denote, I hope to emphasize that these equations are able to say something significant about objects.
+
 
 
+
The construal of objects as classes of attributed signs leads to a measure of inter-subjective agreement between the interpreters <math>\text{A}\!</math> and <math>\text{B}.\!</math> Something like this must be the goal of any system of communication, and analogous forms of congruity and gregarity are likely to be found in any system for establishing mutually intelligible responses and maintaining socially coordinated practices.
??? Redo F(S) over W ??? Use WF = O U F ???
+
 
 
+
Nevertheless, the particular types of &ldquo;analytic&rdquo; solutions that were proposed for resolving the conflict of interpretations between <math>\text{A}\!</math> and <math>\text{B}\!</math> are conceptually unsatisfactory in several ways.  The constructions instituted retain the quality of hypotheses, especially due to the level of speculation about fundamental objects that is required to support them.  There remains something fictional and imaginary about the nature of the object instances that are posited to form the ontological infrastructure, the supposedly more determinate strata of being that are presumed to anchor the initial objects of discussion.
Table 79.  Reflective Extension Ref1(A|E1)
+
 
Object Sign Interpretant
+
Founding objects on a particular selection of object instances is always initially an arbitrary choice, a meet response to a judgment call and a responsibility that cannot be avoided, but still a bit of guesswork that needs to be tested for its reality in practice.
A <A> <A>
+
 
A <A> <i>
+
This means that the postulated objects of objects cannot have their reality probed and proved in detail but can be evaluated only in terms of their conceivable practical effects.
A <i> <A>
+
 
A <i> <i>
+
====6.38.2. Augmented Signs====
B <B> <B>
+
 
B <B> <u>
+
One synthetic method &hellip;
B <u> <B>
+
 
B <u> <u>
+
Suppose now that each of the agents <math>\text{A}\!</math> and <math>\text{B}\!</math> reflects on the situational context of their discussion and observes on every occasion of utterance exactly who is saying what. By this critically reflective operation of ''considering the source'' each interpreter is empowered to create, in effect, an ''extended token'' or ''situated sign'' out of each utterance by indexing it with the proper name of its utterer.  Though it arises by reflection, the augmented sign is not a higher order of abstraction so much as a restoration or reconstitution of what was lost by abstracting the sign from the signer in the first instance.
<A> <A> <A>
+
 
<B> <B> <B>
+
In order to continue the development of this example, I need to employ a more precise system of marking quotations to keep track of who says what and in what kinds of context.  To help with this, I use raised angle brackets <math>{}^\langle \ldots {}^\rangle\!</math> on a par with ordinary quotation marks <math>{}^{\backprime\backprime} \ldots {}^{\prime\prime}\!</math> to call attention to pieces of text as signs or expressions.  The angle quotes are especially useful for embedded quotations and for text regarded as used or mentioned by interpreters other than myself, for instance, by the fictional characters <math>\text{A}\!</math> and <math>\text{B}.\!</math>  Whenever possible, I save ordinary quotes for the outermost level, the one that interfaces with the context of informal discussion.
<i> <i> <i>
+
 
<u> <u> <u>
+
A notation like <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle, \text{B}, \text{C} {}^{\rangle ~ \prime\prime}\!</math> is intended to indicate the construction of an extended (attributed, indexed, or situated) sign, in this case, by enclosing an initial sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> in a contextual envelope <math>{}^{\backprime\backprime ~ \langle\langle} ~\underline{~}~ {}^\rangle, ~\underline{~}~, ~\underline{~}~ {}^{\rangle ~ \prime\prime}\!</math> and inscribing it with relevant items of situational data, as represented by the signs <math>{}^{\backprime\backprime} \text{B} {}^{\prime\prime}\!</math> and <math>{}^{\backprime\backprime} \text{C} {}^{\prime\prime}.\!</math>
 
+
 
Table 80.  Reflective Extension Ref1(B|E1)
+
# When a salient component of the situational data represents an observation of the agent <math>\text{B}\!</math> communicating the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime},\!</math> then the compressed form <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B}, \text{C} {}^{\rangle ~ \prime\prime}\!</math> can be used to mark that fact.
Object Sign Interpretant
+
# When there is no additional contextual information beyond the marking of a sign's source, the form <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime}\!</math> suffices to say that <math>\text{B}\!</math> said <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}.\!</math>
A <A> <A>
+
 
A <A> <u>
+
With this last modification, angle quotes become like ascribed quotes or attributed remarks, indexed with the name of the interpretive agent that issued the message in question.  In sum, the notation <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime}\!</math> is intended to situate the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> in the context of its contemplated use and to index the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> with the name of the interpreter that is considered to be using it on a given occasion.
A <u> <A>
+
 
A <u> <u>
+
The notation <math>{}^{\backprime\backprime ~ \langle\langle} \text{A} {}^\rangle \text{B} {}^{\rangle ~ \prime\prime},~\!</math> read <math>{}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{quoth}~ \text{B} {}^{\prime\prime}\!</math> or <math>{}^{\backprime\backprime ~ \langle} \text{A} {}^\rangle ~\text{used by}~ \text{B} {}^{\prime\prime},\!</math> is an expression that indicates the use of the sign <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> by the interpreter <math>\text{B}.\!</math> The expression inside the outer quotes is referred to as an ''indexed quotation'', since it is indexed by the name of the interpreter to which it is referred.
B <B> <B>
+
 
B <B> <i>
+
Since angle quotes with a blank index are equivalent to ordinary quotes, we have the following equivalence.  [Not sure about this.]
B <i> <B>
+
 
B <i> <i>
+
{| align="center" cellspacing="6" width="90%"
<A> <A> <A>
+
|
<B> <B> <B>
+
<math>{}^{\backprime\backprime} ~ {}^\langle \text{A} {}^\rangle \text{B} ~ {}^{\prime\prime} ~=~ {}^{\langle\langle} \text{A} {}^\rangle \text{B} {}^\rangle\!</math>
<i> <i> <i>
+
|}
<u> <u> <u>
+
 
 
+
Enclosing a piece of text with raised angle brackets and following it with the name of an interpreter is intended to call to mind &hellip;
Another pair of reflective extensions, Ref1(A|E2) and Ref1(B|E2), are presented in Tables 81 and 82, respectively.  These are identical to the corresponding "free" variants, Ref1(A) and Ref1(B), except for the entries constrained by the following semantic equations:
+
 
 
+
The augmentation of signs by the names of their interpreters preserves the original object domain but produces an extended syntactic domain.  In our <math>\text{A}\!</math> and <math>\text{B}\!</math> example this gives the following domains.
E2: <<A>> = A, <<B>> = B, <<i>> = i, <<u>> = u.
+
 
 
+
{| align="center" cellspacing="8" width="90%"
Table 81.  Reflective Extension Ref1(A|E2)
+
|
Object Sign Interpretant
+
<math>\begin{array}{lll}
A <A> <A>
+
O & = & \{ \text{A}, \text{B} \}
A <A> <i>
+
\end{array}</math>
A <i> <A>
+
|}
A <i> <i>
+
 
B <B> <B>
+
{| align="center" cellspacing="8" width="90%"
B <B> <u>
+
|
B <u> <B>
+
<math>\begin{array}{lllllll}
B <u> <u>
+
S
<A> A A
+
& = &
<B> B B
+
\{ &
<i> A A
+
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime},
<u> B B
+
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime},
 
+
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime},
Table 82.  Reflective Extension Ref1(B|E2)
+
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime},
Object Sign Interpretant
+
&
A <A> <A>
+
\\[4pt]
A <A> <u>
+
& & &
A <u> <A>
+
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime},
A <u> <u>
+
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime},
B <B> <B>
+
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime},
B <B> <i>
+
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
B <i> <B>
+
& \}
B <i> <i>
+
\\[10pt]
<A> A A
+
I
<B> B B
+
& = &
<i> B B
+
\{ &
<u> A A
+
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime},
</pre>
+
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime},
 
+
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime},
===6.44. Reflections on Closure===
+
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime},
 
+
&
<pre>
+
\\[4pt]
The previous section dealt with a formal operation that was dubbed "reflection", and found that it was closely associated with the device of "quotation" that makes it possible to treat signs as objects by making or finding other signs that refer to themClearly, an ability to take signs as objects is one component of a cognitive capacity for reflection.  But a genuine and less superficial species of reflection can do more than grasp just the isolated signs and the separate interpretants of the thinking process as objects — it can pause the fleeting procession of signs upon signs and seize their generic patterns of transition as valid objects of discussion. This involves the conception and composition of not just "higher order" signs but also "higher type" signs, orders of signs that aspire to catch whole sign relations up in one breath.
+
& & &
 
+
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime},
...
+
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime},
</pre>
+
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime},
 
+
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
===6.45. Intelligence => Critical Reflection===
+
& \}
 
+
\end{array}</math>
<pre>
+
|}
It is just at this point that the discussion of sign relations is forced to contemplate the prospects of intelligent interpretation.  For starters, I consider an intelligent interpreter to be one that can pursue alternative interpretations of a sign or text and pick one that makes sense.  If an interpreter can find all of the most sensible interpretations and order them according to a scale of meaningfulness, but without losing the time required to act on their import, then so much the better.
+
 
 
+
The situated sign or indexed expression <math>{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}\!</math> presents the sign or expression <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> as used by the interpreter <math>\text{B}.\!</math> In other words, the sign is indexed by the name of an interpreter to indicate a use of that sign by that interpreterThus, <math>{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}\!</math> augments <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> to form a new and more complete sign by including additional information about the context of its transmission, in particular, by the consideration of its source.
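
A rough sketch of the augmentation step, using my own ASCII rendering of the indexed quotation marks, may help to fix the construction:  every base sign is paired with the name of the interpreter using it, which yields the eight-element syntactic domain listed above while leaving the object domain untouched.

<pre>
# Illustrative sketch only: '[<x>]_Y' is an ASCII stand-in for the indexed
# quotation of the sign x as used by the interpreter Y.
base_signs   = ['A', 'B', 'i', 'u']
interpreters = ['A', 'B']
objects      = ['A', 'B']

def attribute(sign, user):
    """Form the situated sign: the base sign indexed by the name of its user."""
    return f'[<{sign}>]_{user}'

S_augmented = [attribute(s, y) for y in interpreters for s in base_signs]
I_augmented = list(S_augmented)   # the interpretant domain coincides with S

print(objects)        # ['A', 'B'] -- the object domain is unchanged
print(S_augmented)    # eight situated signs, four per interpreter
</pre>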
Intelligent interpreters are a centrally important species of intelligent agents in general, since hardly any intelligent action at all can be taken without the ability to interpret signs and texts, even if read only in the sense of "the text of nature".  In other words, making sense of dubious signs is a central component of all sensible action.
+
 
 
+
<br>
Thus, I regard the determining trait of intelligent agency to be its response to non deterministic situations.  Agents that find themselves at junctures of unavoidable uncertainty are required by objective features of the situation to gather together the available options and select among the multitude of possibilities a few choices that further their active purposes.
+
 
 
+
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
Reflection enables an interpreter to stand back from signs and view them as objects, that is, as objective possibilities for choice to be followed up in an experimental and critical fashion rather than pursued as automatic reactions whose habitual connections cannot be questioned.
+
|+ style="height:30px" | <math>\text{Table 77.} ~~ \text{Augmented Sign Relation for Interpreters A and B}\!</math>
 
+
|- style="height:40px; background:#f0f0ff"
The mark of an intelligent interpreter that is relevant in this context is the ability to countenance a non deterministic juncture of choices in a sign relation and to respond to it with actions appropriate to the uncertain nature of the situation.
+
| width="33%" | <math>\text{Object}\!</math>
 
+
| width="33%" | <math>\text{Sign}\!</math>
An intelligent interpreter is one that can follow up several different interpretations at once, experimenting with the denotations and connotations that are available in a non deterministic sign relation, ...
+
| width="33%" | <math>\text{Interpretant}\!</math>
 
+
|-
An intelligent interpreter is one that can face a situation of non deterministic choice and choose an interpretation (denotation or connotation) that is apposite to the objective and syntactic context.
+
| valign="bottom" |
 
+
<math>\begin{matrix}
An intelligent interpreter is one that can deal with non-deterministic situations, that is, one that can follow up several lines of possible meaning for signs and read between the lines to pick out meanings that are sensitive to both the objective situation and the syntactic context of interpretation.
+
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{A} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{B} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{i} {}^\rangle ]_\text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} [ {}^\langle \text{u} {}^\rangle ]_\text{A} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
===6.39. Prospective Indices : Pointers to Future Work===
 +
 
 +
In the effort to unify dynamical, connectionist, and symbolic approaches to intelligent systems, indices supply important stepping stones between the sorts of signs that remain bound to circumscribed theaters of action and the kinds of signs that can function globally as generic symbols.  Current technology presents an array of largely accidental discoveries that have been brought into being for implementing indexical systems.  Bringing systematic study to bear on this variety of accessory devices and trying to discern within the wealth of incidental features their essential principles and effective ingredients could help to improve the traction this form of bridge affords.
 +
 
 +
In the points where this project addresses work on the indexical front, a primary task is to show how the ''actual connections'' promised by the definition of indexical signs can be translated into system-theoretic terms and implemented by means of the class of ''dynamic connections'' that can persist in realistic systems.
 +
 
 +
An offshoot of this investigation would be to explore how indices like pointer variables could be realized within &ldquo;connectionist&rdquo; systems.  There is no reason in principle why this cannot be done, but I think that pragmatic reasons and practical success will force the contemplation of higher orders of connectivity than those currently fashioned in two-dimensional arrays of connections.  To be specific, further advances will require the generative power of genuinely triadic relations to be exploited to the fullest possible degree.
 +
 
 +
To avert one potential misunderstanding of what this entails, computing with triadic relations is not really a live option unless the algebraic tools and logical calculi needed to do so are developed to greater levels of facility than they are at present.  Merely officiating over the storage of &ldquo;dead letters&rdquo; in higher dimensional arrays will not do the trick.  Turning static sign relations into the orders of dynamic sign processes that can support live inquiries will demand new means of representation and new methods of computation.
 +
 
 +
To fulfill their intended roles, a formal calculus for sign relations and the associated implementation must be able to address and restore the full dimensionalities of the existential and social matrices in which inquiry takes place.  Informational constraints that define objective situations of interest need to be freed from the locally linear confines of the &ldquo;dia-matrix&rdquo; and reposted within the realm of the &ldquo;tri-matrix&rdquo;, that is, reconstituted in a manner that allows critical reflection on their form and content.
 +
 
 +
The descriptive and conceptual architectures needed to frame this task must allow space for interlacing forms of &ldquo;open work&rdquo;, projects that anticipate the desirability of higher order relations and build in the capability for higher order reflections at the very beginning, and do not merely hope against hope to arrange these capacities as afterthoughts.
 +
 
 +
===6.40. Dynamic and Evaluative Frameworks===
 +
 
 +
The sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> are lacking in several dimensions of realistic properties that would ordinarily be more fully developed in the kinds of sign relations that are found to be involved in inquiry.  This section initiates a discussion of two such dimensions, the ''dynamic'' and the ''evaluative'' aspects of sign relations, and it treats the materials that are organized along these lines at two broad levels, either ''within'' or ''between'' particular examples of sign relations.
 +
 
 +
# The ''dynamic dimension'' deals with change.  Thus, it details the forms of diversity that sign relations distribute in a temporal process.  It is concerned with the transitions that take place from element to element within a sign relation and also with the changes that take place from one whole sign relation to another, thereby generating various types and levels of ''sign&nbsp;process''.
 +
# The ''evaluative dimension'' deals with goals.  Thus, it details the forms of diversity that sign relations contribute to a definite purpose.  It is concerned with the comparisons that can be made on a scale of values between the elements within a sign relation and also between whole sign relations themselves, with a view toward deciding which is better for a ''designated&nbsp;purpose''.
 +
 
 +
At the primary level of analysis, one is concerned with the application of these two dimensions ''within'' particular sign relations.  At every subsequent level of analysis, one deals with the dynamic transitions and evaluative comparisons that can be contemplated ''between'' particular sign relations.  In order to cover all these dimensions, types, and levels of diversity in a unified way, there is a need for a substantive term that allows one to indicate any of the above objects of discussion and thought &mdash; including elements of sign relations, particular sign relations, and states of systems &mdash; and to regard it as an &ldquo;object, sign, or state in a certain stage of construction&rdquo;.  I will use the word ''station'' for this purpose.
 +
 
 +
In order to organize the discussion of these two dimensions, both within and between particular sign relations, and to coordinate their ordinary relation to each other in practical situations, it pays to develop a combined form of ''dynamic evaluative framework'' (DEF), similar in design and utility to the objective frameworks set up earlier.
 +
 
 +
A ''dynamic evaluative framework'' (DEF) encompasses two dimensions of comparison between stations:
 +
 
 +
<ol style="list-style-type:decimal">
 +
 
 +
<li>
 +
<p>A dynamic dimension, as swept out by a process of changing stations, permits comparison between stations in terms of before and after on a scale of temporal order.</p>
 +
 
 +
<p>A terminal station on a dynamic dimension is called a ''stable station''.</p></li>
 +
 
 +
<li>
 +
<p>An evaluative dimension permits comparison between stations on a scale of values.</p>
 +
 
 +
<p>A terminal station on an evaluative dimension is called a ''canonical station'' or a ''standard station''.</p></li></ol>
 +
 
 +
A station that is both stable and standard is called a ''normal station''.
 +
 
 +
Consider the following analogies or correspondences that exist between different orders of sign relational structure:
 +
 
 +
# Just as a sign represents its object and becomes associated with more or less equivalent signs in the minds of interpretive agents, the corpus of signs that embodies a SOI represents in a collective way its own proper object, intended objective, or ''try at objectivity'' (TAO).
 +
# Just as the relationship of a sign to its semantic objects and interpretive associates can be formalized within a single sign relation, the relation of a dynamically changing SOI to its reference environment, developmental goals, and desired characteristics of interpretive performance can be formalized by means of a higher order sign relation, one that further establishes a grounds of comparison for relating the growing SOI, not only to its former and future selves, but to a diverse company of other SOIs.
 +
 
 +
From an outside perspective the distinction between a sign and its object is usually regarded as obvious, though agents operating in the thick of a SOI often act as though they cannot see the difference.  Nevertheless, as a rule in practice, a sign is not a good thing to be confused with its object.  Even in the rare and usually controversial cases where an identity of substance is contemplated, usually only for the sake of argument, there is still a distinction of roles to be maintained between the sign and its object.  Just so, &hellip;
 +
 
 +
Although there are aspects of inquiry processes that operate within the single sign relation, the characteristic features of inquiry do not come into full bloom until one considers the whole diversity of dynamically developing sign relations.  Because it will be some time before this discussion acquires the formal power it needs to deal with higher order sign relations, these issues will need to be treated on an informal basis as they arise, and often in cursory and ''ad&nbsp;hoc'' manner.
 +
 
 +
===6.41. Elective and Motive Forces===
 +
 
 +
The <math>\text{A}\!</math> and <math>\text{B}\!</math> example, in the fragmentary aspects of its sign relations presented so far, is unrealistic in its simplification of semantic issues, lacking a full development of many kinds of attributes that almost always become significant in situations of practical interest.  Just to mention two related features of importance to inquiry that are missing from this example, there is no sense of directional process and no dimension of differential value defined either within or between the semantic equivalence classes.
 +
 
 +
When there is a clear sense of dynamic tendency or purposeful direction driving the passage from signs to interpretants in the connotative project of a sign relation, then the study moves from sign relations, statically viewed, to genuine sign processes.  In the pragmatic theory of signs, such processes are usually dignified with the name ''semiosis'' and their systematic investigation is called ''semiotics''.
 +
 
 +
Further, when this dynamism or purpose is consistent and confluent with a differential value system defined on the syntactic domain, then the sign process in question becomes a candidate for the kind of clarity-gaining, canon-seeking process, capable of supporting learning and reasoning, that I classify as an ''inquiry driven system''.
 +
 
 +
There is a mathematical turn of thought that I will often take in discussing these kinds of issues.  Instead of saying that a system has no attribute of a particular type, I will say that it has the attribute, but in a degenerate or trivial sense.  This is merely a strategy of classification that allows one to include null cases in a taxonomy and to make use of continuity arguments in passing from case to case in a class of examples.  Viewed in this way, each of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> can be taken to exhibit a trivial dynamic process and a trivial standard of value defined on the syntactic domain.
 +
 
 +
===6.42. Sign Processes : A Start===
 +
 
 +
To articulate the dynamic aspects of a sign relation, one can interpret it as determining a discrete or finite state transition system.  In the usual ways of doing this, the states of the system are given by the elements of the syntactic domain, while the elements of the object domain correspond to input data or control parameters that affect transitions from signs to interpretant signs in the syntactic state space.
 +
 
 +
Working from these principles alone, there are numerous ways that a plausible dynamics can be invented for a given sign relation.  I will concentrate on two principal forms of dynamic realization, or two ways of interpreting and augmenting sign relations as sign processes.
 +
 
 +
One form of realization lets each element of the object domain <math>O\!</math> correspond to the observed presence of an object in the environment of the systematic agent.  In this interpretation, the object <math>x\!</math> acts as an input datum that causes the system <math>Y\!</math> to shift from whatever sign state it happens to occupy at a given moment to a random sign state in <math>[x]_Y.\!</math>  Expressed in a cognitive vein, <math>{}^{\backprime\backprime} Y ~\mathrm{notes}~ x {}^{\prime\prime}.</math>
 +
 
 +
Another form of realization lets each element of the object domain <math>O\!</math> correspond to the autonomous intention of the systematic agent to denote an object, achieve an objective, or broadly speaking to accomplish any other purpose with respect to an object in its domain.  In this interpretation, the object <math>x\!</math> is a control parameter that brings the system <math>Y\!</math> into line with realizing a target set <math>[x]_Y.\!</math>
 +
 
 +
Tables&nbsp;78 and 79 show how the sign relations for <math>\text{A}\!</math> and <math>\text{B}\!</math> can be filled out as finite state processes in conformity with the interpretive principles just described.  Rather than letting the actions go undefined for some combinations of inputs in <math>O\!</math> and states in <math>S,\!</math> transitions have been added that take the interpreters from whatever else they might have been thinking about to the semantic equivalence classes of their objects.  In either modality of realization, cognitive-oriented or control-oriented, the abstract structure of the resulting sign process is exactly the same.
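
For readers who prefer to see the transition system spelled out, the following Python sketch realizes the cognitive-oriented modality in miniature.  The encoding and the function names are my own inventions, not part of the tables:  noting an object moves the interpreter from whatever sign state it occupies to a randomly chosen sign in that object's semantic equivalence class.

<pre>
import random

# Semantic equivalence classes of interpreter A over S, in ASCII quotes.
classes_A = {
    'A': ['"A"', '"i"'],   # signs that A connects with the object A
    'B': ['"B"', '"u"'],   # signs that A connects with the object B
}

def step(state, observed_object, classes, rng=random):
    """One transition of the sign process: noting an object sends the
    interpreter to a random sign state in the object's semantic equivalence
    class; the current state does not constrain the choice."""
    return rng.choice(classes[observed_object])

state = '"u"'                        # an arbitrary initial sign state
for obj in ['A', 'A', 'B', 'A']:     # a stream of observed objects
    state = step(state, obj, classes_A)
    print(f'noting {obj} -> state {state}')
</pre>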
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 78.} ~~ \text{Sign Process of Interpreter A}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 79.} ~~ \text{Sign Process of Interpreter B}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime}
 +
\\
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
Treated in accord with these interpretations, the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> constitute partially degenerate cases of dynamic processes, in which the transitions are totally non-deterministic up to semantic equivalence classes but still manage to preserve those classes.  Whether construed as present observation or projective speculation, the most significant feature to note about a sign process is how the contemplation of an object or objective leads the system from a less determined to a more determined condition.
 +
 
 +
On reflection, one observes that these processes are not completely trivial since they preserve the structure of their semantic partitions.  In fact, each sign process preserves the entire topology &mdash; the family of sets closed under finite intersections and arbitrary unions &mdash; that is generated by its semantic equivalence classes.  These topologies, <math>\mathrm{Top}(\text{A})\!</math> and <math>\mathrm{Top}(\text{B}),\!</math> can be viewed as partially ordered sets, <math>\mathrm{Poset}(\text{A})\!</math> and <math>\mathrm{Poset}(\text{B}),\!</math> by taking the inclusion ordering <math>(\subseteq)\!</math> as <math>(\le).\!</math>  For each of the interpreters <math>\text{A}\!</math> and <math>\text{B},\!</math> as things stand in their respective orderings <math>\mathrm{Poset}(\text{A})\!</math> and <math>\mathrm{Poset}(\text{B}),\!</math> the semantic equivalence classes of <math>{}^{\backprime\backprime} \text{A} {}^{\prime\prime}\!</math> and <math>{}^{\backprime\backprime} \text{B} {}^{\prime\prime}\!</math> are situated as intermediate elements that are incomparable to each other.
 +
 
 +
{| align="center" cellspacing="6" width="90%"
 +
|
 +
<math>\begin{array}{lllll}
 +
\mathrm{Top}(\text{A})
 +
& = &
 +
\mathrm{Poset}(\text{A})
 +
& = &
 +
\{
 +
\varnothing,
 +
\{
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\},
 +
\{
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\},
 +
S
 +
\}.
 +
\\[6pt]
 +
\mathrm{Top}(\text{B})
 +
& = &
 +
\mathrm{Poset}(\text{B})
 +
& = &
 +
\{ \varnothing,
 +
\{
 +
{}^{\backprime\backprime} \text{A} {}^{\prime\prime},
 +
{}^{\backprime\backprime} \text{u} {}^{\prime\prime}
 +
\},
 +
\{
 +
{}^{\backprime\backprime} \text{B} {}^{\prime\prime},
 +
{}^{\backprime\backprime} \text{i} {}^{\prime\prime}
 +
\},
 +
S
 +
\}.
 +
\end{array}</math>
 +
|}
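
As a finite check on the claim that each sign process preserves the topology generated by its semantic equivalence classes, the following sketch closes a family of generating sets under unions and intersections and prints the result for interpreter <math>\text{A}.\!</math>  It is only an illustration; on a finite space closure under finite unions and intersections already yields the generated topology.

<pre>
from itertools import combinations

def generated_topology(space, generators):
    """Close a family of subsets under pairwise unions and intersections.
    On a finite space this produces the topology generated by the sets."""
    opens = {frozenset(), frozenset(space)} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(opens), 2):
            for new_set in (a | b, a & b):
                if new_set not in opens:
                    opens.add(new_set)
                    changed = True
    return opens

S = ['"A"', '"B"', '"i"', '"u"']
classes_A = [['"A"', '"i"'], ['"B"', '"u"']]

for open_set in sorted(generated_topology(S, classes_A), key=len):
    print(sorted(open_set))
# prints the four open sets of Top(A) listed above
</pre>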
 +
 
 +
In anticipation of things to come, these orderings are germinal versions of the kinds of semantic hierarchies that will be used in this project to define the ''ontologies'', ''perspectives'', or ''world views'' corresponding to individual interpreters.
 +
 
 +
When it comes to discussing the stability properties of dynamic systems, the sets that remain invariant under iterated applications of a process are called its ''attractors'' or ''basins of attraction''.
 +
 
 +
'''Note.'''  More care needed here.  Strongly and weakly connected components of digraphs?
 +
 
 +
The dynamic realizations of the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> augment their semantic equivalence relations in an &ldquo;attractive&rdquo; way.  To describe this additional structure, I introduce a set of graph-theoretical concepts and notations.
 +
 
 +
The ''attractor'' of <math>x\!</math> in <math>Y.\!</math>
 +
 
 +
{| align="center" cellspacing="6" width="90%"
 +
|
 +
<math>Y ~\text{at}~ x ~=~ \mathrm{At}[x]_Y ~=~ [x]_Y \cup \{ \text{arcs into}~ [x]_Y \}.</math>
 +
|}
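
Rendered in the same illustrative style, the attractor of <math>x\!</math> in <math>Y\!</math> collects the semantic equivalence class of <math>x\!</math> together with every arc that leads into it.  The names below are mine, and the arc set is the maximal one permitted by the nondeterministic realization described earlier.

<pre>
# Illustrative sketch only.  States are the signs of interpreter A; when the
# object x is being noted, the process may move from any state to any state
# in [x]_A, so those are exactly the arcs into the class.
S = ['"A"', '"B"', '"i"', '"u"']
classes_A = {'A': {'"A"', '"i"'}, 'B': {'"B"', '"u"'}}

def attractor(x, classes, states):
    """At[x] = the class [x] together with all arcs terminating inside [x]."""
    block = classes[x]
    arcs_into = {(s, t) for s in states for t in block}
    return block, arcs_into

block, arcs = attractor('A', classes_A, S)
print(sorted(block))   # the points of ["A"]_A
print(sorted(arcs))    # every arc of S x S that ends inside ["A"]_A
</pre>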
 +
 
 +
In effect, this discussion of dynamic realizations of sign relations has advanced from considering semiotic partitions as partitioning the set of points in <math>S\!</math> to considering attractors as partitioning the set of arcs in <math>S \times I = S \times S.\!</math>
 +
 
 +
===6.43. Reflective Extensions===
 +
 
 +
This section takes up the topic of reflective extensions in a more systematic fashion, starting from the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> once again and keeping its focus within their vicinity, but exploring the space of nearby extensions in greater detail.
 +
 
 +
Tables&nbsp;80 and 81 show one way that the sign relations <math>L(\text{A})\!</math> and <math>L(\text{B})\!</math> can be extended in a reflective sense through the use of quotational devices, yielding the ''first order reflective extensions'', <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B}).\!</math>
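
The quotational device itself is easy to mimic.  The following Python sketch, offered only as an illustration of the general idea rather than as the official construction, forms a first order reflective extension by adding, for each sign occurring in a given sign relation, a quoted name that denotes that sign; applied to <math>L(\text{A})\!</math> it reproduces the rows of Table&nbsp;80.

<pre>
def reflect_once(triples):
    """Given a sign relation as (object, sign, interpretant) triples, add one
    layer of reflection: each sign s acquires a quoted name <s> that denotes s
    and has itself as interpretant."""
    signs = {s for (_, s, _) in triples} | {i for (_, _, i) in triples}
    quoted = {(s, f'<{s}>', f'<{s}>') for s in signs}
    return set(triples) | quoted

# The base sign relation L(A), with <x> as an ASCII stand-in for a quoted sign.
L_A = {
    ('A', '<A>', '<A>'), ('A', '<A>', '<i>'),
    ('A', '<i>', '<A>'), ('A', '<i>', '<i>'),
    ('B', '<B>', '<B>'), ('B', '<B>', '<u>'),
    ('B', '<u>', '<B>'), ('B', '<u>', '<u>'),
}

for triple in sorted(reflect_once(L_A)):
    print(triple)      # the twelve rows of the first order reflective extension
</pre>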
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" |
 +
<math>{\text{Table 80.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A})}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle\langle} \text{A} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{B} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{i} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{u} {}^{\rangle\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle\langle} \text{A} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{B} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{i} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{u} {}^{\rangle\rangle}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" |
 +
<math>{\text{Table 81.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B})}\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle\langle} \text{A} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{B} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{i} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{u} {}^{\rangle\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle\langle} \text{A} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{B} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{i} {}^{\rangle\rangle}
 +
\\
 +
{}^{\langle\langle} \text{u} {}^{\rangle\rangle}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
The common ''world'' <math>W\!</math> of the reflective extensions <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> is the totality of objects and signs they contain, namely, the following set of 10 elements.
 +
 
 +
{| align="center" cellspacing="8" width="90%"
| <math>W = \{ \text{A}, \text{B}, {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle}, {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}.</math>
|}
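As a concrete gloss on Tables 80 and 81, the following Python sketch builds the first order reflective extension of interpreter <math>\text{A}\!</math> by adding, for each first order sign, a triple whose sign and interpretant are its quotation.  The rendering of supercilia as angle brackets and the helper <code>quote</code> are illustrative assumptions of the sketch.

<pre>
# A minimal sketch: the first order reflective extension Ref^1(A), built
# by adding quotational triples to the original sign relation L(A).
# Signs are rendered as strings, supercilia as angle brackets.

def quote(s):
    return '<' + s + '>'

L_A = {
    ('A', '<A>', '<A>'), ('A', '<A>', '<i>'),
    ('A', '<i>', '<A>'), ('A', '<i>', '<i>'),
    ('B', '<B>', '<B>'), ('B', '<B>', '<u>'),
    ('B', '<u>', '<B>'), ('B', '<u>', '<u>'),
}

first_order_signs = {s for (o, s, i) in L_A}
Ref1_A = L_A | {(s, quote(s), quote(s)) for s in first_order_signs}

world = {o for (o, s, i) in Ref1_A} | {s for (o, s, i) in Ref1_A}
# len(Ref1_A) == 12 triples, matching the twelve rows of Table 80, and
# len(world) == 10, matching the world set W displayed above.
</pre>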
 +
 
 +
Raised angle brackets or ''supercilia'' <math>({}^{\langle} \ldots {}^{\rangle})\!</math> are here being used on a par with ordinary quotation marks <math>({}^{\backprime\backprime} \ldots {}^{\prime\prime})\!</math> to construct a new sign whose object is precisely the sign they enclose.
 +
 
 +
Regarded as sign relations in their own right, <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> are formed on the following relational domains.
 +
 
 +
{| align="center" cellspacing="6" width="90%"
|
<math>\begin{array}{ccccl}
O & = & O^{(1)} \cup O^{(2)} & = &
\{ \text{A}, \text{B} \} ~ \cup ~ \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}
\\[8pt]
S & = & S^{(1)} \cup S^{(2)} & = &
\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \} ~ \cup ~ \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}
\\[8pt]
I & = & I^{(1)} \cup I^{(2)} & = &
\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \} ~ \cup ~ \{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle}, {}^{\langle\langle} \text{B} {}^{\rangle\rangle}, {}^{\langle\langle} \text{i} {}^{\rangle\rangle}, {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}
\end{array}</math>
|}
 +
 
 +
It may be observed that <math>S\!</math> overlaps with <math>O\!</math> in the set of first-order signs or second-order objects, <math>S^{(1)} = O^{(2)},\!</math> exemplifying the extent to which signs have become objects in the new sign relations.
 +
 
 +
To discuss how the denotative and connotative aspects of a sign relation are affected by its reflective extension, it is useful to introduce a few abbreviations.  For each sign relation <math>L\!</math> in <math>\{ L_\text{A}, L_\text{B} \},\!</math> the following operations may be defined.
 +
 
 +
{| align="center" cellspacing="6" width="90%"
|
<math>\begin{array}{lllll}
\mathrm{Den}^1 (L) & = & (\mathrm{Ref}^1 (L))_{SO} & = & \mathrm{proj}_{SO} (\mathrm{Ref}^1 (L))
\\[6pt]
\mathrm{Con}^1 (L) & = & (\mathrm{Ref}^1 (L))_{SI} & = & \mathrm{proj}_{SI} (\mathrm{Ref}^1 (L))
\end{array}\!</math>
|}
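The following Python sketch renders these two projections for the reflective extension of <math>\text{A}.\!</math>  The compact comprehension used to rebuild the relation and the function names <code>proj_SO</code> and <code>proj_SI</code> are assumptions of the sketch, not notation taken from the text.

<pre>
# A minimal sketch of the dyadic projections defined above, applied to
# the reflective extension Ref^1(A) (rebuilt compactly here).

L_A = {('A', s, i) for s in ('<A>', '<i>') for i in ('<A>', '<i>')} \
    | {('B', s, i) for s in ('<B>', '<u>') for i in ('<B>', '<u>')}
Ref1_A = L_A | {(s, '<' + s + '>', '<' + s + '>') for (o, s, i) in L_A}

def proj_SO(L):
    return {(s, o) for (o, s, i) in L}      # denotative component Den^1

def proj_SI(L):
    return {(s, i) for (o, s, i) in L}      # connotative component Con^1

Den1_A = proj_SO(Ref1_A)                    # 8 arcs from signs to objects
Con1_A = proj_SI(Ref1_A)                    # 12 arcs among signs
</pre>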
 +
 
 +
The dyadic components of sign relations can be given graph-theoretic representations, namely, as ''digraphs'' (directed graphs), that provide concise pictures of their structural and potential dynamic properties.  By way of terminology, a directed edge <math>(x, y)\!</math> is called an ''arc'' from point <math>x\!</math> to point <math>y,\!</math> and a self-loop <math>(x, x)\!</math> is called a ''sling'' at <math>x.\!</math>
 +
 
 +
The denotative components <math>\mathrm{Den}^1 (L_\text{A})\!</math> and <math>\mathrm{Den}^1 (L_\text{B})\!</math> can be viewed as digraphs on the 10 points of the world set <math>W.\!</math>  The arcs of these digraphs are given as follows.
 +
 
 +
<ol>
 +
<li><math>\mathrm{Den}^1 (L_\text{A})\!</math> has an arc from each point of <math>[\text{A}]_\text{A} = \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{i}{}^{\rangle} \}\!</math> to <math>\text{A}\!</math> and from each point of <math>[\text{B}]_\text{A} = \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \}\!</math> to <math>\text{B}.\!</math></li>
 +
 
 +
<li><math>\mathrm{Den}^1 (L_\text{B})\!</math> has an arc from each point of <math>[\text{A}]_\text{B} = \{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{u}{}^{\rangle} \}\!</math> to <math>\text{A}\!</math> and from each point of <math>[\text{B}]_\text{B} = \{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \}\!</math> to <math>\text{B}.\!</math></li>
 +
 
 +
<li>In the parts added by reflective extension, <math>\mathrm{Den}^1 (L_\text{A})\!</math> and <math>\mathrm{Den}^1 (L_\text{B})\!</math> both have arcs from <math>{}^{\langle} s {}^{\rangle}\!</math> to <math>s,\!</math> for each <math>s \in S^{(1)}.\!</math></li>
 +
</ol>
 +
 
 +
Taken as transition digraphs, <math>\mathrm{Den}^1 (L_\text{A})\!</math> and <math>\mathrm{Den}^1 (L_\text{B})\!</math> summarize the upshots, end results, or effective steps of computation that are involved in the respective evaluations of signs in <math>S\!</math> by <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B}).\!</math>
 +
 
 +
The connotative components <math>\mathrm{Con}^1 (L_\text{A})~\!</math> and <math>\mathrm{Con}^1 (L_\text{B})~\!</math> can be viewed as digraphs on the eight points of the syntactic domain <math>S.\!</math>  The arcs of these digraphs are given as follows.
 +
 
 +
<ol>
 +
<li><math>\mathrm{Con}^1 (L_\text{A})\!</math> inherits from <math>L_\text{A}\!</math> the structure of a semiotic equivalence relation on <math>S^{(1)},\!</math> having a sling on each point of <math>S^{(1)},\!</math> arcs in both directions between <math>{}^{\langle} \text{A} {}^{\rangle}\!</math> and <math>{}^{\langle} \text{i}{}^{\rangle},\!</math> and arcs in both directions between <math>{}^{\langle} \text{B} {}^{\rangle}~\!</math> and <math>{}^{\langle} \text{u}{}^{\rangle}.~\!</math>  The reflective extension <math>\mathrm{Ref}^1 (L_\text{A})\!</math> adds a sling on each point of <math>S^{(2)},\!</math> creating a semiotic equivalence relation on <math>S.\!</math></li>
 +
 
 +
<li><math>\mathrm{Con}^1 (L_\text{B})~\!</math> inherits from <math>L_\text{B}\!</math> the structure of a semiotic equivalence relation on <math>S^{(1)},\!</math> having a sling on each point of <math>S^{(1)},\!</math> arcs in both directions between <math>{}^{\langle} \text{A} {}^{\rangle}\!</math> and <math>{}^{\langle} \text{u}{}^{\rangle},\!</math> and arcs in both directions between <math>{}^{\langle} \text{B} {}^{\rangle}~\!</math> and <math>{}^{\langle} \text{i}{}^{\rangle}.~\!</math>  The reflective extension <math>\mathrm{Ref}^1 (L_\text{B})\!</math> adds a sling on each point of <math>S^{(2)},\!</math> creating a semiotic equivalence relation on <math>S.\!</math></li>
 +
</ol>
 +
 
 +
Taken as transition digraphs, <math>\mathrm{Con}^1 (L_\text{A})~\!</math> and <math>\mathrm{Con}^1 (L_\text{B})~\!</math> highlight the associations between signs in <math>\mathrm{Ref}^1 (L_\text{A})\!</math> and <math>\mathrm{Ref}^1 (L_\text{B}),\!</math> respectively.
 +
 
 +
The semiotic equivalence relation given by <math>\mathrm{Con}^1 (L_\text{A})\!</math> for interpreter <math>\text{A}\!</math> has the following semiotic equations.
 +
 
 +
{| cellpadding="10"
 +
| width="10%" | &nbsp;
 +
| <math>[ {}^{\langle} \text{A} {}^{\rangle} ]_\text{A}\!</math>
 +
| <math>=\!</math>
 +
| <math>[ {}^{\langle} \text{i} {}^{\rangle} ]_\text{A}\!</math>
 +
| width="20%" | &nbsp;
 +
| <math>[ {}^{\langle} \text{B} {}^{\rangle} ]_\text{A}\!</math>
 +
| <math>=\!</math>
 +
| <math>[ {}^{\langle} \text{u} {}^{\rangle} ]_\text{A}\!</math>
 +
|-
 +
| width="10%" | or
 +
| &nbsp;<math>{}^{\langle} \text{A} {}^{\rangle}~\!</math>
 +
| <math>=_\text{A}\!</math>
 +
| &nbsp;<math>{}^{\langle} \text{i} {}^{\rangle}~\!</math>
 +
| width="20%" | &nbsp;
 +
| &nbsp;<math>{}^{\langle} \text{B} {}^{\rangle}~\!</math>
 +
| <math>=_\text{A}\!</math>
 +
| &nbsp;<math>{}^{\langle} \text{u} {}^{\rangle}~\!</math>
 +
|}
 +
 
 +
These equations induce the following semiotic partition.
 +
 
 +
{| align="center" cellspacing="6" width="90%"
 +
|
 +
<math>
 +
\{
 +
\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \},
 +
\{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \},
 +
\{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \},
 +
\{ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \},
 +
\{ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \},
 +
\{ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}
 +
\}.\!
 +
</math>
 +
|}
 +
 
 +
The semiotic equivalence relation given by <math>\mathrm{Con}^1 (L_\text{B})~\!</math> for interpreter <math>\text{B}\!</math> has the following semiotic equations.
 +
 
 +
{| cellpadding="10"
 +
| width="10%" | &nbsp;
 +
| <math>[ {}^{\langle} \text{A} {}^{\rangle} ]_\text{B}\!</math>
 +
| <math>=\!</math>
 +
| <math>[ {}^{\langle} \text{u} {}^{\rangle} ]_\text{B}\!</math>
 +
| width="20%" | &nbsp;
 +
| <math>[ {}^{\langle} \text{B} {}^{\rangle} ]_\text{B}\!</math>
 +
| <math>=\!</math>
 +
| <math>[ {}^{\langle} \text{i} {}^{\rangle} ]_\text{B}\!</math>
 +
|-
 +
| width="10%" | or
 +
| &nbsp;<math>{}^{\langle} \text{A} {}^{\rangle}~\!</math>
 +
| <math>=_\text{B}\!</math>
 +
| &nbsp;<math>{}^{\langle} \text{u} {}^{\rangle}~\!</math>
 +
| width="20%" | &nbsp;
 +
| &nbsp;<math>{}^{\langle} \text{B} {}^{\rangle}~\!</math>
 +
| <math>=_\text{B}\!</math>
 +
| &nbsp;<math>{}^{\langle} \text{i} {}^{\rangle}~\!</math>
 +
|}
 +
 
 +
These equations induce the following semiotic partition.
 +
 
 +
{| align="center" cellspacing="6" width="90%"
 +
|
 +
<math>
 +
\{
 +
\{ {}^{\langle} \text{A} {}^{\rangle}, {}^{\langle} \text{u} {}^{\rangle} \},
 +
\{ {}^{\langle} \text{B} {}^{\rangle}, {}^{\langle} \text{i} {}^{\rangle} \},
 +
\{ {}^{\langle\langle} \text{A} {}^{\rangle\rangle} \},
 +
\{ {}^{\langle\langle} \text{i} {}^{\rangle\rangle} \},
 +
\{ {}^{\langle\langle} \text{B} {}^{\rangle\rangle} \},
 +
\{ {}^{\langle\langle} \text{u} {}^{\rangle\rangle} \}
 +
\}.\!
 +
</math>
 +
|}
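A small Python sketch can recover a partition of this kind mechanically from its semiotic equations.  The union-find routine and the string rendering of signs are illustrative choices of the sketch, not part of the text.

<pre>
# A minimal sketch: derive the semiotic partition of Ref^1(A) on S from
# the semiotic equations <A> =_A <i> and <B> =_A <u> by merging classes.

S = ['<A>', '<B>', '<i>', '<u>', '<<A>>', '<<B>>', '<<i>>', '<<u>>']
equations_A = [('<A>', '<i>'), ('<B>', '<u>')]

parent = {s: s for s in S}

def find(s):                          # follow parent links to the class representative
    while parent[s] != s:
        s = parent[s]
    return s

for x, y in equations_A:              # merge the classes of each equated pair
    parent[find(x)] = find(y)

classes = {}
for s in S:
    classes.setdefault(find(s), set()).add(s)

# classes.values() yields six blocks: {<A>,<i>}, {<B>,<u>}, and the four
# singletons of second order signs, matching the partition for A above;
# swapping in the equations for B yields the partition just displayed.
</pre>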
 +
 
 +
Notice that the semiotic equivalences of nouns and pronouns for each interpreter do not extend to equivalences of their second-order signs, exactly as demanded by the literal character of quotations.  Moreover, the new sign relations for interpreters <math>\text{A}\!</math> and <math>\text{B}\!</math> coincide in their reflective parts, since exactly the same triples are added to each set.
 +
 
 +
There are many ways to extend sign relations in an effort to increase their reflective capacities.  The implicit goal of a reflective project is to achieve ''reflective closure'', <math>S \subseteq O,\!</math> where every sign is an object.
 +
 
 +
Considered as reflective extensions, the constructions of <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> are in no way unique, but their common pattern of development illustrates a typical approach toward reflective closure.  In a sense it epitomizes the project of ''free'', ''naive'', or ''uncritical'' reflection, since continuing this mode of production to its closure would generate an infinite sign relation, passing through infinitely many higher orders of signs, but without examining critically to what purpose the effort is directed or evaluating alternative constraints that might be imposed on the initial generators toward this end.
 +
 
 +
At first sight it seems as though the imposition of reflective closure has multiplied a finite sign relation into an infinite profusion of highly distracting and largely redundant signs, all by itself and all in one step.  But this explosion of orders happens only with the complicity of another requirement, that of deterministic interpretation.
 +
 
 +
There are two types of non-determinism, denotative and connotative, that can affect a sign relation.
 +
 
 +
<ol>
 +
<li>A sign relation <math>L\!</math> has a non-deterministic denotation if its dyadic component <math>{L_{SO}}\!</math> is not a function <math>L_{SO} : S \to O,\!</math> in other words, if there are signs in <math>S\!</math> with missing or multiple objects in <math>O.\!</math></li>
 +
 
 +
<li>A sign relation <math>L\!</math> has a non-deterministic connotation if its dyadic component <math>L_{SI}\!</math> is not a function <math>L_{SI} : S \to I,\!</math> in other words, if there are signs in <math>S\!</math> with missing or multiple interpretants in <math>I.\!</math>  As a rule, sign relations are rife with this variety of non-determinism, but it is usually felt to be under control so long as <math>L_{SI}\!</math> remains close to being an equivalence relation.</li>
 +
</ol>
 +
 
 +
Thus, it is really the denotative type of indeterminacy that is felt to be a problem in this context.
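The two kinds of non-determinism lend themselves to a direct check.  The following Python sketch tests only for multiple values; a full check of functionality would also verify that no sign lacks a value.  The relation used for the demonstration is the original sign relation of <math>\text{A},\!</math> and all names are illustrative.

<pre>
# A minimal sketch of the two determinism checks described above.

def single_valued(pairs):
    seen = {}
    for x, y in pairs:
        if seen.setdefault(x, y) != y:    # x is already sent somewhere else
            return False
    return True

def deterministic_denotation(L):
    return single_valued({(s, o) for (o, s, i) in L})

def deterministic_connotation(L):
    return single_valued({(s, i) for (o, s, i) in L})

L_A = {('A', s, i) for s in ('<A>', '<i>') for i in ('<A>', '<i>')} \
    | {('B', s, i) for s in ('<B>', '<u>') for i in ('<B>', '<u>')}

# deterministic_denotation(L_A) is True:  each sign denotes a single object.
# deterministic_connotation(L_A) is False: each sign has two interpretants.
</pre>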
 +
 
 +
The next two pairs of reflective extensions demonstrate that there are ways of achieving reflective closure that do not generate infinite sign relations.
 +
 
 +
As a flexible and fairly general strategy for describing reflective extensions, it is convenient to take the following tack.  Given a syntactic domain <math>S,\!</math> there is an independent formal language <math>F = F(S) = S \langle {}^{\langle\rangle} \rangle,\!</math> called the ''free quotational extension of <math>S,\!</math>'' that can be generated from <math>S\!</math> by embedding each of its signs to any depth of quotation marks.  Within <math>F,\!</math> the quoting operation can be regarded as a syntactic generator that is inherently free of constraining relations.  In other words, for every <math>s \in S,\!</math> the sequence <math>s, {}^{\langle} s {}^{\rangle}, {}^{\langle\langle} s {}^{\rangle\rangle}, \ldots\!</math> contains nothing but pairwise distinct elements in <math>F\!</math> no matter how far it is produced.  The set <math>F(s) = s \langle {}^{\langle\rangle} \rangle \subseteq F\!</math> that collects the elements of this sequence is called the ''subset of <math>F\!</math> generated from <math>s\!</math> by quotation''.
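For concreteness, here is a minimal Python sketch of the generator just described; the depth bound and the angle-bracket rendering of quotation are assumptions of the sketch.

<pre>
# A minimal sketch of the free quotational extension: iterating the quote
# operator on a sign yields pairwise distinct signs at every depth.

def quote(s):
    return '<' + s + '>'

def quotational_orbit(s, depth):
    orbit = [s]
    for _ in range(depth):
        orbit.append(quote(orbit[-1]))
    return orbit

F_A = quotational_orbit('<A>', 3)
# F_A == ['<A>', '<<A>>', '<<<A>>>', '<<<<A>>>>'] and all four are distinct,
# so the sequence can be produced to any depth without collapsing.
</pre>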
 +
 
 +
Against this background, other varieties of reflective extension can be specified by means of semantic equations that are considered to be imposed on the elements of <math>F.\!</math>  Taking the reflective extensions <math>\mathrm{Ref}^1 (\text{A})\!</math> and <math>\mathrm{Ref}^1 (\text{B})\!</math> as the first orders of a &ldquo;free&rdquo; project toward reflective closure, variant extensions can be described by relating their entries with those of comparable members in the standard sequences <math>\mathrm{Ref}^n (\text{A})\!</math> and <math>\mathrm{Ref}^n (\text{B}).\!</math>
 +
 
 +
A variant pair of reflective extensions, <math>\mathrm{Ref}^1 (\text{A} | E_1)\!</math> and <math>\mathrm{Ref}^1 (\text{B} | E_1),\!</math> is presented in Tables&nbsp;82 and 83, respectively.  These are identical to the corresponding free variants, <math>\mathrm{Ref}^1 (\text{A})~\!</math> and <math>\mathrm{Ref}^1 (\text{B}),~\!</math> with the exception of those entries that are constrained by the following system of semantic equations.
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{matrix}
 +
E_1 :
 +
&
 +
{}^{\langle\langle} \text{A} {}^{\rangle\rangle} = {}^{\langle} \text{A} {}^{\rangle},
 +
&
 +
{}^{\langle\langle} \text{B} {}^{\rangle\rangle} = {}^{\langle} \text{B} {}^{\rangle},
 +
&
 +
{}^{\langle\langle} \text{i} {}^{\rangle\rangle} = {}^{\langle} \text{i} {}^{\rangle},
 +
&
 +
{}^{\langle\langle} \text{u} {}^{\rangle\rangle} = {}^{\langle} \text{u} {}^{\rangle}.
 +
\end{matrix}</math>
 +
|}
 +
 
 +
This has the effect of making all levels of quotation equivalent.
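One way to see this effect is as a normalization of signs in the free extension: every quotation nested more than once collapses to the first order quotation.  The following Python sketch, under the same illustrative string rendering used in the earlier sketches, computes such a normal form.

<pre>
# A minimal sketch: normal forms of signs under the equations E_1,
# where every level of quotation above the first is identified with it.

def normalize_E1(sign):
    depth = 0
    while sign.startswith('<') and sign.endswith('>'):
        sign, depth = sign[1:-1], depth + 1
    return '<' + sign + '>' if depth >= 1 else sign

# normalize_E1('<<A>>') == '<A>'   (the constraint <<A>> = <A>)
# normalize_E1('<A>')   == '<A>'
# normalize_E1('A')     == 'A'     (unquoted signs are untouched)
</pre>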
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 82.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A} | E_1)\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 83.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B} | E_1)\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
Another pair of reflective extensions, <math>\mathrm{Ref}^1 (\text{A} | E_2)\!</math> and <math>\mathrm{Ref}^1 (\text{B} | E_2),\!</math> is presented in Tables&nbsp;84 and 85, respectively.  These are identical to the corresponding free variants, <math>\mathrm{Ref}^1 (\text{A})~\!</math> and <math>\mathrm{Ref}^1 (\text{B}),~\!</math> except for the entries constrained by the following semantic equations.
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
|
 +
<math>\begin{matrix}
 +
E_2 :
 +
&
 +
{}^{\langle\langle} \text{A} {}^{\rangle\rangle} = \text{A},
 +
&
 +
{}^{\langle\langle} \text{B} {}^{\rangle\rangle} = \text{B},
 +
&
 +
{}^{\langle\langle} \text{i} {}^{\rangle\rangle} = \text{i},
 +
&
 +
{}^{\langle\langle} \text{u} {}^{\rangle\rangle} = \text{u}.
 +
\end{matrix}</math>
 +
|}
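In contrast with <math>E_1,\!</math> these equations make quotation an operation of period two: quoting twice returns the original sign.  A minimal Python sketch of the corresponding normal form, under the same illustrative string rendering as in the earlier sketch for <math>E_1,\!</math> is the following.

<pre>
# A minimal sketch: normal forms of signs under the equations E_2,
# where only the parity of the quotation depth matters.

def normalize_E2(sign):
    depth = 0
    while sign.startswith('<') and sign.endswith('>'):
        sign, depth = sign[1:-1], depth + 1
    return '<' + sign + '>' if depth % 2 else sign

# normalize_E2('<<A>>')   == 'A'     (the constraint <<A>> = A)
# normalize_E2('<A>')     == '<A>'
# normalize_E2('<<<A>>>') == '<A>'   (odd depth reduces to a single quote)
</pre>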
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 84.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{A} | E_2)\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{B}
 +
\\
 +
\text{A}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{B}
 +
\\
 +
\text{A}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
{| align="center" border="1" cellpadding="8" cellspacing="0" style="text-align:center; width:80%"
 +
|+ style="height:30px" | <math>\text{Table 85.} ~~ \text{Reflective Extension} ~ \mathrm{Ref}^1 (\text{B} | E_2)\!</math>
 +
|- style="height:40px; background:#f0f0ff"
 +
| width="33%" | <math>\text{Object}\!</math>
 +
| width="33%" | <math>\text{Sign}\!</math>
 +
| width="33%" | <math>\text{Interpretant}\!</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\end{matrix}</math>
 +
|-
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
{}^{\langle} \text{A} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{B} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{i} {}^{\rangle}
 +
\\
 +
{}^{\langle} \text{u} {}^{\rangle}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
| valign="bottom" |
 +
<math>\begin{matrix}
 +
\text{A}
 +
\\
 +
\text{B}
 +
\\
 +
\text{B}
 +
\\
 +
\text{A}
 +
\end{matrix}</math>
 +
|}
 +
 
 +
<br>
 +
 
 +
By calling attention to their intended status as ''semantic'' equations, meaning that signs are being set equal in the semantic equivalence classes they inhabit or the objects they denote, I hope to emphasize that these equations are able to say something significant about objects.
 +
 
 +
'''Question.''' Redo <math>F(S)\!</math> over <math>W\!</math>? Use <math>W_F = O \cup F\!</math>?
 +
 
 +
===6.44. Reflections on Closure===
 +
 
 +
The previous section dealt with a formal operation that was dubbed ''reflection'' and found that it was closely associated with the device of ''quotation'' that makes it possible to treat signs as objects by making or finding other signs that refer to them.  Clearly, an ability to take signs as objects is one component of a cognitive capacity for reflection.  But a genuine and less superficial species of reflection can do more than grasp just the isolated signs and the separate interpretants of the thinking process as objects &mdash; it can pause the fleeting procession of signs upon signs and seize their generic patterns of transition as valid objects of discussion.  This involves the conception and composition of not just ''higher order'' signs but also ''higher type'' signs, orders of signs that aspire to catch whole sign relations up in one breath.
 +
 
 +
&hellip;
 +
 
 +
===6.45. Intelligence &rArr; Critical Reflection===
 +
 
 +
It is just at this point that the discussion of sign relations is forced to contemplate the prospects of intelligent interpretation.  For starters, I consider an intelligent interpreter to be one that can pursue alternative interpretations of a sign or text and pick one that makes sense.  If an interpreter can find all of the most sensible interpretations and order them according to a scale of meaningfulness, but without losing the time required to act on their import, then so much the better.
 +
 
 +
Intelligent interpreters are a centrally important species of intelligent agents in general, since hardly any intelligent action at all can be taken without the ability to interpret signs and texts, even if read only in the sense of &ldquo;the text of nature&rdquo;.  In other words, making sense of dubious signs is a central component of all sensible action.
 +
 
 +
Thus, I regard the determining trait of intelligent agency to be its response to non-deterministic situations.  Agents that find themselves at junctures of unavoidable uncertainty are required by objective features of the situation to gather together the available options and select among the multitude of possibilities a few choices that further their active purposes.
 +
 
 +
Reflection enables an interpreter to stand back from signs and view them as objects, that is, as objective possibilities for choice to be followed up in a critical and experimental fashion rather than pursued as automatic reactions whose habitual connections cannot be questioned.
 +
 
 +
The mark of an intelligent interpreter that is relevant in this context is the ability to face (encounter, countenance) a non-deterministic juncture of choices in a sign relation and to respond to it as such with actions appropriate to the uncertain nature of the situation.
 +
 
 +
'''[Variants]'''
 +
 
 +
An intelligent interpreter is one that can follow up several different interpretations at once, experimenting with the denotations and connotations that are available in a non-deterministic sign relation, &hellip;
 +
 
 +
An intelligent interpreter is one that can face a situation of non-deterministic choice and choose an interpretation (denotation or connotation) that fits the objective and syntactic context.
 +
 
 +
An intelligent interpreter is one that can deal with non-deterministic situations, that is, one that can follow up several lines of possible meaning for signs and read between the lines to pick out meanings that are sensitive to both the objective situation and the syntactic context of interpretation.
 +
 
 +
An intelligent interpreter is one that can reflect critically on the process of interpretation.  This involves a capacity for standing back from signs and interpretants and viewing them as objects, seeing their connections as objective possibilities for choice, to be compared with each other and tested against the objective and syntactic contexts, rather than taking the usual paths responding in a reflexive manner with the &hellip;
 +
 
 +
To do this it is necessary to interrupt the customary connections and favored associations of signs and interpretants in a sign relation and to consider a plurality of interpretations, not merely to pursue many lines of meaning in a parallel or experimental fashion, but to question seriously whether anything at all is meant by a sign.
 +
 
 +
&hellip; follow up alternatives in an experimental fashion, evaluate choices with a sensitivity to both the objective and syntactic contexts.
 +
 
 +
The mark of intelligence that is relevant to this context is the ability to comprehend a non deterministic situation of choice precisely as it is, &hellip;
 +
 
 +
If a species of determinism is nevertheless expected, then the extra measure of determination must be attributed to a worldly context of objects and signs extending beyond those taken into account by the sign relation in question, or else to powers of choice as yet unformalized in the character of interpreters.
 +
 
 +
This means that the recursions involved in the process of interpretation, besides having recourse to the inner resources of interpreters, will also recur to interfaces with objective situations and syntactic contexts.  Interpretation, to be intelligent, must have the capacity to address the full scope of objects and signs and must be given the room to operate interactively with everything up to and including the undetermined horizons of the external world.
 +
 
 +
===6.46. Looking Ahead===
 +
 
 +
On the whole throughout this project, the &ldquo;meta&rdquo; issue that has been raised here will be treated at three different levels of sophistication.
 +
 
 +
<ol>
 +
<li>The way I have chosen to deal with this issue in the present case is not by injecting more features of the informal discussion into the dialogue of <math>\text{A}\!</math> and <math>\text{B},\!</math> but by trying to imagine how agents like <math>\text{A}\!</math> and <math>\text{B}\!</math> might be enabled to reflect on these aspects of their own discussion.</li>
 +
 
 +
<li>
 +
<p>In the series of examples that I will use to develop further aspects of the <math>\text{A}\!</math> and <math>\text{B}\!</math> dialogue, several different ways of extending the sign relations for <math>\text{A}\!</math> and <math>\text{B}\!</math> will be explored.  The most pressing task is to capture facts of the following sort.</p>
 +
 
 +
{| align="center" cellspacing="8" width="90%"
 +
| <math>\text{A}\!</math> knows that <math>\text{B}\!</math> uses <math>{}^{\backprime\backprime} \text{i} {}^{\prime\prime}\!</math> to denote <math>\text{B}\!</math> and <math>{}^{\backprime\backprime} \text{u} {}^{\prime\prime}\!</math> to denote <math>\text{A}.\!</math>
 +
|-
 +
| <math>\text{B}\!</math> knows that <math>\text{A}\!</math> uses <math>{}^{\backprime\backprime} \text{i} {}^{\prime\prime}\!</math> to denote <math>\text{A}\!</math> and <math>{}^{\backprime\backprime} \text{u} {}^{\prime\prime}\!</math> to denote <math>\text{B}.\!</math>
 +
|}

<p>Toward this aim, I will present a variety of constructions for motivating ''extended'', ''indexed'', or ''situated'' sign relations, all designed to meet the following requirements.</p>

<ol style="list-style-type:lower-alpha">
<li>To incorporate higher components of &ldquo;meta-knowledge&rdquo; about language use as it works in a community of interpreters, in reality the most basic ingredients of pragmatic competence.</li>

<li>To amalgamate the fragmentary sign relations of individual interpreters into &ldquo;broader-minded&rdquo; sign relations, in the use and understanding of which a plurality of agents can share.</li>
</ol>

<p>Work at this level of concrete investigation will proceed in an incremental fashion, augmenting the discussion of A and B with features of increasing interest and relevance to inquiry.  The plan for this series of developments is as follows.</p>

<ol style="list-style-type:lower-alpha" start="3">
<li>I start by gathering materials and staking out intermediate goals for investigation.  This involves making a tentative foray into ways that dimensions of directed change and motivated value can be added to the sign relations initially given for <math>\text{A}\!</math> and <math>\text{B}.\!</math></li>

<li>With this preparation, I return to the dialogue of <math>\text{A}\!</math> and <math>\text{B}\!</math> and pursue ways of integrating their independent selections of information into a unified system of interpretation.

<ol style="list-style-type:lower-roman">
<li>First, I employ the sign relations <math>L_\text{A}\!</math> and <math>L_\text{B}\!</math> to illustrate two basic kinds of set theoretic merges, the ordinary or ''simple'' union and the indexed or ''situated'' union of extensional relations. On review, both forms of combination are observed to fall short of what is needed to constitute the desired characteristics of a shared sign relation.</li>

<li>Next, I present two other ways of extending the sign relations <math>L_\text{A}\!</math> and <math>L_\text{B}\!</math> into a common system of interpretation.  These extensions succeed in capturing further aspects of what interpreters know about their shared language use.  Although motivated on different grounds, the alternative constructions that develop coincide in exactly the same abstract structure.</li>
</ol>
</li>
</ol>
</li>

<li>As this project begins to take on sign relations that are complex enough to convey the impression of genuine inquiry processes, a fuller explication of this issue will become mandatory.  Eventually, this will demand a concept of ''higher-order sign relations'', whose objects, signs, and interpretants can all be complete sign relations in their own rights.</li>
</ol>

In principle, the successive grades of complexity enumerated above could be ascended in a straightforward way, if only the steps did not go straight up the cliffs of abstraction.  As always, the kinds of intentional objects that are the toughest to face are those whose realization is so distant that even the gear needed to approach their construction is not yet in existence.

===6.47. Mutually Intelligible Codes===
Before this complex of relationships can be formalized in much detail, I must introduce linguistic devices for generating "higher order signs", used to indicate other signs, and "situated signs", indexed by the names of their users, their contexts of use, and other types of information incidental to their usage in general.  This leads to the consideration of systems of interpretation (SOIs) that maintain recursive mechanisms for naming everything within their purview.  This "nominal generosity" gives them a new order of generative capacity, producing a sufficient number of distinctive signs to name all the objects and then name the names that are needed in a given discussion.
  −
  −
Symbolic systems for quoting inscriptions and ascribing quotations are associated in metamathematics with "Gödel numberings" of formal objects, enumerative functions that provide systematic but ostensibly arbitrary reference numbers for the signs and expressions in a formal language.  Assuming these signs and expressions denote anything at all, their formal enumerations become the "codes" of formal objects, just as programs taken literally are code names for certain mathematical objects known as computable functions.  Partial forms of specification notwithstanding, these codes are the only complete modes of representation that formal objects can have in the medium of mechanical activity.
  −
  −
In the dialogue of A and B there happens to be an exact coincidence between signs and states.  That is, the states of the interpretive systems A and B are not distinguished from the signs in S that are imagined to be mediating, moment by moment, the attentions of the interpretive agents A and B toward their respective objects in O.  So the question arises:  Is this identity bound to be a general property of all useful sign relations, or is it only a degenerate feature occurring by chance or unconscious design in the immediate example?
  −
  −
To move toward a resolution of this question I reason as follows.  In one direction, it seems obvious that a "sign in use" (SIU) by a particular interpreter constitutes a component of that agent's state.  In other words, the very notion of an identifiable SIU refers to numerous instances of a particular interpreter's state that share in the abstract property of being such instances, whether or not anyone can give a more concise or illuminating characterization of the concept under which these momentary states are gathered.  Conversely, it is at least conceivable that the whole state of a system, constituting its transitory response to the entirety of its environment, history, and goals, can be interpreted as a sign of something to someone.  In sum, there remains an outside chance of signs and states being precisely the same things, since nothing precludes the existence of an IF that could make it so.
  −
  −
Still, if the question about the distinction or coincidence between signs and states is restricted to the domains where existential realizations are conceivable, no matter whether in biological or computational media, then the prerequisites of the task become more severe, due to the narrower scope of materials that are admitted to answer them.  In focussing on this arena the problem is threefold:
  −
  −
1. The crucial point is not just whether it is possible to imagine an ideal SOI, an external perspective or an independent POV, for which all states are signs, but whether this is so for the prospective SOI of the very agent that passes through these states.
  −
  −
2. To what extent can the transient states and persistent conduct of each agent in a community of interpretation take on a moderately public and objective aspect in relation to the other participants?
  −
  −
3. How far in this respect, in the common regard for this species of outward demeanor, can each agent's behavior act as a sign of genuine objects in the eyes of other interpreters?
  −
  −
The  special task of a nuanced hermeneutic approach to computational interpretation is to realize the relativity of all formal codes to their formal coders, and to seek ways of facilitating mutual intelligibility among interpreters whose internal codes can be thoroughly private, synchronistically keyed to external events, and even a bit idiosyncratic.
  −
  −
Ultimately, working through this maze of "meta" questions, as posed on the tentative grounds of the present project, leads to a question about the "logical reference frames" or "metamathematical coordinate systems" that are supposed to distinguish "objective" from "symbolic" entities and are imagined to discriminate a range of gradations along their lines.  The question is whether any gauge of objectivity or scale of virtuality has invariant properties discoverable by all independent interpreters, or whether all is vanity and inane relativism, and everything concerning a subjective point of view is sheer caprice.
  −
  −
Thus, the problem of mutual intelligibility turns on the question of "common significance":  How can there be signs that are truly public, when the most natural signs that distinct agents can know, their own internal states, have no guarantee and very little likelihood of being related in systematically fathomable ways?  As a partial answer to this, I am willing to contemplate certain forms of pre established harmony, like the common evolution of a biological species or the shared culture of an interpretive community, but my experience has been that harmony, once established, quickly corrupts unless active means are available to maintain it.  So there still remains the task of identifying these means.  With or without the benefit of a prior consensus, or the assumption of an initial, but possibly fragile equilibrium, an explanation of robust harmony must detail the modes of maintaining communication that enable coordinated action to persist in the meanest of times.
  −
  −
The formal character of these questions, in the potential complexities that can be forced on contemplation in the pursuit of their answers, is independent of the species of interpreters that are chosen for the termini of comparison, whether person to person, person to computer, or computer to computer.  As always, the truth of this kind of thesis is formal, all too formal.  What it brings is a new refrain of an old motif:  Are there meaningful, if necessarily formal series of analogies that can be strung from the patterns of whizzing electrons and humming protons, whose controlled modes of collective excitation form and inform the conducts of computers, all the way to the rather different patterns of wizened electrons and humbled protons, whose deliberate energies of communal striving substantiate the forms of life known to be intelligible?
  −
  −
A full consideration of the geometries available for the spaces in which these levels of reflective abstraction are commonly imagined to reside leads to the conclusion that familiar distinctions of "top down" versus "bottom up" are being taken for granted in an arena that has not even been established to be orientable.  Thus, it needs to be recognized that the distinction between objects and signs is relative to a definite SOI.  The pragmatic theory of signs is designed, in part, precisely to deal with the circumstance that thoroughly objective states of systems can be signs of each other, undermining any pretended distinction between objects and signs that one might propose to draw on essential grounds.
  −
  −
From now on, I will reuse the ancient term "gnomon" in a technical sense to refer to the Gödel numbers or code names of formal objects.  In other words, a gnomon is a Gödel numbering or enumeration function that maps a domain of objects into a domain of signs, Gno : O → S.  When the syntactic domain S is contained within the object domain O, then the part of the gnomon that maps S into S, providing names for signs and expressions, is usually regarded as a "quoting function".
  −
  −
In the pluralistic contexts that go with pragmatic theories of signs, it is no longer entirely appropriate to refer to "the" gnomon of any object.  At any moment of discussion, I can only have so and so's gnomon or code word for each thing under the sun.  Thus, apparent references to a uniquely determined gnomon only make sense if taken as enthymemic invocations of the ordinary context and of all that is comprehended to be implied in it, promising to convert tacit common sense into definite articulations of what is understood.  Actually achieving this requires each elliptic reference to the gnomon to be explicitly grounded in the context of informal discussion, interpreted with respect to the conventional basis of understanding assumed in it, and relayed to the indexing function taken for granted by all parties to it.
  −
  −
In computational terms, this brand of pluralism means that neither the gnomon nor the quoting function that forms a part of it can be viewed as well defined unless it is indexed, explicitly or implicitly, by the name of a particular interpreter.  I will use the notation "Gno_i(x)" = "<x, i>" to indicate the gnomon of the object x with respect to the interpreter i.  The value Gno_i(x) = <x, i> ∈ S is the "nominal sign in use" or the "name in use" (NIU) of the object x with respect to the interpreter i, and thus it constitutes a component of i's state.

In the special case where x is a sign or expression in the syntactic domain, Gno_i(x) = <x, i> is tantamount to the quotation of x by and for the use of the ith interpreter, in short, the nominal sign to i that makes x an object for i.  For signs and expressions, it is usually only the quoting function that makes them objects.  But nothing is an object in any sense for an interpreter unless it is an object of a sign relation for that interpreter.  Therefore, ...
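To illustrate the indexing, here is a minimal Python sketch of interpreter-relative gnomons; the enumeration scheme, the sample domain, and the tagging of codes with the interpreter's name are all assumptions of the sketch, not commitments of the text.

<pre>
# A minimal sketch: a gnomon Gno_i : O -> S for each interpreter i,
# rendered as a private enumeration tagged with the interpreter's name.

def make_gnomon(i, objects):
    return {x: (i, n) for n, x in enumerate(objects)}

O = ['A', 'B', '"A"', '"B"', '"i"', '"u"']      # objects, including some signs
Gno_A = make_gnomon('A', O)
Gno_B = make_gnomon('B', reversed(O))           # a different, equally arbitrary order

# Gno_A['"A"'] == ('A', 2) is A's name in use for the sign "A", and
# Gno_A['"A"'] != Gno_B['"A"']: the two codes need not agree, which is
# exactly the problem of mutual intelligibility raised above.
</pre>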
  −
  −
If it is now asked what measure of invariant understanding can be enjoyed by diverse parties of interpretive agents, then the discussion has come upon an issue with a familiar echo in mathematical analysis.  The organization of many local coordinate frames into systems capable of supporting communicative references to relatively "objective" objects is usually handled by means of the concept of a "manifold".  Therefore, the analogous task that is suggested for this project is to arrive at a workable definition of "sign relational manifolds".
  −
  −
The discrete nature of the A and B dialogue renders moot the larger share of issues of interest in continuous and differentiable manifolds.  However, it is still possible to get things moving in this direction by looking at simple structural analogies that connect the pragmatic theory of sign relations with the basic notions of analysis on manifolds.
  −
</pre>
  −
  −
===6.48. Discourse Analysis : Ways and Means===
  −
  −
<pre>
  −
Before the discussion of the A and B dialogue can proceed to richer veins of semantic structure, it will be necessary to extract the relevant traces of embedded sign relations from their environments of informally interpreted syntax.
  −
  −
On the substantive front, sign relations serving as raw materials of discourse need to be refined and their content assayed, but first their identifying signatures must be sounded out, carved out, and lifted from their embroiling inclusions in the dense strata of obscure intuitions that sediment ordinary discussion.  On the instrumental front, sign relations serving as primitive tools of discourse analysis need to be identified and improved by a deliberate examination of their designs and purposes.
  −
  −
So far, the models and methods made available to formal treatment were borrowed outright, with little hesitation and less recognition, from the context of casual discussion.  Thus, these materials and mechanisms have come to the threshold of critical reflection already in play, devoid of concern for the presuppositions and consequences associated with their use, and only belatedly turned to the effortful work and odious formalities of self conscious exposition.
  −
  −
To reflect on the properties of complex and higher order sign relations with any degree of clarity it is necessary to arrange a clearer field of investigation and a less cluttered staging area for analytic work than is commonly provided.  Habitual processes of interpretation that typically operate as automatic routines and uncritical defaults in the informal context of discussion have to be selectively inhibited, slowed down, and critically examined as objective possibilities, instead of being taken for granted as absolute necessities.
  −
  −
In other words, an apparatus for critical reflection does not merely add more mirrors to the kaleidoscopic fun house of interpretive discourse, but it provides transient moments of equanimity, or balanced neutrality, and a moderately detached perspective on alternative points of view.  A scope so limited does not by any means grant a God's Eye View (GEV), but permits a sufficient quantity of light to consider how the original array of sights and reflections might have been created otherwise.
  −
  −
Ordinarily, the extra degree of attention to syntax that is needed for critical reflection on interpretive processes is called into play by means of syntactic operators and diacritical devices acting at the level of individual signs and elementary expressions.  For example, quotation marks are used to force one type of "semantic ascent", causing signs to be treated as objects and marking points of interpretive shift as they occur in the syntactic medium.  But these operators and devices must be symbolized, and these symbols must be interpreted.  Consequently, there is no way to avoid the invocation of a cohering interpretive framework, one that needs to be specialized for analytic purposes.
  −
  −
The best way to achieve the desired type of reflective capacity is by attaching a parameter to the IF used as an instrument of formal study, specifying certain choices or interpretive presumptions that affect the entire context of discussion.  The aesthetic distance needed to arrive at a formal perspective on sign relations is maintained, not by jury rigging ordinary discussion with locally effective syntactic devices, but by asking the reader to consider certain dimensions of parametric variation in the global IFs used to comprehend the sign relations under study.
  −
  −
The interpretive parameter of paramount importance to this work is one that is critical to reflection.  It can be presented as a choice between two alternative conventions, affecting the way one reflexively regards each sign in a text:  (1) as a sign provoking interest only in passing, exchanged for the sake of a meaningful object it is always taken for granted to have, or (2) as a sign comprising an interest in and of itself, a state of a system or a modification of a medium that can signify an external value but does not necessarily denote anything else at all.  I will name these options for responding to signs according to the aspects of character that are most appreciated in their net effects, whether signs for the sake of objects, or signs for their own sake, respectively.
  −
  −
The first option I call the "object convention", recognizing it as the natural default of informal language use.  In the ordinary language context it is the automatic assumption that signs and expressions are intended to denote something external to themselves, and even though it is quite obvious to all interpreters that the medium is filled with the appearances of signs and not with the objects themselves, this fact passes for little more than transitory interest in the rush to cash out tokens for their indicated values.
  −
  −
The object convention, as appropriate to an introduction that needs to begin in the context of ordinary discussion, is the parametric choice that was left in force throughout the treatment of the A and B example.  Doing things this way is like trying to roller skate in a buffalo herd, that is, it attempts to formalize a fragment of discussion on a patchwork of local scales without interrupting the automatic routines and default assumptions that prevail on a global basis in the informal context.  Ultimately, one cannot avoid stumbling over the hoofprints ("...") of overly cited and opaquely enthymemic textual deposits.
  −
  −
The second option I call the "sign convention", observing it to be the treatment of choice in programming and formal language studies.  In the formal language context it is necessary to consider the possibility that not all signs and expressions are assured to denote or even connote much of anything at all.  This danger is amplified in computational frameworks where it resonates with a related theme, that not all programs are guaranteed to terminate normally with a definite result.  In order to deal with these eventualities, a more cautious approach to sign relations is demanded to cover the risk of generating nonsense, in other words, to guard against degenerate forms of sign relations that fail to serve any significant purpose in communication or inquiry.
  −
  −
Whenever a greater degree of care is required, it becomes necessary to replace the object convention with the sign convention, which presumes to take for granted only what can be obvious to all observers, namely, the phenomenal appearances and temporal occurrences of objectified states of systems.  To be sure, these modulations of media are still presented as signs, but only potentially as signs of other things.  It goes with the territory of the formal language context to constantly check the inveterate impulses of the literate mind, to reflect on its automatic reflex toward meaning, to inhibit its uncontrolled operation, and to pause long enough in the rush to judgment to question whether its constant presumption of a motive is itself innocent.
  −
  −
In order to deal with these issues of discourse analysis in an explicit way, it is necessary to have in place a technical notation for marking the very kinds of interpretive assumptions that normally go unmarked.  Thus, I will describe a set of devices for annotating certain kinds of interpretive contingencies, called the "discourse analysis frames" (DAFs) or the "global interpretive frames" (GIFs), that can be operative at any given moment in a particular context of discussion.
  −
  −
To mark a context of discussion where a particular set J of interpretive conventions is being maintained, I use labeled brackets of the following two forms:  "unitary", as "{J| ... |J}", or "divided", as "{J| ... | ... |J}".  The unitary form encloses a context of discussion by delimiting a range of text whose reading is subject to the interpretive constraints J.  The divided form specifies the objects, signs, and interpretive information in accord with which a species of discussion is generated.  Labeled brackets enclosing contexts can be nested in their scopes, with interpretive data on each outer envelope applying to every inclusion.  Labeled brackets arranging the "conversation pieces" or the "generators and relations" of a topic can lead to discussions that spill outside their frames, and thus are permitted to constitute overlapping contexts.
  −
  −
For the present, I will consider two types of interpretive parameters to be used as indices of labeled brackets.
  −
  −
1. Names of interpreters or other references to context can be used to indicate the provenance of the objects and signs that make up the assorted contents of brackets.  On occasion, I will use the first person singular pronoun to signify the immediate context of informal discussion, as in "{I| ... |I}", but more often than not this context goes unmarked.
  −
  −
2. Two other modifiers can be used to toggle between the options of the object convention, more common in casual or ordinary contexts, and the sign convention, more useful in formal or sign theoretic contexts.
  −
  −
a. The brackets "{o| ... |o}" mark a context of informal language use or ordinary discussion, where the object convention applies.  To specify the elements of a sign relation under these conditions, I use a form of presentation like the following:
  −
  −
{o|  A,  B  |||  "A", "B", "i", "u"  |o}.
  −
  −
Here, the names of objects are placed on the left side and the names of signs on the right side of the central divide, and the outer brackets stipulate that the object convention is in force throughout the discussion of a sign relation that is generated on these elements.
  −
  −
b. The brackets "{s| ... |s}" mark a context of formal language use or controlled discussion, where the sign convention applies.  To specify the elements of a sign relation in this case, I use a form like:
  −
  −
{s|  [A], [B]  |||  A,  B,  i,  u  |s}.
  −
  −
Again, expressions for objects are placed on the left and expressions of signs on the right, but formal language conventions are now invoked to let the alphabet letters and the lexical items of a formal vocabulary stand for themselves, and denotation brackets "[]" are placed around signs to indicate the corresponding objects, when they exist.
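The following fragment is a small, hypothetical sketch of how such labeled brackets might be carried as data, with the convention parameter, the object column, and the sign column stored explicitly.  The Frame class and its field names are assumptions made for illustration only, not notation drawn from the text.

from dataclasses import dataclass, field
from typing import Dict, FrozenSet, Optional

@dataclass
class Frame:
    convention: str                       # "o" for the object convention, "s" for the sign convention
    objects: FrozenSet[str]               # left side of the central divide
    signs: FrozenSet[str]                 # right side of the central divide
    denotation: Dict[str, str] = field(default_factory=dict)

    def bracket(self, sign: str) -> Optional[str]:
        """Denotation brackets [sign]: the corresponding object, when it exists."""
        return self.denotation.get(sign)

# {s| [A], [B] ||| A, B, i, u |s} rendered as data under the sign convention.
frame = Frame(
    convention="s",
    objects=frozenset({"[A]", "[B]"}),
    signs=frozenset({"A", "B", "i", "u"}),
    denotation={"A": "[A]", "B": "[B]"},  # "i" and "u" are left without fixed denotations
)

print(frame.bracket("A"))   # [A]
print(frame.bracket("i"))   # None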
  −
  −
When the information carried by labeled brackets becomes more involved and more extensive, a set of convenient abbreviations and suggestions for "pretty printing" can be followed.  When the bracket labels become too long to bother repeating, I will leave the last label blank or use ditto marks, as with "{a, b, c| ... |"}".  When it is necessary to break labeled brackets over several lines, multiple dividers "|" and ditto marks (") can be used to fill out corresponding columns, as in the following text.
  −
  −
{I, o| A ,  B
|||||| "A", "B", "i", "u"
|""""}
  −
  −
A notation for discourse analysis ought to find a crucial test of its usefulness in whether it can help to disclose structural properties of interpretive frameworks that would otherwise escape the attention they are due.  If the dimensions of interpretive choice that are represented by these devices are to serve a useful function, then ...
  −
  −
Although these devices for discourse analysis are bound to seem a bit ad hoc at this point, they have been designed with a sign relational bootstrap in mind, that is, with a view to being formalized and recognized as a species within the domain of sign relations itself, where this is the very domain that is laid out as their field of application.
  −
  −
One note of caution may help to prevent a common misunderstanding.  It is futile to imagine that any system of interpretive markers for discourse can become totally self sufficient, like the Worm Uroboros, determining all aspects of interpretation and eliminating all ambiguity.  The ultimate appeal of signs, and signs upon signs, is always to an intelligent interpreter, a reader who knows there are more interpretive choices to make than could ever be surrendered to signs, and whose free responsibility to appropriate interpretations cannot be abdicated to any text or abridged by any gloss on it, no matter how fit or finished.
  −
  −
In a sense, at least at first, nothing is being created that could not have been noticed without signs.  It is merely that actions are being articulated that were not articulated before, and hopefully in ways that make transient insights easier to remember and reuse on new occasions.  Instead, the requirement here is to devise a language, the marks of which can reflect the ambient light of observation on its own process.  It is not unusual to succeed at this in artificial environments crafted especially for the purpose, but to achieve the critical angle in vivo, in the living context of a natural language, takes more art.
  −
</pre>
  −
  −
===6.49. Combinations of Sign Relations===
  −
  −
<pre>
  −
At a point like this in the development of a formal subject matter, it is customary to introduce elements of a logical calculus that can be used to describe relevant aspects of the formal structures involved and to expedite reasoning about their manifold combinations and decompositions.  I will hold off from doing this for sign relations in any formal way at present.  Instead, I consider the informal requirements and the foreseeable ends that a suitable calculus for sign relations might be expected to meet, and I present as tentative alternatives a few different ways of proceeding to formalize these intentions.
  −
  −
The first order of business for the "comparative anatomy" and the "developmental biology" of sign relations is to undertake a pair of closely related tasks:  (1) to examine the structural articulation of highly complex sign relations in terms of the primitive constituents that are found available, and (2) to explain the functional genesis of formal (that is, reflectively considered and critically regarded) sign relations as they naturally arise within the informal context of representational and communicational activities.
  −
  −
Converting to a political metaphor, how does the "republic" constituted by a sign relation — the representational community of agents invested with a congeries of legislative, executive, and interpretive powers, employing a consensual body of conventional languages, encompassing a commonwealth of comprehensible meanings, diversely but flexibly manifested in the practical administration of abiding and shared representations — how does all of this first come into being?
  −
  −
... and their development from primitive/ rudimentary to highly structured ...
  −
  −
The grasp of the discussion between A and B that is represented in the separate sign relations given for them can best be described as fragmentary.  It fails to capture what everyone knows A and B would know about each other's language use.
  −
  −
How can the fragmentary system of interpretation (SOI) constituted by the juxtaposition of individual sign relations A and B be combined or developed into a new SOI that represents what agents like A and B are sure to know about each other's language use?  In order to make it clear that this is a non trivial question, and in the process to illustrate different ways of combining sign relations, I begin by considering a couple of obvious suggestions for their integration that immediate reflection will show to miss the mark.
  −
  −
The first thing to try is the set theoretic union of the sign relations.  This commonly leads to a "confused" or "confounded" combination of the component sign relations.  For example, the sign relation defined as C = A U B is shown in Table 83.  Interpreted as a transition digraph on the four points of the syntactic domain S = {"A", "B", "i", "u"}, the sign relation C specifies the following behavior for the conduct of its interpreter:
  −
  −
1. A_C, the part of C with object A, has a sling at each point of {"A", "i", "u"} and two way arcs on the pairs {"A", "i"} and {"A", "u"}.
  −
  −
2. B_C, the part of C with object B, has a sling at each point of {"B", "i", "u"} and two way arcs on the pairs {"B", "i"} and {"B", "u"}.
  −
  −
These sub-relations do not form equivalence relations on the relevant sets of signs.  If closed up under transitive compositions, then {"A", "i", "u"} are all equivalent in the presence of object A, but {"B", "i", "u"} are all equivalent in the presence of object B.  This may accurately represent certain types of political thinking, but it does not constitute the kind of sign relation that is wanted here.
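As a check on this description, here is a minimal sketch in Python of the simple union, with each sign relation rendered as a set of (object, sign, interpretant) triples.  The triples are the ones recoverable from Table 84 below; the variable names L_A, L_B, and C are assumptions of the sketch, not notation fixed by the text.

# Simple union of two sign relations, each a set of (object, sign, interpretant) triples.
L_A = {
    ("A", '"A"', '"A"'), ("A", '"A"', '"i"'), ("A", '"i"', '"A"'), ("A", '"i"', '"i"'),
    ("B", '"B"', '"B"'), ("B", '"B"', '"u"'), ("B", '"u"', '"B"'), ("B", '"u"', '"u"'),
}
L_B = {
    ("A", '"A"', '"A"'), ("A", '"A"', '"u"'), ("A", '"u"', '"A"'), ("A", '"u"', '"u"'),
    ("B", '"B"', '"B"'), ("B", '"B"', '"i"'), ("B", '"i"', '"B"'), ("B", '"i"', '"i"'),
}

C = L_A | L_B          # the confounded sign relation of Table 83
assert len(C) == 14    # the shared triples (A, "A", "A") and (B, "B", "B") are merged

# The (sign, interpretant) arcs at each object, read as a digraph on the signs.
for obj in ("A", "B"):
    print(obj, sorted((s, i) for (o, s, i) in C if o == obj))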
  −
  −
Reflecting on this disappointing experience with using simple unions to combine sign relations, it appears that some type of indexed union or categorical coproduct might be demanded.  Table 84 presents the results of taking the disjoint union D = A U B, with each element indexed by the name of its source sign relation, to constitute a new sign relation.
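By way of contrast, the next fragment sketches the disjoint union under the same illustrative assumptions as the previous one, tagging every coordinate of every triple with the name of its source relation so that the shared triples no longer collapse.  The helper names here are again purely illustrative.

from itertools import product

# Each interpreter's signs for each object, as recoverable from Table 84 below.
signs_A = {"A": {'"A"', '"i"'}, "B": {'"B"', '"u"'}}
signs_B = {"A": {'"A"', '"u"'}, "B": {'"B"', '"i"'}}

def sign_relation(signs):
    """All (object, sign, interpretant) triples whose sign and interpretant both name the object."""
    return {(o, s, i) for o, ss in signs.items() for s, i in product(ss, ss)}

def tag(triple, index):
    """Pair every coordinate of a triple with the index of its source relation."""
    return tuple((c, index) for c in triple)

L_A, L_B = sign_relation(signs_A), sign_relation(signs_B)

D = {tag(t, "A") for t in L_A} | {tag(t, "B") for t in L_B}

assert len(L_A | L_B) == 14   # simple union: Table 83 has 14 rows
assert len(D) == 16           # disjoint union: Table 84 keeps all 16 rows apart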
  −
  −
Table 83.  Confounded Sign Relation C

Object   Sign   Interpretant
----------------------------
  A      "A"      "A"
  A      "A"      "i"
  A      "A"      "u"
  A      "i"      "A"
  A      "i"      "i"
  A      "u"      "A"
  A      "u"      "u"
  B      "B"      "B"
  B      "B"      "i"
  B      "B"      "u"
  B      "i"      "B"
  B      "i"      "i"
  B      "u"      "B"
  B      "u"      "u"
  −
  −
Table 84.  Disjointed Sign Relation D

Object   Sign      Interpretant
-------------------------------
 A_A     "A"_A       "A"_A
 A_A     "A"_A       "i"_A
 A_A     "i"_A       "A"_A
 A_A     "i"_A       "i"_A
 A_B     "A"_B       "A"_B
 A_B     "A"_B       "u"_B
 A_B     "u"_B       "A"_B
 A_B     "u"_B       "u"_B
 B_A     "B"_A       "B"_A
 B_A     "B"_A       "u"_A
 B_A     "u"_A       "B"_A
 B_A     "u"_A       "u"_A
 B_B     "B"_B       "B"_B
 B_B     "B"_B       "i"_B
 B_B     "i"_B       "B"_B
 B_B     "i"_B       "i"_B
  −
</pre>
      
===6.50. Revisiting the Source===

</div>

----

[[Category:Artificial Intelligence]]