{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_3_Visible_Frame.jpg|500px]]
|}
This can be written inline as <math>{}^{\backprime\backprime} \texttt{(} ~ \texttt{(} ~ \texttt{)} ~ \texttt{)} = \quad {}^{\prime\prime}~\!</math> or set off in a text display:
{| align="center" cellspacing="10"
 
{| align="center" cellspacing="10"
| width="33%" | <math>\texttt{(} ~ \texttt{(} ~ \texttt{)} ~ \texttt{)}\!</math>
+
| width="33%" | <math>\texttt{(} ~ \texttt{(} ~ \texttt{)} ~ \texttt{)}~\!</math>
| width="34%" | <math>=\!</math>
+
| width="34%" | <math>=~\!</math>
 
| width="33%" | &nbsp;
 
| width="33%" | &nbsp;
 
|}
 
|}
Line 31: Line 31:     
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_4_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_5_Visible_Frame.jpg|500px]]
|}
It is easy to see the relationship between the parenthetical expressions of Peirce's logical graphs, which picture, in a somewhat clipped fashion, the ordered containments of their formal contents, and the associated dual graphs, which constitute the species of rooted trees to be described here.

In the case of our last example, a moment's contemplation of the following picture will lead us to see that we can get the corresponding parenthesis string by starting at the root of the tree, climbing up the left side of the tree until we reach the top, then climbing back down the right side of the tree until we return to the root, all the while reading off the symbols, in this case either <math>{}^{\backprime\backprime} \texttt{(} {}^{\prime\prime}~\!</math> or <math>{}^{\backprime\backprime} \texttt{)} {}^{\prime\prime},~\!</math> that we happen to encounter in our travels.
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_6_Visible_Frame.jpg|500px]]
|}
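As a minimal computational sketch, assuming rooted trees are represented as nested tuples (a tree is the tuple of its child subtrees, so the bare rooted node is the empty tuple), the reading-off traversal just described and its inverse might be rendered as follows:

<syntaxhighlight lang="python">
# Sketch only: a rooted tree is a tuple of its child subtrees,
# so the rooted node is () and the rooted edge is ((),).

def tree_to_parens(tree):
    """Read off the parenthesis string by the traversal described above:
    enter each child subtree with '(' and leave it with ')'."""
    return "".join("(" + tree_to_parens(child) + ")" for child in tree)

def parens_to_tree(text):
    """Inverse map: parse a balanced parenthesis string back into a rooted tree."""
    stack = [[]]
    for symbol in text:
        if symbol == "(":
            stack.append([])
        elif symbol == ")":
            child = tuple(stack.pop())
            stack[-1].append(child)
    return tuple(stack[0])

# The chain of two edges corresponds to the string "(())" discussed above.
assert tree_to_parens((((),),)) == "(())"
assert parens_to_tree("(())") == (((),),)
</syntaxhighlight>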
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_14_Banner.jpg|500px]]
|-
| [[File:Logical_Graph_Figure_15_Banner.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_11_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_12_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_13_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_7_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_8_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_9_Visible_Frame.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_10_Visible_Frame.jpg|500px]]
|}
The practical use of Peirce's categories is simply to organize our thoughts about what sorts of formal models are demanded by a material situation, for instance, a domain of phenomena from atoms to biology to culture.  To say that &ldquo;''k''-ness&rdquo; is involved in a phenomenon is simply to say that we need ''k''-adic relations to model it adequately, and that the phenomenon itself appears to demand nothing less.  Aside from this, Peirce's realization that ''k''-ness for ''k'' = 1, 2, 3 affords us a sufficient basis for all that we need to model is a formal fact that depends on a particular theorem in the logic of relatives.  If it weren't for that, there would hardly be any reason to single out three.
In order to discuss the various forms of iconicity that might be involved in the application of Peirce's logical graphs and their kind to the object domain of logic itself, we will need to bring out two or three ''categories of structured individuals'' (COSIs), depending on how one counts.  These are called the ''object domain'', the ''sign domain'', and the ''interpretant sign domain'', which may be written <math>{O, S, I},~\!</math> respectively, or <math>{X, Y, Z},~\!</math> respectively, depending on the style that fits the current frame of discussion.
For the time being, we will be considering systems where the sign domain and the interpretant domain are the same sets of entities, although, of course, their roles in a given ''[[sign relation]]'', say, <math>L \subseteq O \times S \times I</math> or <math>L \subseteq X \times Y \times Z,</math> remain as distinct as ever.  We may use the term ''semiotic domain'' for the common set of elements that constitute the signs and the interpretant signs in any setting where the sign domain and the interpretant domain are equal as sets.
To consider how a system of logical graphs, taken together as a semiotic domain, might bear an iconic relationship to a system of logical objects that make up our object domain, we will next need to consider what our logical objects are.
A popular answer, if by popular one means that both Peirce and Frege agreed on it, is to say that our ultimate logical objects are without loss of generality most conveniently referred to as Truth and Falsity.  If nothing else, it serves the end of beginning simply to go along with this thought for a while, and so we can start with an object domain that consists of just two ''objects'' or ''values'', to wit, <math>O = \mathbb{B} = \{ \mathrm{false}, \mathrm{true} \}.</math>
Given those two categories of structured individuals, namely, <math>O = \mathbb{B} = \{ \mathrm{false}, \mathrm{true} \}</math> and <math>S = \{ \text{logical graphs} \},~\!</math> the next task is to consider the brands of morphisms from <math>S~\!</math> to <math>O~\!</math> that we might reasonably have in mind when we speak of the ''arrows of interpretation''.
With the aim of embedding our consideration of logical graphs, as seems most fitting, within Peirce's theory of triadic sign relations, we have declared the first layers of our object, sign, and interpretant domains.  As we often do in formal studies, we've taken the sign and interpretant domains to be the same set, <math>S = I,~\!</math> calling it the ''semiotic domain'', or, as I see that I've done in some other notes, the ''syntactic domain''.
Truth and Falsity, the objects that we've so far declared, are recognizable as abstract objects, and like so many other hypostatic abstractions that we use they have their use in moderating between a veritable profusion of more concrete objects and more concrete signs, in ''factoring complexity'' as some people say, despite the fact that some complexities are irreducible in fact.
As agents of systems, whether that system is our own physiology or our own society, we move through what we commonly imagine to be a continuous manifold of states, but with distinctions being drawn in that space that are every bit as compelling to us, and often quite literally, as the difference between life and death.  So the relation of discretion to continuity is not one of those issues that we can take lightly, or simply dissolve by choosing a side and ignoring the other, as we may imagine in abstraction.  I'll try to get back to this point later, one in a long list of cautionary notes that experience tells me has to be attached to every tale of our pilgrimage, but for now we must get under way.
Returning to <math>\mathrm{En}</math> and <math>\mathrm{Ex},</math> the two most popular interpretations of logical graphs, ones that happen to be dual to each other in a certain sense, let's see how they fly as ''hermeneutic arrows'' from the syntactic domain <math>S~\!</math> to the object domain <math>O,~\!</math> at any rate, as their trajectories can be spied in the radar of what George Spencer Brown called the ''primary arithmetic''.
Taking <math>\mathrm{En}~\!</math> and <math>\mathrm{Ex}~\!</math> as arrows of the form <math>\mathrm{En}, \mathrm{Ex} : S \to O,</math> at the level of arithmetic taking <math>S = \{ \text{rooted trees} \}~\!</math> and <math>O = \{ \mathrm{falsity}, \mathrm{truth} \},~\!</math> it is possible to factor each arrow across the domain <math>S_0~\!</math> that consists of a single rooted node plus a single rooted edge, in other words, the domain of formal constants <math>S_0 = \{ \ominus, \vert \} = \{</math>[[File:Rooted Node Big.jpg|16px]],&nbsp;[[File:Rooted Edge Big.jpg|12px]]<math>\}.~\!</math>  This allows each arrow to be broken into a purely syntactic part <math>\mathrm{En}_\text{syn}, \mathrm{Ex}_\text{syn} : S \to S_0</math> and a purely semantic part <math>\mathrm{En}_\text{sem}, \mathrm{Ex}_\text{sem} : S_0 \to O.</math>
As things work out, the syntactic factors are formally the same, leaving our dualing interpretations to differ in their semantic components alone.  Specifically, we have the following mappings:
{| cellpadding="6"
| width="5%" | &nbsp;
| width="5%" | <math>\mathrm{En}_\text{sem} :</math>
| width="5%" align="center" | [[File:Rooted Node Big.jpg|16px]]
| width="5%" | <math>\mapsto</math>
| <math>\mathrm{false},</math>
|-
| &nbsp;
| &nbsp;
| align="center" | [[File:Rooted Edge Big.jpg|12px]]
| <math>\mapsto</math>
| <math>\mathrm{true}.</math>
|-
| &nbsp;
| <math>\mathrm{Ex}_\text{sem} :</math>
| align="center" | [[File:Rooted Node Big.jpg|16px]]
| <math>\mapsto</math>
| <math>\mathrm{true},</math>
|-
| &nbsp;
| &nbsp;
| align="center" | [[File:Rooted Edge Big.jpg|12px]]
| <math>\mapsto</math>
| <math>\mathrm{false}.</math>
|}
On the other side of the ledger, because the syntactic factors, <math>\mathrm{En}_\text{syn}</math> and <math>\mathrm{Ex}_\text{syn},</math> are indiscernible from each other, there is a syntactic contribution to the overall interpretation process that can be most readily investigated on purely formal grounds.  That will be the task to face when next we meet on these lists.
Cast into the form of a 3-adic sign relation, the situation before us can now be given the following shape:
The interpretation maps <math>\mathrm{En}, \mathrm{Ex} : Y \to X</math> are factored into (1) a common syntactic part and (2) a couple of distinct semantic parts:
{| align="center" cellpadding="10" width="90%"
|
<math>\begin{array}{ll}
1. &
\mathrm{En}_\text{syn} = \mathrm{Ex}_\text{syn} = \mathrm{E}_\text{syn} : Y \to Y_0
\\[10pt]
2. &
\mathrm{En}_\text{sem}, \mathrm{Ex}_\text{sem} : Y_0 \to X
\end{array}</math>
|}
The functional images of the syntactic reduction map <math>\mathrm{E}_\text{syn} : Y \to Y_0</math> are the two simplest signs or the most reduced pair of expressions, regarded as the rooted trees [[File:Rooted Node Big.jpg|16px]] and [[File:Rooted Edge Big.jpg|12px]], and these may be treated as the canonical representatives of their respective equivalence classes.
The more Peirce-sistent among you, on contemplating that last picture, will naturally ask, "What happened to the irreducible 3-adicity of sign relations in this portrayal of logical graphs?"
The answer is that the last bastion of 3-adic irreducibility resides precisely in the duality of the dual interpretations <math>\mathrm{En}_\text{sem}</math> and <math>\mathrm{Ex}_\text{sem}.</math>  To see this, consider the consequences of there being, contrary to all that we've assumed up to this point, some ultimately compelling reason to assert that the clean slate, the empty medium, the vacuum potential, whatever one wants to call it, inherently means one of the two values, Falsity or Truth, to the exclusion of the other.  This would issue in a conviction forthwith that the 3-adic sign relation involved in this case decomposes as a composition of a couple of functions, that is to say, reduces to a 2-adic relation.
The duality of interpretation for logical graphs tells us that the empty medium, the tabula rasa, what Peirce called the ''Sheet of Assertion'' (SA), is a genuine symbol, not to be found among the degenerate species of signs that make up icons and indices, nor, as the SA has no parts, can it number icons or indices among its parts.  What goes for the medium must go for all of the signs that it mediates.  Thus we have the kinds of signs that Peirce in one place called "pure symbols", naming a selection of signs for basic logical operators specifically among them.
:* Pure Symbols
:: [http://web.archive.org/web/20141124153003/http://stderr.org/pipermail/inquiry/2005-March/thread.html#2465 2005 &bull; March]
:: [http://web.archive.org/web/20120601160642/http://stderr.org/pipermail/inquiry/2005-April/thread.html#2517 2005 &bull; April]

:* Pure Symbols : Discussion
:: [http://web.archive.org/web/20141124153003/http://stderr.org/pipermail/inquiry/2005-March/thread.html#2466 2005 &bull; March]
:: [http://web.archive.org/web/20120601160642/http://stderr.org/pipermail/inquiry/2005-April/thread.html#2514 2005 &bull; April]
:: [http://web.archive.org/web/20120421003708/http://stderr.org/pipermail/inquiry/2005-May/thread.html#2654 2005 &bull; May]
And some will find an ethical principle in this freedom of interpretation.  The act of interpretation bears within it an inalienable degree of freedom.  In consequence of this truth, as far as the activity of interpretation goes, freedom and responsibility are the very same thing.  We cannot blame objects for what we say or what we think.  We cannot blame symbols for what we do.  We cannot escape our response ability.  We cannot escape our freedom.
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_14_Banner.jpg|500px]]
|-
| [[File:Logical_Graph_Figure_15_Banner.jpg|500px]]
|}
Let <math>S~\!</math> be the set of rooted trees and let <math>S_0~\!</math> be the 2-element subset of <math>S~\!</math> that consists of a rooted node and a rooted edge.
{| align="center" cellpadding="10" style="text-align:center"
| <math>S~\!</math>
| <math>=~\!</math>
| <math>\{ \text{rooted trees} \}~\!</math>
|-
| <math>S_0~\!</math>
| <math>=~\!</math>
| <math>\{ \ominus, \vert \} = \{</math>[[File:Rooted Node Big.jpg|16px]], [[File:Rooted Edge Big.jpg|12px]]<math>\}~\!</math>
|}
Simple intuition, or a simple inductive proof, assures us that any rooted tree can be reduced by way of the arithmetic initials either to a root node [[File:Rooted Node Big.jpg|16px]] or else to a rooted edge [[File:Rooted Edge Big.jpg|12px]]&nbsp;.
For example, consider the reduction that proceeds as follows:

{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_16.jpg|500px]]
|}
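A minimal sketch of how such reductions might be carried out mechanically, under the same nested-tuple assumption as before; it condenses the whole bottom-up evaluation into one recursion rather than mirroring the step-by-step proof shown in the figure:

<syntaxhighlight lang="python">
NODE, EDGE = (), ((),)   # the two arithmetic constants: rooted node, rooted edge

def reduce_tree(tree):
    """Reduce a rooted tree to NODE or EDGE by bottom-up evaluation:
    a child that reduces to the bare node contributes a single cross,
    a child that reduces to the edge is a double cross and cancels,
    and parallel crosses condense to one."""
    return EDGE if any(reduce_tree(child) == NODE for child in tree) else NODE

assert reduce_tree((((),),)) == NODE     # "(( ))" reduces to the blank root node
assert reduce_tree(((), ())) == EDGE     # "( )( )" reduces to "( )", the rooted edge
</syntaxhighlight>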
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_16.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_17.jpg|500px]]
|}
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_18.jpg|500px]]
|-
| [[File:Logical_Graph_Figure_19.jpg|500px]]
|}
Thus, if you find yourself in an argument with another interpreter who swears to the influence of some quality common to the object and the sign, one that really does affect his or her conduct in regard to the two of them, then that argument is almost certainly bound to be utterly futile.  I am sure we've all been there.
When I first became acquainted with the Entish and Extish hermenautics of logical graphs, back in the late great 1960s, I was struck in the spirit of those times by what I imagined to be their Zen and Zenoic sensibilities, the ''tao is silent'' wit of the Zen mind being the empty mind, that seems to go along with the <math>\mathrm{Ex}~\!</math> interpretation, and the way from ''the way that's marked is not the true way'' to ''the mark that's marked is not the remarkable mark'' and to ''the sign that's signed is not the significant sign'' of the <math>\mathrm{En}~\!</math> interpretation, reminding us that the sign is not the object, no matter how apt the image.  And later, when my discovery of the cactus graph extension of logical graphs led to the leimons of neural pools, where <math>\mathrm{En}~\!</math> says that truth is an active condition, while <math>\mathrm{Ex}~\!</math> says that sooth is a quiescent mind, all these themes got reinforced more still.
We hold these truths to be self-iconic, but they come in complementary couples, in consort to the flip-side of the tao.
A ''sort'' of signs is more formally known as an ''equivalence class'' (EC).  There are in general many sorts of sorts of signs that we might wish to consider in this inquiry, but let's begin with the sort of signs all of whose members denote the same object as their referent, a sort of signs to be henceforth referred to as a ''referential equivalence class'' (REC).
:* [http://web.archive.org/web/20150224133200/http://stderr.org/pipermail/inquiry/2005-October/thread.html#3104 Inquiry List &bull; Futures Of Logical Graphs]
:* [http://web.archive.org/web/20120206123011/http://stderr.org/pipermail/inquiry/2005-October/003113.html Inquiry List &bull; Futures Of Logical Graphs &bull; Note 5]
Toward the outset of this excursion, I mentioned the distinction between a ''pointwise-restricted iconic map'' or a ''pointedly rigid iconic map'' (PRIM) and a ''system-wide iconic map'' (SWIM).  The time has come to make use of that mention.
The object domain <math>O~\!</math> is the boolean domain <math>\mathbb{B} = \{ \mathrm{falsity}, \mathrm{truth} \},</math> the semiotic domain <math>S~\!</math> is any of the spaces isomorphic to the set of rooted trees, matched-up parentheses, or unlabeled alpha graphs, and we treat a couple of ''denotation maps'' <math>\mathrm{den}_\text{en}, \mathrm{den}_\text{ex} : S \to O.</math>
Either one of the denotation maps induces the same partition of <math>S~\!</math> into RECs, a partition whose structure is suggested by the following two sets of strings:
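The two sets of strings are not reproduced here, but the partition they suggest can be sampled computationally; the following sketch reuses the parens_to_tree and reduce_tree helpers from the earlier sketches and groups a few sample strings by their common referent:

<syntaxhighlight lang="python">
# Sketch only: group sample strings into referential equivalence classes.
samples = ["", "(())", "(()())", "()", "()()", "((()))", "(())()"]

classes = {}
for s in samples:
    value = EX_SEM[reduce_tree(parens_to_tree(s))]   # existential denotation
    classes.setdefault(value, []).append(s)

# classes[True]  == ['', '(())', '(()())']
# classes[False] == ['()', '()()', '((()))', '(())()']
# EN_SEM yields the same two classes, only with the labels True and False swapped.
</syntaxhighlight>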
In thinking about mappings between categories of structured individuals, we can take each mapping in two parts.  At the first level of analysis, there is the part that maps individuals to individuals.  At the second level of analysis, there is the part that maps the structural parts of each individual to the structural parts of the individual that forms its counterpart under the first part of the mapping in question.
The general scheme of things is suggested by the following Figure, where the mapping <math>f~\!</math> from COSI <math>U~\!</math> to COSI <math>V~\!</math> is analyzed in terms of a mapping <math>g~\!</math> that takes individuals to individuals, ignoring their inner structures, and a set of mappings <math>h_j,~\!</math> where <math>j~\!</math> ranges over the individuals of COSI <math>U,~\!</math> and where <math>h_j~\!</math> specifies just how the parts of <math>j~\!</math> map to the parts of <math>{g(j)},~\!</math> its counterpart under <math>{g}.~\!</math>
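The Figure itself is not reproduced here, but the two-level analysis can be put in a rough schematic sketch (the names and data layout below are assumptions made only for illustration):

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class COSIMap:
    """Two-level analysis of a mapping f between COSIs U and V."""
    g: dict   # individual j of U  ->  individual g(j) of V
    h: dict   # individual j of U  ->  {part of j : part of g(j)}

    def apply(self, j, part):
        """Send a pair (individual, part) on the U side to its counterpart on the V side."""
        return self.g[j], self.h[j][part]
</syntaxhighlight>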
Next time we'll apply this general scheme to the <math>\mathrm{En}~\!</math> and <math>\mathrm{Ex}~\!</math> interpretations of logical graphs, and see how it helps us to sort out the varieties of iconic mapping that are involved in that setting.
Corresponding to the Entitative and Existential interpretations of the primary arithmetic, there are two distinct mappings from the sign domain <math>S,~\!</math> containing the topological equivalents of bare and rooted trees, onto the object domain <math>O,~\!</math> containing the two objects whose conventional, ordinary, or meta-language names are ''falsity'' and ''truth'', respectively.
The next two Figures suggest how one might view the interpretation maps as mappings from a COSI <math>S~\!</math> to a COSI <math>O.~\!</math>  Here I have placed names of categories at the bottom, indices of individuals at the next level, and extended upward from there whatever structures the individuals may have.
Here is the Figure for the Entitative interpretation:
Note that the structure of a tree begins at its root, marked by an "O".  The objects in <math>O~\!</math> have no further structure to speak of, so there is nothing much happening in the object domain <math>O~\!</math> between the level of individuals and the level of structures.  In the sign domain <math>S~\!</math> the individuals are the parts of the partition into referential equivalence classes, each part of which contains a countable infinity of syntactic structures, rooted trees, or whatever form one views their structures taking.  The sense of the Figures is that the interpretation under consideration maps the individual on the left side of <math>S~\!</math> to the individual on the left side of <math>O~\!</math> and maps the individual on the right side of <math>S~\!</math> to the individual on the right side of <math>O.~\!</math>
An iconic mapping, which gets formalized in mathematical terms as a ''morphism'', is said to be a ''structure-preserving map''.  This does not mean that all of the structure of the source domain is preserved in the map ''images'' of the target domain, but only ''some'' of the structure, that is, specific types of relation that are defined among the elements of the source and target, respectively.
For example, let's start with the archetype of all morphisms, namely, a ''linear function'' or a ''linear mapping'' <math>f : X \to Y.</math>
To say that the function <math>f~\!</math> is ''linear'' is to say that we have already got in mind a couple of relations on <math>X~\!</math> and <math>Y~\!</math> that have forms roughly analogous to "addition tables", so let's signify their operation by means of the symbols <math>{}^{\backprime\backprime} \# {}^{\prime\prime}</math> for addition in <math>X~\!</math> and <math>{}^{\backprime\backprime} + {}^{\prime\prime}</math> for addition in <math>Y.~\!</math>

More exactly, the use of <math>{}^{\backprime\backprime} \# {}^{\prime\prime}</math> refers to a 3-adic relation <math>L_X \subseteq X \times X \times X</math> that licenses the formula <math>a ~\#~ b = c</math> just when <math>(a, b, c)~\!</math> is in <math>L_X~\!</math> and the use of <math>{}^{\backprime\backprime} + {}^{\prime\prime}</math> refers to a 3-adic relation <math>L_Y \subseteq Y \times Y \times Y</math> that licenses the formula <math>p + q = r~\!</math> just when <math>(p, q, r)~\!</math> is in <math>L_Y.~\!</math>

In this setting the mapping <math>f : X \to Y</math> is said to be ''linear'', and to ''preserve'' the structure of <math>L_X~\!</math> in the structure of <math>L_Y,~\!</math> if and only if <math>f(a ~\#~ b) = f(a) + f(b),</math> for all pairs <math>a, b~\!</math> in <math>X.~\!</math>  In other words, the function <math>f~\!</math> ''distributes'' over the two additions, from <math>\#</math> to <math>+,~\!</math> just as if <math>f~\!</math> were a form of multiplication, analogous to <math>m(a + b) = ma + mb.~\!</math>

Writing this more directly in terms of the 3-adic relations <math>L_X~\!</math> and <math>L_Y~\!</math> instead of via their operation symbols, we would say that <math>f : X \to Y</math> is linear with regard to <math>L_X~\!</math> and <math>L_Y~\!</math> if and only if <math>(a, b, c)~\!</math> being in the relation <math>L_X~\!</math> determines that its map image <math>(f(a), f(b), f(c))~\!</math> be in <math>L_Y.~\!</math>  To see this, observe that <math>(a, b, c)~\!</math> being in <math>L_X~\!</math> implies that <math>c = a ~\#~ b,</math> and <math>(f(a), f(b), f(c))~\!</math> being in <math>L_Y~\!</math> implies that <math>f(c) = f(a) + f(b),~\!</math> so we have that <math>f(a ~\#~ b) = f(c) = f(a) + f(b),</math> and the two notions are one.
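A small worked check of this condition, using a toy example chosen only for illustration: the mod 2 residue map from the integers modulo 4 to the integers modulo 2 is linear with respect to the two addition tables, and the triple-preservation condition can be verified exhaustively.

<syntaxhighlight lang="python">
# Sketch only: X = Z/4 with addition '#', Y = Z/2 with addition '+',
# and f(x) = x mod 2.  Check f(a # b) = f(a) + f(b) for all pairs,
# i.e. that (a, b, c) in L_X implies (f(a), f(b), f(c)) in L_Y.
X, Y = range(4), range(2)
f = lambda x: x % 2

L_X = {(a, b, (a + b) % 4) for a in X for b in X}
L_Y = {(p, q, (p + q) % 2) for p in Y for q in Y}

assert all((f(a), f(b), f(c)) in L_Y for (a, b, c) in L_X)
</syntaxhighlight>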
The idea of mappings that preserve 3-adic relations should ring a few bells here.
Once again into the breach between the interpretations <math>\mathrm{En}, \mathrm{Ex} : S \to O,</math> drawing but a single Figure in the sand and relying on the reader to recall:
{| align="center" cellpadding="10" width="90%"
|
<p><math>\mathrm{En}~\!</math> maps every tree on the left side of <math>S~\!</math> to the left side of <math>O.~\!</math></p>

<p><math>\mathrm{En}~\!</math> maps every tree on the right side of <math>S~\!</math> to the right side of <math>O.~\!</math></p>
|-
|
<p><math>\mathrm{Ex}~\!</math> maps every tree on the left side of <math>S~\!</math> to the right side of <math>O.~\!</math></p>

<p><math>\mathrm{Ex}~\!</math> maps every tree on the right side of <math>S~\!</math> to the left side of <math>O.~\!</math></p>
|}
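Put in terms of the earlier sketches (again purely as an illustration), the tabulated behavior says that the two interpretations disagree on every rooted tree:

<syntaxhighlight lang="python">
# Sketch only, reusing NODE, EDGE, EN_SEM, EX_SEM, reduce_tree from the sketches above.
def En(tree): return EN_SEM[reduce_tree(tree)]
def Ex(tree): return EX_SEM[reduce_tree(tree)]

for tree in [(), ((),), (((),),), ((), ()), ((((),),),)]:
    assert En(tree) == (not Ex(tree))
</syntaxhighlight>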
Those who wish to say that these logical signs are iconic of their logical objects must not only find some reason that logic itself singles out one interpretation over the other, but, even if they succeed in that, they must further make us believe that every sign for Truth is iconic of Truth, while every sign for Falsity is iconic of Falsity.
One of the questions that arises at this point, where we have a very small object domain <math>O = \{ \mathrm{falsity}, \mathrm{truth} \}</math> and a very large sign domain <math>S \cong \{ \text{rooted trees} \},</math> is the following:
:* Why do we have so many ways of saying the same thing?
The first order of business is to give the exact forms of the axioms that I use, devolving from Peirce's Logical Graphs via Spencer-Brown's ''Laws of Form'' (LOF).  In formal proofs, I will use a variation of the annotation scheme from LOF to mark each step of the proof according to which axiom, or ''initial'', is being invoked to justify the corresponding step of syntactic transformation, whether it applies to graphs or to strings.
The axioms are just four in number, and they come in a couple of flavors:  the ''arithmetic initials'' <math>I_1~\!</math> and <math>I_2,~\!</math> and the ''algebraic initials'' <math>J_1~\!</math> and <math>J_2.~\!</math>
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_20.jpg|500px]]
|-
| [[File:Logical_Graph_Figure_21.jpg|500px]]
|-
| [[File:Logical_Graph_Figure_22.jpg|500px]]
|-
| [[File:Logical_Graph_Figure_23.jpg|500px]]
|}
Notice that all of the axioms in this set have the form of equations.  This means that all of the inference steps they allow are reversible.  In the proof annotation scheme below, I will use a double bar <math>=~\!=~\!=~\!=~\!=</math> to mark this fact, but I may at times leave it to the reader to pick which direction is the one required for applying the indicated axiom.
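The reversibility can be given an operational gloss with a small sketch: an axiom of the form lhs = rhs licenses a rewrite in either direction at any position.  The example rule used below is the familiar condensation equation of the primary arithmetic, "( )( ) = ( )"; the exact forms and numbering of the initials are the ones given by the figures above.

<syntaxhighlight lang="python">
def apply_equation(text, lhs, rhs, forward=True):
    """Apply the equation lhs = rhs once, in either direction,
    at the first position where its active side occurs."""
    old, new = (lhs, rhs) if forward else (rhs, lhs)
    i = text.find(old)
    return text if i < 0 else text[:i] + new + text[i + len(old):]

# Condensation, "()() = ()", applied in both directions.
assert apply_equation("()()", "()()", "()") == "()"
assert apply_equation("()", "()()", "()", forward=False) == "()()"
</syntaxhighlight>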
==Frequently used theorems==
===C<sub>1</sub>. Double negation theorem===
The first theorem goes under the names of ''Consequence&nbsp;1'' <math>(C_1)~\!</math>, the ''double negation theorem'' (DNT), or ''Reflection''.
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_24.jpg|500px]]
|}

{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_25.jpg|500px]]
|}

{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_26.jpg|500px]]
|}
===C<sub>2</sub>. Generation theorem===
One theorem of frequent use goes under the nickname of the ''weed and seed theorem'' (WAST).  The proof is just an exercise in mathematical induction, once a suitable basis is laid down, and it will be left as an exercise for the reader.  What the WAST says is that a label can be freely distributed or freely erased anywhere in a subtree whose root is labeled with that label.  The second in our list of frequently used theorems is in fact the base case of this weed and seed theorem.  In LOF, it goes by the names of ''Consequence&nbsp;2'' <math>(C_2)~\!</math> or ''Generation''.
{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_27.jpg|500px]]
|}

{| align="center" cellpadding="10"
| [[File:Logical_Graph_Figure_28.jpg|500px]]
|}
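A rough sketch of the base-case move just described, on a nested-tuple syntax where an expression is a tuple whose items are either variable names or enclosed subexpressions; the formulation below is only an assumption for illustration (the exact initials and proof are the ones in the figures), and the deeper cases of the weed and seed theorem follow by the induction mentioned above.

<syntaxhighlight lang="python">
def sow(expr, x):
    """Generation: given x at the top level of expr, plant a copy of x
    at the top level of every directly enclosed subexpression."""
    assert x in expr, "the move applies only where x labels the root"
    return tuple(item + (x,) if isinstance(item, tuple) else item for item in expr)

def weed(expr, x):
    """Inverse move: given x at the top level of expr, erase the copies of x
    at the top level of every directly enclosed subexpression."""
    assert x in expr
    return tuple(tuple(i for i in item if i != x) if isinstance(item, tuple) else item
                 for item in expr)

# For example:  x (y)  =  x (x y),  an equation valid under either interpretation.
assert sow(("x", ("y",)), "x") == ("x", ("y", "x"))
assert weed(("x", ("y", "x")), "x") == ("x", ("y",))
</syntaxhighlight>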
What sorts of sign relation are implicated in this sign process?  For simplicity, let's answer for the existential interpretation.
In <math>\mathrm{Ex},</math> all four of the listed signs are expressions of Falsity, and, viewed within the special type of semiotic procedure that is being considered here, each sign interprets its predecessor in the sequence.  Thus we might begin by drawing up this Table:
===C<sub>3</sub>. Dominant form theorem===
The third of the frequently used theorems of service to this survey is one that Spencer-Brown annotates as ''Consequence&nbsp;3'' <math>(C_3)~\!</math> or ''Integration''.  A better mnemonic might be ''dominance and recession theorem'' (DART), but perhaps the brevity of ''dominant form theorem'' (DFT) is sufficient reminder of its double-edged role in proofs.
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_29.jpg|500px]]
+
| [[File:Logical_Graph_Figure_29.jpg|500px]]
 
|}
 
|}
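For a quick sanity check, the customary parenthetical form of <math>C_3\!</math> is <math>\texttt{(} \texttt{)} a = \texttt{(} \texttt{)}.\!</math>  Under the existential interpretation the empty enclosure denotes <math>\mathrm{false}\!</math> and juxtaposition denotes conjunction, so the dominant form absorbs whatever stands beside it.  The following sketch, an illustration rather than a transcription of the figure, confirms the equation by brute force.

<pre>
# A hedged check, assuming the parenthetical form of C3 is "( ) a = ( )"
# and the existential interpretation, where "( )" is false and
# juxtaposition is conjunction: the dominant form absorbs its neighbor.
empty = not True                     # value of "( )" under Ex: negation of a blank
lhs = lambda a: empty and a          # "( ) a"
rhs = lambda a: empty                # "( )"
assert all(lhs(a) == rhs(a) for a in (False, True))
</pre>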
   Line 881: Line 881:     
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_30.jpg|500px]]
+
| [[File:Logical_Graph_Figure_30.jpg|500px]]
 
|}
 
|}
   Line 901: Line 901:     
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_31.jpg|500px]]
+
| [[File:Logical_Graph_Figure_31.jpg|500px]]
 
|}
 
|}
   Line 907: Line 907:     
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_32.jpg|500px]]
+
| [[File:Logical_Graph_Figure_32.jpg|500px]]
 
|}
 
|}
   Line 931: Line 931:     
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_33.jpg|500px]]
+
| [[File:Logical_Graph_Figure_33.jpg|500px]]
 
|}
 
|}
   Line 937: Line 937:     
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_34.jpg|500px]]
+
| [[File:Logical_Graph_Figure_34.jpg|500px]]
 
|}
 
|}
   Line 1,173: Line 1,173:  
|}
 
|}
   −
* <math>\operatorname{En},</math> for which blank = false and cross = true, calls this "equivalence".
+
* <math>\mathrm{En},</math> for which blank = false and cross = true, calls this "equivalence".
* <math>\operatorname{Ex},</math> for which blank = true and cross = false, calls this "distinction".
+
* <math>\mathrm{Ex},</math> for which blank = true and cross = false, calls this "distinction".
    
The step of controlled reflection that we just took can be iterated just as far as we wish to take it, as suggested by the following set:
 
The step of controlled reflection that we just took can be iterated just as far as we wish to take it, as suggested by the following set:
Line 1,578: Line 1,578:  
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 00.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 00.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 01.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 01.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast A.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast A.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 02.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 02.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 03.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 03.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 04.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 04.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 05.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 05.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 06.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 06.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast D.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast D.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 07.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 07.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 08.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 08.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 09.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 09.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 10.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 10.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 11.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 11.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast B.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast B.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 12.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 12.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 13.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 13.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 14.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 14.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 15.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 15.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast C ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast C ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 16.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 16.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 17.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 17.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof Praeclarum Theorema CAST 18.jpg|500px]]
+
| [[File:Proof Praeclarum Theorema CAST 18.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- QED.jpg|500px]]
+
| [[File:Equational Inference Bar -- QED.jpg|500px]]
 
|}
 
|}
 
|}
 
|}
Line 1,660: Line 1,660:  
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
|-
 
|-
| [[Image:Praeclarum Theorema CAST 500 x 389 Animation.gif]]
+
| [[File:Praeclarum Theorema CAST 500 x 389 Animation.gif]]
 
|}
 
|}
 
|}
 
|}
Line 1,667: Line 1,667:     
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Praeclarum Theorema DNF.jpg|500px]]
+
| [[File:Praeclarum Theorema DNF.jpg|500px]]
 
|}
 
|}
   −
Remembering that a blank node is the graphical equivalent of a logical value <math>\operatorname{true},</math> the resulting DNF may be read as follows:
+
Remembering that a blank node is the graphical equivalent of a logical value <math>\mathrm{true},</math> the resulting DNF may be read as follows:
    
{| align="center" cellpadding="10" style="text-align:center; width:60%"
 
{| align="center" cellpadding="10" style="text-align:center; width:60%"
Line 1,705: Line 1,705:     
{| align="center" cellpadding="10"
 
{| align="center" cellpadding="10"
| [[Image:Logical_Graph_Figure_33.jpg|500px]]
+
| [[File:Logical_Graph_Figure_33.jpg|500px]]
 
|}
 
|}
   Line 1,714: Line 1,714:  
|}
 
|}
   −
What we have here amounts to a couple of different styles of communicative conduct, that is, two sequences of signs of the form <math>e_1, e_2, \ldots, e_n,\!</math> each one beginning with a problematic expression and eventually ending with a clear expression of the ''logical equivalence class'' to which every sign or expression in the sequence belongs.  Ordinarily, any orbit through a locus of signs can be taken to reflect an underlying sign-process, a case of ''semiosis''.  So what we have here are two very special cases of semiosis, and what we may find it useful to contemplate is how to characterize them as two species of a very general class.
+
What we have here amounts to a couple of different styles of communicative conduct, that is, two sequences of signs of the form <math>e_1, e_2, \ldots, e_n,~\!</math> each one beginning with a problematic expression and eventually ending with a clear expression of the ''logical equivalence class'' to which every sign or expression in the sequence belongs.  Ordinarily, any orbit through a locus of signs can be taken to reflect an underlying sign-process, a case of ''semiosis''.  So what we have here are two very special cases of semiosis, and what we may find it useful to contemplate is how to characterize them as two species of a very general class.
    
We are starting to delve into some fairly picayune details of a particular sign system, non-trivial enough in its own right but still rather simple compared to the types of our ultimate interest, and though I believe that this exercise will be worth the effort in prospect of understanding more complicated sign systems, I feel that I ought to say a few words about the larger reasons for going through this work.
 
We are starting to delve into some fairly picayune details of a particular sign system, non-trivial enough in its own right but still rather simple compared to the types of our ultimate interest, and though I believe that this exercise will be worth the effort in prospect of understanding more complicated sign systems, I feel that I ought to say a few words about the larger reasons for going through this work.
Line 1,749: Line 1,749:  
|}
 
|}
   −
This can be read as <math>{}^{\backprime\backprime} \operatorname{not}~ p ~\operatorname{without}~ q ~\operatorname{and~not}~ q ~\operatorname{without}~ p {}^{\prime\prime},</math> in symbols, <math>(p \Rightarrow q) \land (q \Rightarrow p).</math>
+
This can be read as <math>{}^{\backprime\backprime} \mathrm{not}~ p ~\mathrm{without}~ q ~\mathrm{and~not}~ q ~\mathrm{without}~ p {}^{\prime\prime},</math> in symbols, <math>(p \Rightarrow q) \land (q \Rightarrow p).</math>
    
Graphing the topological dual form, one obtains the following rooted tree:
 
Graphing the topological dual form, one obtains the following rooted tree:
Line 1,796: Line 1,796:     
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph (P (Q)) (P (R)).jpg|500px]] || (26)
+
| [[File:Logical Graph (P (Q)) (P (R)).jpg|500px]] || (26)
 
|}
 
|}
   −
For the sake of simplicity in discussing this example, let's stick with the existential interpretation (<math>\operatorname{Ex}\!</math>) of logical graphs and their corresponding parse strings.  Under <math>\operatorname{Ex}\!</math> the formal expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}\!</math> translates into the vernacular expression <math>{}^{\backprime\backprime} p ~\operatorname{implies}~ q ~\operatorname{and}~ p ~\operatorname{implies}~ r {}^{\prime\prime},</math> in symbols, <math>(p \Rightarrow q) \land (p \Rightarrow r),</math> so this is the reading that we'll want to keep in mind for the present.  Where brevity is required, we may refer to the propositional expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}\!</math> under the name <math>f\!</math> by making use of the following definition:
+
For the sake of simplicity in discussing this example, let's stick with the existential interpretation (<math>\mathrm{Ex}~\!</math>) of logical graphs and their corresponding parse strings.  Under <math>\mathrm{Ex}~\!</math> the formal expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}~\!</math> translates into the vernacular expression <math>{}^{\backprime\backprime} p ~\mathrm{implies}~ q ~\mathrm{and}~ p ~\mathrm{implies}~ r {}^{\prime\prime},</math> in symbols, <math>(p \Rightarrow q) \land (p \Rightarrow r),</math> so this is the reading that we'll want to keep in mind for the present.  Where brevity is required, we may refer to the propositional expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}~\!</math> under the name <math>f~\!</math> by making use of the following definition:
    
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| <math>f ~=~ \texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}\!</math>
+
| <math>f ~=~ \texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}~\!</math>
 
|}
 
|}
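As a cross-check on this reading, the parse string itself can be evaluated.  The following sketch is an illustration added here, assuming only what the existential interpretation already provides:  juxtaposition is conjunction, an enclosure is negation, a blank is true, and single letters stand for variables.  It confirms that the string "(p(q))(p(r))" agrees with <math>(p \Rightarrow q) \land (p \Rightarrow r)\!</math> on all eight assignments of values to <math>p, q, r.\!</math>

<pre>
# A minimal sketch of an evaluator for the parse strings above, under the
# existential interpretation (Ex): juxtaposition = conjunction, an
# enclosure (...) = negation, a blank = true, letters = variables.
from itertools import product

def ex_eval(text, env):
    """Evaluate a parenthesized expression with variable values from env."""
    pos = 0

    def parse_seq():
        nonlocal pos
        value = True                      # an empty (blank) sequence is true
        while pos < len(text) and text[pos] != ')':
            ch = text[pos]
            if ch == '(':
                pos += 1                  # consume '('
                inner = parse_seq()
                pos += 1                  # consume ')'
                value = value and not inner
            elif ch.isalpha():
                value = value and env[ch]
                pos += 1
            else:
                pos += 1                  # skip whitespace
        return value

    return parse_seq()

f_string = "(p(q))(p(r))"
f_formula = lambda p, q, r: (not p or q) and (not p or r)   # (p => q) and (p => r)

assert all(
    ex_eval(f_string, dict(zip("pqr", bits))) == f_formula(*bits)
    for bits in product([False, True], repeat=3)
)
print(f_string, "matches (p => q) and (p => r) on all eight assignments.")
</pre>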
   −
Since the expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}\!</math> involves just three variables, it may be worth the trouble to draw a venn diagram of the situation.  There are in fact two different ways to execute the picture.
+
Since the expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}~\!</math> involves just three variables, it may be worth the trouble to draw a venn diagram of the situation.  There are in fact two different ways to execute the picture.
   −
Figure&nbsp;27 indicates the points of the universe of discourse <math>X\!</math> for which the proposition <math>f : X \to \mathbb{B}</math> has the value 1, here interpreted as the logical value <math>\operatorname{true}.</math>  In this ''paint by numbers'' style of picture, one simply paints over the cells of a generic template for the universe <math>X,\!</math> going according to some previously adopted convention, for instance:  Let the cells that get the value 0 under <math>f\!</math> remain untinted and let the cells that get the value 1 under <math>f\!</math> be painted or shaded.  In doing this, it may be good to remind ourselves that the value of the picture as a whole is not in the ''paints'', in other words, the <math>0, 1\!</math> in <math>\mathbb{B},</math> but in the pattern of regions that they indicate.
+
Figure&nbsp;27 indicates the points of the universe of discourse <math>X~\!</math> for which the proposition <math>f : X \to \mathbb{B}</math> has the value 1, here interpreted as the logical value <math>\mathrm{true}.</math>  In this ''paint by numbers'' style of picture, one simply paints over the cells of a generic template for the universe <math>X,~\!</math> going according to some previously adopted convention, for instance:  Let the cells that get the value 0 under <math>f~\!</math> remain untinted and let the cells that get the value 1 under <math>f~\!</math> be painted or shaded.  In doing this, it may be good to remind ourselves that the value of the picture as a whole is not in the ''paints'', in other words, the <math>0, 1~\!</math> in <math>\mathbb{B},</math> but in the pattern of regions that they indicate.
    
{| align="center" cellpadding="8" style="text-align:center"
 
{| align="center" cellpadding="8" style="text-align:center"
| [[Image:Venn Diagram (P (Q)) (P (R)).jpg|500px]] || (27)
+
| [[File:Venn Diagram (P (Q)) (P (R)).jpg|500px]] || (27)
 
|-
 
|-
 
| <math>\text{Venn Diagram for}~ \texttt{(} p \texttt{~(} q \texttt{))~(} p \texttt{~(} r \texttt{))}</math>
 
| <math>\text{Venn Diagram for}~ \texttt{(} p \texttt{~(} q \texttt{))~(} p \texttt{~(} r \texttt{))}</math>
 
|}
 
|}
   −
There are a number of standard ways in mathematics and statistics for talking about the subset <math>W\!</math> of the functional domain <math>X\!</math> that gets painted with the value <math>z \in \mathbb{B}</math> by the indicator function <math>f : X \to \mathbb{B}.</math>  The region <math>W \subseteq X</math> is called by a variety of names in different settings, for example, the ''antecedent'', the ''fiber'', the ''inverse image'', the ''level set'', or the ''pre-image'' in <math>X\!</math> of <math>z\!</math> under <math>f.\!</math>  It is notated and defined as <math>W = f^{-1}(z).\!</math>  Here, <math>f^{-1}\!</math> is called the ''converse relation'' or the ''inverse relation'' &mdash; it is not in general an inverse function &mdash; corresponding to the function <math>f.\!</math>  Whenever possible in simple examples, we use lower case letters for functions <math>f : X \to \mathbb{B},</math> and it is sometimes useful to employ capital letters for subsets of <math>X,\!</math> if possible, in such a way that <math>F\!</math> is the fiber of 1 under <math>f,\!</math> in other words, <math>F = f^{-1}(1).\!</math>
+
There are a number of standard ways in mathematics and statistics for talking about the subset <math>W~\!</math> of the functional domain <math>X~\!</math> that gets painted with the value <math>z \in \mathbb{B}</math> by the indicator function <math>f : X \to \mathbb{B}.</math>  The region <math>W \subseteq X</math> is called by a variety of names in different settings, for example, the ''antecedent'', the ''fiber'', the ''inverse image'', the ''level set'', or the ''pre-image'' in <math>X~\!</math> of <math>z~\!</math> under <math>f.~\!</math>  It is notated and defined as <math>W = f^{-1}(z).~\!</math>  Here, <math>f^{-1}~\!</math> is called the ''converse relation'' or the ''inverse relation'' &mdash; it is not in general an inverse function &mdash; corresponding to the function <math>f.~\!</math>  Whenever possible in simple examples, we use lower case letters for functions <math>f : X \to \mathbb{B},</math> and it is sometimes useful to employ capital letters for subsets of <math>X,~\!</math> if possible, in such a way that <math>F~\!</math> is the fiber of 1 under <math>f,~\!</math> in other words, <math>F = f^{-1}(1).~\!</math>
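To make the fiber concrete in the present example, one can simply enumerate the cells of the universe and keep those on which <math>f\!</math> takes the value 1.  The following sketch, added for illustration, encodes each cell by its coordinate triple of values for <math>p, q, r.\!</math>

<pre>
# A small illustration: enumerate the fiber F = f^(-1)(1) for the
# proposition f above, encoding each cell of the universe by its
# coordinate triple (p, q, r) of truth values.
from itertools import product

f = lambda p, q, r: (not (p and not q)) and (not (p and not r))   # (p(q))(p(r))

F = [cell for cell in product((0, 1), repeat=3) if f(*map(bool, cell))]
print(F)   # five shaded cells, all with p = 0 except (1, 1, 1)
</pre>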
   −
The easiest way to see the sense of the venn diagram is to notice that the expression <math>\texttt{(} p \texttt{(} q \texttt{))},</math> read as <math>p \Rightarrow q,</math> can also be read as <math>{}^{\backprime\backprime} \operatorname{not}~ p ~\operatorname{without}~ q {}^{\prime\prime}.</math>  Its assertion effectively excludes any tincture of truth from the region of <math>P\!</math> that lies outside the region <math>Q.\!</math>  In a similar manner, the expression <math>\texttt{(} p \texttt{(} r \texttt{))},</math> read as <math>p \Rightarrow r,</math> can also be read as <math>{}^{\backprime\backprime} \operatorname{not}~ p ~\operatorname{without}~ r {}^{\prime\prime}.</math>  Asserting it effectively excludes any tincture of truth from the region of <math>P\!</math> that lies outside the region <math>R.\!</math>
+
The easiest way to see the sense of the venn diagram is to notice that the expression <math>\texttt{(} p \texttt{(} q \texttt{))},</math> read as <math>p \Rightarrow q,</math> can also be read as <math>{}^{\backprime\backprime} \mathrm{not}~ p ~\mathrm{without}~ q {}^{\prime\prime}.</math>  Its assertion effectively excludes any tincture of truth from the region of <math>P~\!</math> that lies outside the region <math>Q.~\!</math>  In a similar manner, the expression <math>\texttt{(} p \texttt{(} r \texttt{))},</math> read as <math>p \Rightarrow r,</math> can also be read as <math>{}^{\backprime\backprime} \mathrm{not}~ p ~\mathrm{without}~ r {}^{\prime\prime}.</math>  Asserting it effectively excludes any tincture of truth from the region of <math>P~\!</math> that lies outside the region <math>R.~\!</math>
   −
Figure&nbsp;28 shows the other standard way of drawing a venn diagram for such a proposition.  In this ''punctured soap film'' style of picture &mdash; others may elect to give it the more dignified title of a ''logical quotient topology'' &mdash; one begins with Figure&nbsp;27 and then proceeds to collapse the fiber of 0 under <math>X\!</math> down to the point of vanishing utterly from the realm of active contemplation, arriving at the following picture:
+
Figure&nbsp;28 shows the other standard way of drawing a venn diagram for such a proposition.  In this ''punctured soap film'' style of picture &mdash; others may elect to give it the more dignified title of a ''logical quotient topology'' &mdash; one begins with Figure&nbsp;27 and then proceeds to collapse the fiber of 0 under <math>X~\!</math> down to the point of vanishing utterly from the realm of active contemplation, arriving at the following picture:
    
{| align="center" cellpadding="8" style="text-align:center"
 
{| align="center" cellpadding="8" style="text-align:center"
| [[Image:Venn Diagram (P (Q R)).jpg|500px]] || (28)
+
| [[File:Venn Diagram (P (Q R)).jpg|500px]] || (28)
 
|-
 
|-
 
| <math>\text{Venn Diagram for}~ \texttt{(} p \texttt{~(} q ~ r \texttt{))}</math>
 
| <math>\text{Venn Diagram for}~ \texttt{(} p \texttt{~(} q ~ r \texttt{))}</math>
 
|}
 
|}
   −
This diagram indicates that the region where <math>p\!</math> is true is wholly contained in the region where both <math>q\!</math> and <math>r\!</math> are true.  Since only the regions that are painted true in the previous figure show up at all in this one, it is no longer necessary to distinguish the fiber of 1 under <math>f\!</math> by means of any shading.
+
This diagram indicates that the region where <math>p~\!</math> is true is wholly contained in the region where both <math>q~\!</math> and <math>r~\!</math> are true.  Since only the regions that are painted true in the previous figure show up at all in this one, it is no longer necessary to distinguish the fiber of 1 under <math>f~\!</math> by means of any shading.
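The containment just described can also be checked mechanically.  The following sketch, added for illustration, verifies that every cell of the fiber of 1 on which <math>p\!</math> holds is a cell on which <math>q\!</math> and <math>r\!</math> both hold.

<pre>
# An illustrative check of the containment read off from the diagram:
# within the fiber f^(-1)(1), the region where p holds lies inside the
# region where both q and r hold.
from itertools import product

f = lambda p, q, r: (not (p and not q)) and (not (p and not r))
fiber = [cell for cell in product((False, True), repeat=3) if f(*cell)]

assert all(q and r for (p, q, r) in fiber if p)
</pre>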
    
In sum, it is immediately obvious from the venn diagram that in drawing a representation of the following propositional expression:
 
In sum, it is immediately obvious from the venn diagram that in drawing a representation of the following propositional expression:
Line 1,856: Line 1,856:     
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph (P (Q)) (P (R)) = (P (Q R)).jpg|500px]] || (29)
+
| [[File:Logical Graph (P (Q)) (P (R)) = (P (Q R)).jpg|500px]] || (29)
 
|}
 
|}
   Line 1,863: Line 1,863:  
While we go through each of these ways, let us keep one eye out for the character and the conduct of each type of proceeding as a semiotic process, that is, as an orbit, in this case discrete, through a locus of signs, in this case propositional expressions.  As it happens, each is a sequence of transformations that perseveres in the denotative objective of each expression, that is, in the abstract proposition that it expresses, while it preserves the informed constraint on the universe of discourse that gives us one viable candidate for the informational content of each expression in the interpretive chain of sign metamorphoses.
 
While we go through each of these ways, let us keep one eye out for the character and the conduct of each type of proceeding as a semiotic process, that is, as an orbit, in this case discrete, through a locus of signs, in this case propositional expressions.  As it happens, each is a sequence of transformations that perseveres in the denotative objective of each expression, that is, in the abstract proposition that it expresses, while it preserves the informed constraint on the universe of discourse that gives us one viable candidate for the informational content of each expression in the interpretive chain of sign metamorphoses.
   −
A ''sign relation'' <math>L\!</math> is a subset of a cartesian product <math>O \times S \times I,</math> where <math>{O, S, I}\!</math> are sets known as the ''object'', ''sign'', and ''interpretant sign'' domains, respectively.  These facts are symbolized by writing <math>L \subseteq O \times S \times I.</math>  Accordingly, a sign relation <math>L\!</math> consists of ordered triples of the form <math>(o, s, i),\!</math> where <math>o, s, i\!</math> belong to the domains <math>{O, S, I},\!</math> respectively.  An ordered triple of the form <math>(o, s, i) \in L</math> is referred to as a ''sign triple'' or an ''elementary sign relation''.
+
A ''sign relation'' <math>L~\!</math> is a subset of a cartesian product <math>O \times S \times I,</math> where <math>{O, S, I}~\!</math> are sets known as the ''object'', ''sign'', and ''interpretant sign'' domains, respectively.  These facts are symbolized by writing <math>L \subseteq O \times S \times I.</math>  Accordingly, a sign relation <math>L~\!</math> consists of ordered triples of the form <math>(o, s, i),~\!</math> where <math>o, s, i~\!</math> belong to the domains <math>{O, S, I},~\!</math> respectively.  An ordered triple of the form <math>(o, s, i) \in L</math> is referred to as a ''sign triple'' or an ''elementary sign relation''.
   −
The ''syntactic domain'' of a sign relation <math>L \subseteq O \times S \times I</math> is defined as the set-theoretic union <math>S \cup I</math> of its sign domain <math>S\!</math> and its interpretant domain <math>I.\!</math>  It is not uncommon, especially in formal examples, for the sign domain and the interpretant domain to be equal as sets, in short, to have <math>S = I.\!</math>
+
The ''syntactic domain'' of a sign relation <math>L \subseteq O \times S \times I</math> is defined as the set-theoretic union <math>S \cup I</math> of its sign domain <math>S~\!</math> and its interpretant domain <math>I.~\!</math>  It is not uncommon, especially in formal examples, for the sign domain and the interpretant domain to be equal as sets, in short, to have <math>S = I.~\!</math>
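As a data structure, a sign relation of this kind is nothing more than a set of triples.  The following sketch uses a purely hypothetical three-triple relation, made up for the occasion, to illustrate the three projections and the syntactic domain <math>S \cup I.\!</math>

<pre>
# A minimal sketch: a purely hypothetical sign relation L as a set of
# (object, sign, interpretant) triples, with its projections and its
# syntactic domain S ∪ I.  The particular triples are made up for
# illustration and are not drawn from the text.
L = {
    ("A", "a1", "a2"),
    ("A", "a2", "a1"),
    ("B", "b1", "b1"),
}

O = {o for (o, s, i) in L}       # objects occurring in L
S = {s for (o, s, i) in L}       # signs occurring in L
I = {i for (o, s, i) in L}       # interpretant signs occurring in L

syntactic_domain = S | I
print(sorted(syntactic_domain))  # ['a1', 'a2', 'b1'] -- here S = I as sets
</pre>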
    
Sign relations may contain any number of sign triples, finite or infinite.  Finite sign relations do arise in applications and can be very instructive as expository examples, but most of the sign relations of significance in logic have infinite sign and interpretant domains, and usually infinite object domains, in the long run, at least, though one frequently works up to infinite domains by a series of finite approximations and gradual stages.
 
Sign relations may contain any number of sign triples, finite or infinite.  Finite sign relations do arise in applications and can be very instructive as expository examples, but most of the sign relations of significance in logic have infinite sign and interpretant domains, and usually infinite object domains, in the long run, at least, though one frequently works up to infinite domains by a series of finite approximations and gradual stages.
   −
With that preamble behind us, let us turn to consider the case of semiosis, or sign transformation process, that is generated by our first proof of the propositional equation <math>E_1.\!</math>
+
With that preamble behind us, let us turn to consider the case of semiosis, or sign transformation process, that is generated by our first proof of the propositional equation <math>E_1.~\!</math>
    
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph (P (Q)) (P (R)) = (P (Q R)) Proof 1.jpg|500px]]
+
| [[File:Logical Graph (P (Q)) (P (R)) = (P (Q R)) Proof 1.jpg|500px]]
 
| (30)
 
| (30)
 
|}
 
|}
Line 1,888: Line 1,888:  
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-0.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-0.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-1.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-1.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast P.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast P.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-2.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-2.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-3.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-3.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-4.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-4.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast Q.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast Q.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-5.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-5.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-6.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-6.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-7.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-7.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast R.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast R.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-8.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-8.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-9.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-1-9.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- DNF.jpg|500px]]
+
| [[File:Equational Inference Bar -- DNF.jpg|500px]]
 
|}
 
|}
 
| (31)
 
| (31)
Line 1,932: Line 1,932:     
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph (P (Q)) (P (R)) DNF.jpg|500px]]
+
| [[File:Logical Graph (P (Q)) (P (R)) DNF.jpg|500px]]
 
| (32)
 
| (32)
 
|}
 
|}
   −
Remembering that a blank node is the graphical equivalent of a logical value <math>\operatorname{true},</math> the resulting DNF may be read as follows:
+
Remembering that a blank node is the graphical equivalent of a logical value <math>\mathrm{true},</math> the resulting DNF may be read as follows:
    
{| align="center" cellpadding="8" style="text-align:center; width:60%"
 
{| align="center" cellpadding="8" style="text-align:center; width:60%"
Line 1,960: Line 1,960:  
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-0.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-0.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-1.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-1.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast P.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast P.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-2 ISW.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-2 ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-3.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-3.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-4.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-4.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast Q.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast Q.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-5.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-5.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-6 ISW.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-6 ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-7.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-7.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast R.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast R.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-8.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-8.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-9.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 2-2-9.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- DNF.jpg|500px]]
+
| [[File:Equational Inference Bar -- DNF.jpg|500px]]
 
|}
 
|}
 
| (33)
 
| (33)
Line 2,004: Line 2,004:     
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph (P Q R , (P)).jpg|500px]] || (34)
+
| [[File:Logical Graph (P Q R , (P)).jpg|500px]] || (34)
 
|}
 
|}
   −
This can be read to say <math>{}^{\backprime\backprime} \operatorname{either}~ p q r ~\operatorname{or}~ \operatorname{not}~ p {}^{\prime\prime},</math> which gives us yet another equivalent for the expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}\!</math> and the expression <math>\texttt{(} p \texttt{(} q r \texttt{))}.</math>  Still another way of writing the same thing would be as follows:
+
This can be read to say <math>{}^{\backprime\backprime} \mathrm{either}~ p q r ~\mathrm{or}~ \mathrm{not}~ p {}^{\prime\prime},</math> which gives us yet another equivalent for the expression <math>{\texttt{(} p \texttt{(} q \texttt{))(} p \texttt{(} r \texttt{))}}~\!</math> and the expression <math>\texttt{(} p \texttt{(} q r \texttt{))}.</math>  Still another way of writing the same thing would be as follows:
    
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph ((P , P Q R)).jpg|500px]] || (35)
+
| [[File:Logical Graph ((P , P Q R)).jpg|500px]] || (35)
 
|}
 
|}
   −
In other words, <math>{}^{\backprime\backprime} p ~\operatorname{is~equivalent~to}~ p ~\operatorname{and}~ q ~\operatorname{and}~ r {}^{\prime\prime}.</math>
+
In other words, <math>{}^{\backprime\backprime} p ~\mathrm{is~equivalent~to}~ p ~\mathrm{and}~ q ~\mathrm{and}~ r {}^{\prime\prime}.</math>
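The two readings just given can be checked against the earlier forms by brute force.  The sketch below rests on one working assumption about the comma syntax, suggested by those readings:  under <math>\mathrm{Ex},\!</math> a lobe of the form <math>\texttt{(} x_1 \texttt{,} \ldots \texttt{,} x_k \texttt{)}\!</math> is true exactly when exactly one of its arguments is false.  On that assumption, expressions (34) and (35) both agree with <math>\texttt{(} p \texttt{(} q r \texttt{))}.\!</math>

<pre>
# A hedged check, assuming that under Ex a comma lobe (x_1, ..., x_k) is
# true exactly when exactly one of its arguments is false, which is what
# the readings of (34) and (35) above suggest.
from itertools import product

lobe = lambda *args: sum(1 for a in args if not a) == 1

expr34 = lambda p, q, r: lobe(p and q and r, not p)     # (p q r, (p))
expr35 = lambda p, q, r: not lobe(p, p and q and r)     # ((p , p q r))
target = lambda p, q, r: not (p and not (q and r))      # (p (q r))

assert all(expr34(*b) == expr35(*b) == target(*b)
           for b in product((False, True), repeat=3))
</pre>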
   −
One lemma that suggests itself at this point is a principle that may be canonized as the ''Emptiness Rule''.  It says that a bare lobe expression like <math>\texttt{( \_, \_, \ldots )},</math> with any number of places for arguments but nothing but blanks as filler, is logically tantamount to the proto-typical expression of its type, namely, the constant expression <math>\texttt{(~)}</math> that <math>\operatorname{Ex}\!</math> interprets as denoting the logical value <math>\operatorname{false}.</math>  To depict the rule in graphical form, we have the continuing sequence of equations:
+
One lemma that suggests itself at this point is a principle that may be canonized as the ''Emptiness Rule''.  It says that a bare lobe expression like <math>\texttt{( \_, \_, \ldots )},</math> with any number of places for arguments but nothing but blanks as filler, is logically tantamount to the proto-typical expression of its type, namely, the constant expression <math>\texttt{(~)}</math> that <math>\mathrm{Ex}~\!</math> interprets as denoting the logical value <math>\mathrm{false}.</math>  To depict the rule in graphical form, we have the continuing sequence of equations:
    
{| align="center" cellpadding="8" style="text-align:center; width:60%"
 
{| align="center" cellpadding="8" style="text-align:center; width:60%"
Line 2,114: Line 2,114:     
{| align="center" cellpadding="8"
 
{| align="center" cellpadding="8"
| [[Image:Logical Graph (( (P (Q)) (P (R)) , (P (Q R)) )).jpg|500px]]
+
| [[File:Logical Graph (( (P (Q)) (P (R)) , (P (Q R)) )).jpg|500px]]
 
| (39)
 
| (39)
 
|}
 
|}
Line 2,124: Line 2,124:  
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
{| align="center" cellpadding="0" cellspacing="0" style="border-left:1px solid black; border-top:1px solid black; border-right:1px solid black; border-bottom:1px solid black; text-align:center"
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-00.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-00.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-01.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-01.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast P.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast P.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-02.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-02.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-03.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-03.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-04.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-04.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Emptiness.jpg|500px]]
+
| [[File:Equational Inference Bar -- Emptiness.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-05.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-05.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-06.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-06.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast Q.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast Q.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-07.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-07.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-08.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-08.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Domination ISW.jpg|500px]]
+
| [[File:Equational Inference Bar -- Domination ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-09.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-09.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-10.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-10.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Spike.jpg|500px]]
+
| [[File:Equational Inference Bar -- Spike.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-11.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-11.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-12.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-12.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cast R.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cast R.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-13.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-13.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-14 ISW.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-14 ISW.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Emptiness.jpg|500px]]
+
| [[File:Equational Inference Bar -- Emptiness.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-15.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-15.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Spike.jpg|500px]]
+
| [[File:Equational Inference Bar -- Spike.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-16.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-16.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- Cancellation.jpg|500px]]
+
| [[File:Equational Inference Bar -- Cancellation.jpg|500px]]
 
|-
 
|-
| [[Image:Proof (P (Q)) (P (R)) = (P (Q R)) 3-17.jpg|500px]]
+
| [[File:Proof (P (Q)) (P (R)) = (P (Q R)) 3-17.jpg|500px]]
 
|-
 
|-
| [[Image:Equational Inference Bar -- QED.jpg|500px]]
+
| [[File:Equational Inference Bar -- QED.jpg|500px]]
 
|}
 
|}
 
| (40)
 
| (40)
Line 2,221: Line 2,221:  
: ''e''<sub>5</sub> = "(( (p (q))(p (r)) , (p (q r)) ))"
 
: ''e''<sub>5</sub> = "(( (p (q))(p (r)) , (p (q r)) ))"
   −
Under <math>\operatorname{Ex}\!</math> we have the following interpretations:
+
Under <math>\mathrm{Ex}~\!</math> we have the following interpretations:
    
: ''e''<sub>0</sub> expresses the logical constant "false"
 
: ''e''<sub>0</sub> expresses the logical constant "false"
Line 2,247: Line 2,247:  
Proof 2 lit on by burning the candle at both ends, changing ''e''<sub>2</sub> into a normal form that reduced to ''e''<sub>4</sub>, changing ''e''<sub>3</sub> into a normal form that reduced to ''e''<sub>4</sub>, in this way tethering ''e''<sub>2</sub> and ''e''<sub>3</sub> to a common point.  We got that (p (q))(p (r)) is equal to (p q r, (p)), then we got that (p (q r)) is equal to (p q r, (p)), so we got that (p (q))(p (r)) is equal to (p (q r)).
 
Proof 2 lit on by burning the candle at both ends, changing ''e''<sub>2</sub> into a normal form that reduced to ''e''<sub>4</sub>, changing ''e''<sub>3</sub> into a normal form that reduced to ''e''<sub>4</sub>, in this way tethering ''e''<sub>2</sub> and ''e''<sub>3</sub> to a common point.  We got that (p (q))(p (r)) is equal to (p q r, (p)), then we got that (p (q r)) is equal to (p q r, (p)), so we got that (p (q))(p (r)) is equal to (p (q r)).
   −
Proof 3 took the path of reflection, expressing the meta-equation between ''e''<sub>2</sub> and ''e''<sub>3</sub> via the object equation ''e''<sub>5</sub>, then taking ''e''<sub>5</sub> as ''s''<sub>1</sub> and exchanging it by dint of value preserving steps for ''e''<sub>1</sub> as ''s''<sub>''n''</sub>.  Thus we went from "(( (p (q))(p (r)) , (p (q r)) ))" to the blank expression that <math>\operatorname{Ex}\!</math> recognizes as true.
+
Proof 3 took the path of reflection, expressing the meta-equation between ''e''<sub>2</sub> and ''e''<sub>3</sub> via the object equation ''e''<sub>5</sub>, then taking ''e''<sub>5</sub> as ''s''<sub>1</sub> and exchanging it by dint of value preserving steps for ''e''<sub>1</sub> as ''s''<sub>''n''</sub>.  Thus we went from "(( (p (q))(p (r)) , (p (q r)) ))" to the blank expression that <math>\mathrm{Ex}~\!</math> recognizes as true.
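All three destinations can be confirmed by brute force over the eight assignments, keeping the same working assumption about the comma syntax noted earlier, namely that a lobe is true exactly when exactly one of its arguments is false.  The sketch below, added for illustration, checks that ''e''<sub>2</sub>, ''e''<sub>3</sub>, and ''e''<sub>4</sub> express one and the same proposition and that ''e''<sub>5</sub> comes out true under every assignment.

<pre>
# An illustrative recap of the three proof strategies, under the working
# assumption that a comma lobe (x_1, ..., x_k) is true exactly when one
# of its arguments is false.
from itertools import product

lobe = lambda *args: sum(1 for a in args if not a) == 1

e2 = lambda p, q, r: (not (p and not q)) and (not (p and not r))  # (p (q)) (p (r))
e3 = lambda p, q, r: not (p and not (q and r))                    # (p (q r))
e4 = lambda p, q, r: lobe(p and q and r, not p)                   # (p q r, (p))
e5 = lambda p, q, r: not lobe(e2(p, q, r), e3(p, q, r))           # (( e2 , e3 ))

for bits in product((False, True), repeat=3):
    assert e2(*bits) == e3(*bits) == e4(*bits)   # Proofs 1 and 2 end here
    assert e5(*bits)                             # Proof 3: the equation itself is true
</pre>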
    
I need to say something about the concept of ''reflection'' that I've been using according to my informal intuitions about it at numerous points in this discussion.  This is, of course, distinct from the use of the word "reflection" to license an application of the double negation theorem.
 
I need to say something about the concept of ''reflection'' that I've been using according to my informal intuitions about it at numerous points in this discussion.  This is, of course, distinct from the use of the word "reflection" to license an application of the double negation theorem.