Now if a sentence <math>s\!</math> really denotes a proposition <math>q,\!</math> and if the notation <math>^{\backprime\backprime} \downharpoonleft s \downharpoonright \, ^{\prime\prime}</math> is merely meant to supply another name for the proposition that <math>s\!</math> already denotes, then why is there any need for the additional notation?  It is because the interpretive mind habitually races from the sentence <math>s,\!</math> through the proposition <math>q\!</math> that it denotes, and on to the set <math>Q = q^{-1} (\underline{1})</math> that the proposition <math>q\!</math> indicates, often jumping to the conclusion that the set <math>Q\!</math> is the only thing that the sentence <math>s\!</math> is intended to denote.  This higher order sign situation and the mind's inclination when placed in its setting call for a linguistic mechanism or a notational device that is capable of analyzing the compound action and controlling its articulate performance, and this requires a way to interrupt the flow of assertion that typically takes place from <math>s\!</math> to <math>q\!</math> to <math>Q.\!</math>
====1.3.11.  The Cactus Patch====

{| align="center" cellpadding="0" cellspacing="0" width="90%"
|
<p>Thus, what looks to us like a sphere of scientific knowledge more accurately should be represented as the inside of a highly irregular and spiky object, like a pincushion or porcupine, with very sharp extensions in certain directions, and virtually no knowledge in immediately adjacent areas.  If our intellectual gaze could shift slightly, it would alter each quill's direction, and suddenly our entire reality would change.</p>
|-
| align="right" | &mdash; Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]
|}

In this and the four subsections that follow, I describe a calculus for representing propositions as sentences, in other words, as syntactically defined sequences of signs, and for manipulating these sentences chiefly in the light of their semantically defined contents, in other words, with respect to their logical values as propositions.  In their computational representation, the expressions of this calculus parse into a class of tree-like data structures called ''painted cacti''.  This is a family of graph-theoretic data structures that can be observed to have especially nice properties, turning out to be not only useful from a computational standpoint but also quite interesting from a theoretical point of view.  The rest of this subsection serves to motivate the development of this calculus and treats a number of general issues that surround the topic.

In order to facilitate the use of propositions as indicator functions it helps to acquire a flexible notation for referring to propositions in that light, for interpreting sentences in a corresponding role, and for negotiating the requirements of mutual sense between the two domains.  If none of the formalisms that are readily available or in common use are able to meet all of the design requirements that come to mind, then it is necessary to contemplate the design of a new language that is especially tailored to the purpose.  In the present application, there is a pressing need to devise a general calculus for composing propositions, computing their values on particular arguments, and inverting their indications to arrive at the sets of things in the universe that are indicated by them.

For computational purposes, it is convenient to have a middle ground or an intermediate language for negotiating between the ''koine'' of sentences regarded as strings of literal characters and the realm of propositions regarded as objects of logical value, even if this renders it necessary to introduce an artificial medium of exchange between these two domains.  If one envisions these computations to be carried out in any organized fashion, and ultimately or partially by means of the familiar sorts of machines, then the strings that express these logical propositions are likely to find themselves parsed into tree-like data structures at some stage of the game.  With regard to their abstract structures as graphs, there are several species of graph-theoretic data structures that can be used to accomplish this job in a reasonably effective and efficient way.

Over the course of this project, I plan to use two species of graphs:

# Painted And Rooted Cacti (PARCA).
# Painted And Rooted Conifers (PARCO).

For now, it is enough to discuss the former class of data structures, leaving the consideration of the latter class to a part of the project where their distinctive features are key to developments at that stage.  Accordingly, within the context of the current patch of discussion, or until it becomes necessary to attach further notice to the conceivable varieties of parse graphs, the acronym "PARC" is sufficient to indicate the pertinent genus of abstract graphs under consideration.

By way of making these tasks feasible to carry out on a regular basis, a prospective language designer is required not only to supply a fluent medium for the expression of propositions, but further to accompany the assertions of their sentences with a canonical mechanism for teasing out the fibers of their indicator functions.  Accordingly, with regard to a body of conceivable propositions, one needs to furnish a standard array of techniques for following the threads of their indications from their objective universe to their values for the mind and back again, that is, for tracing the clues that sentences provide from the universe of their objects to the signs of their values, and, in turn, from signs to objects.  Ultimately, one seeks to render propositions so functional as indicators of sets and so essential for examining the equality of sets that they can constitute a veritable criterion for the practical conceivability of sets.  Tackling this task requires me to introduce a number of new definitions and a collection of additional notational devices, to which I now turn.

Depending on whether a formal language is called by the type of sign that makes it up or whether it is named after the type of object that its signs are intended to denote, one may refer to this cactus language as a ''sentential calculus'' or as a ''propositional calculus'', respectively.
When the syntactic definition of the language is well enough understood, then the language can begin to acquire a semantic function.  In natural circumstances, the syntax and the semantics are likely to be engaged in a process of co-evolution, whether in ontogeny or in phylogeny, that is, the two developments probably form parallel sides of a single bootstrap.  But this is not always the easiest way, at least, at first, to formally comprehend the nature of their action or the power of their interaction.

According to the customary mode of formal reconstruction, the language is first presented in terms of its syntax, in other words, as a formal language of strings called ''sentences'', amounting to a particular subset of the possible strings that can be formed on a finite alphabet of signs. A syntactic definition of the ''cactus language'', one that proceeds along purely formal lines, is carried out in the next Subsection.  After that, the development of the language's more concrete aspects can be seen as a matter of defining two functions:

# The first is a function that takes each sentence of the language into a computational data structure, to be exact, a tree-like parse graph called a ''painted cactus''.
# The second is a function that takes each sentence of the language, or its interpolated parse graph, into a logical proposition, in effect, ending up with an indicator function as the object denoted by the sentence.
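By way of a preliminary illustration, the first of these two functions can be given a rough shape in advance.  The node type and parser below are hypothetical stand-ins of my own, not the official construction: they assume the four punctuation marks specified later in this Subsection (blank, "(", ",", ")"), with single letters standing in as paints.

```python
# Illustrative sketch: parsing a sentence into a tree-like "painted cactus".
# The data structure and parsing conventions here are assumptions for the
# sake of example, not the document's formal definitions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Cactus:
    paints: List[str] = field(default_factory=list)        # paints attached at this node
    lobes: List[List["Cactus"]] = field(default_factory=list)  # each "(...)", split at commas

def parse(sentence: str) -> Cactus:
    """Parse a string over {blank, '(', ',', ')'} plus paint letters."""
    pos = 0
    def node(closers: str) -> Cactus:
        nonlocal pos
        c = Cactus()
        while pos < len(sentence) and sentence[pos] not in closers:
            ch = sentence[pos]
            if ch == '(':                  # a lobe: comma-separated sub-cacti
                pos += 1
                lobe = [node(',)')]
                while sentence[pos] == ',':
                    pos += 1
                    lobe.append(node(',)'))
                pos += 1                   # consume the closing ')'
                c.lobes.append(lobe)
            elif ch == ' ':
                pos += 1                   # blanks act as insignificant spacing here
            else:
                c.paints.append(ch)        # any other sign is read as a paint
                pos += 1
        return c
    return node('')

cactus = parse('a (b, (c))')
```

A well-formed sentence thus yields one parse tree, which the second function would then evaluate as an indicator function.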

The discussion of syntax brings up a number of associated issues that have to be clarified before going on.  These are questions of ''style'', that is, the sort of description, ''grammar'', or theory that one finds available or chooses as preferable for a given language.  These issues are discussed in the Subsection after next (Subsection 1.3.11.2).

There is an aspect of syntax that is so schematic in its basic character that it can be conveyed by computational data structures, so algorithmic in its uses that it can be automated by routine mechanisms, and so fixed in its nature that its practical exploitation can be served by the usual devices of computation.  Because it involves the transformation of signs, it can be recognized as an aspect of semiotics.  Since it can be carried out in abstraction from meaning, it is not up to the level of semantics, much less a complete pragmatics, though it does incline to the pragmatic aspects of computation that are auxiliary to and incidental to the human use of language.  Therefore, I refer to this aspect of formal language use as the ''algorithmics'' or the ''mechanics'' of language processing.  A mechanical conversion of the cactus language into its associated data structures is discussed in Subsection 1.3.11.3.

In the usual way of proceeding on formal grounds, meaning is added by giving each grammatical sentence, or each syntactically distinguished string, an interpretation as a logically meaningful sentence, in effect, providing each abstractly well-formed sentence with a logical proposition for it to denote.  A semantic interpretation of the cactus language is carried out in Subsection 1.3.11.4.

=====1.3.11.1.  The Cactus Language : Syntax=====

{| align="center" cellpadding="0" cellspacing="0" width="90%"
|
<p>Picture two different configurations of such an irregular shape, superimposed on each other in space, like a double exposure photograph.  Of the two images, the only part which coincides is the body.  The two different sets of quills stick out into very different regions of space.  The objective reality we see from within the first position, seemingly so full and spherical, actually agrees with the shifted reality only in the body of common knowledge.  In every direction in which we look at all deeply, the realm of discovered scientific truth could be quite different.  Yet in each of those two different situations, we would have thought the world complete, firmly known, and rather round in its penetration of the space of possible knowledge.</p>
|-
| align="right" | &mdash; Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]
|}

In this Subsection, I describe the syntax of a family of formal languages that I intend to use as a sentential calculus, and thus to interpret for the purpose of reasoning about propositions and their logical relations.  In order to carry out the discussion, I need a way of referring to signs as if they were objects like any others, in other words, as the sorts of things that are subject to being named, indicated, described, discussed, and renamed if necessary, that can be placed, arranged, and rearranged within a suitable medium of expression, or else manipulated in the mind, that can be articulated and decomposed into their elementary signs, and that can be strung together in sequences to form complex signs.  Signs that have signs as their objects are called ''higher order signs'', and this is a topic that demands an apt formalization, but in due time.  The present discussion requires a quicker way to get into this subject, even if it takes informal means that cannot be made absolutely precise.

As a temporary notation, let the relationship between a particular sign <math>s\!</math> and a particular object <math>o\!</math>, namely, the fact that <math>s\!</math> denotes <math>o\!</math> or the fact that <math>o\!</math> is denoted by <math>s\!</math>, be symbolized in one of the following two ways:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lccc}
1. & s & \rightarrow & o \\
\\
2. & o & \leftarrow  & s \\
\end{array}</math>
|}

Now consider the following paradigm:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llccc}
1. &
\operatorname{If} &
^{\backprime\backprime}\operatorname{A}^{\prime\prime} &
\rightarrow &
\operatorname{Ann}, \\
&
\operatorname{that~is}, &
^{\backprime\backprime}\operatorname{A}^{\prime\prime} &
\operatorname{denotes} &
\operatorname{Ann}, \\
&
\operatorname{then} &
\operatorname{A} &
= &
\operatorname{Ann} \\
&
\operatorname{and} &
\operatorname{Ann} &
= &
\operatorname{A}. \\
&
\operatorname{Thus} &
^{\backprime\backprime}\operatorname{Ann}^{\prime\prime} &
\rightarrow &
\operatorname{A}, \\
&
\operatorname{that~is}, &
^{\backprime\backprime}\operatorname{Ann}^{\prime\prime} &
\operatorname{denotes} &
\operatorname{A}. \\
\end{array}</math>
|}

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llccc}
2. &
\operatorname{If} &
\operatorname{Bob} &
\leftarrow &
^{\backprime\backprime}\operatorname{B}^{\prime\prime}, \\
&
\operatorname{that~is}, &
\operatorname{Bob} &
\operatorname{is~denoted~by} &
^{\backprime\backprime}\operatorname{B}^{\prime\prime}, \\
&
\operatorname{then} &
\operatorname{Bob} &
= &
\operatorname{B} \\
&
\operatorname{and} &
\operatorname{B} &
= &
\operatorname{Bob}. \\
&
\operatorname{Thus} &
\operatorname{B} &
\leftarrow &
^{\backprime\backprime}\operatorname{Bob}^{\prime\prime}, \\
&
\operatorname{that~is}, &
\operatorname{B} &
\operatorname{is~denoted~by} &
^{\backprime\backprime}\operatorname{Bob}^{\prime\prime}. \\
\end{array}</math>
|}

When I say that the sign "blank" denotes the sign "&nbsp;", it means that the string of characters inside the first pair of quotation marks can be used as another name for the string of characters inside the second pair of quotes.  In other words, "blank" is a higher order sign whose object is "&nbsp;", and the string of five characters inside the first pair of quotation marks is a sign at a higher level of signification than the string of one character inside the second pair of quotation marks.  This relationship can be abbreviated in either one of the following ways:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
^{\backprime\backprime}\operatorname{~}^{\prime\prime} &
\leftarrow &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \\
\\
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} &
\rightarrow &
^{\backprime\backprime}\operatorname{~}^{\prime\prime} \\
\end{array}</math>
|}

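In passing, this denotative relation can be mimicked by an ordinary mapping from signs to the things they denote.  The dictionary below is a toy model of my own, not part of the formal development:

```python
# A toy model of denotation: each higher order sign is mapped to the sign
# or object it denotes.  The entries follow the examples in the text; the
# dictionary itself is only an illustration.

denotes = {
    'blank': ' ',   # the sign "blank" denotes the one-character sign " "
    'A': 'Ann',     # from the paradigm above: "A" denotes Ann
    'B': 'Bob',     # and "B" denotes Bob
}
```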
Using the raised dot "<math>\cdot</math>" as a sign to mark the articulation of a quoted string into a sequence of possibly shorter quoted strings, and thus to mark the concatenation of a sequence of quoted strings into a possibly larger quoted string, one can write:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lllll}
^{\backprime\backprime}\operatorname{~}^{\prime\prime}
& \leftarrow &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{b}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{l}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{a}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{n}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{k}^{\prime\prime} \\
\end{array}</math>
|}

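Rendered in a programming language, the raised-dot articulation corresponds to ordinary string concatenation.  The lines below are an illustrative translation of this identity and of the mixed articulations treated in the remainder of this passage; the variable name <code>blank</code> is a stand-in for the object that the name "blank" names:

```python
# The articulation "blank" = "b"·"l"·"a"·"n"·"k", rendered with ordinary
# string concatenation (an illustration, not part of the formal calculus).

blank = ' '                                    # the character named by "blank"

assert 'blank' == 'b' + 'l' + 'a' + 'n' + 'k'  # "b"·"l"·"a"·"n"·"k"
assert blank + blank == '  '                   # blank · blank
assert blank + 'blank' == ' blank'             # blank · "blank"
assert 'blank' + blank == 'blank '             # "blank" · blank
```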
This usage allows us to refer to the blank as a type of character, and also to refer to any blank we choose as a token of this type, referring to either of them in a marked way, but without the use of quotation marks, as I just did.  Now, since a blank is just what the name "blank" names, it is possible to represent the denotation of the sign "&nbsp;" by the name "blank" in the form of an identity between the named objects, thus:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
^{\backprime\backprime}\operatorname{~}^{\prime\prime} & = & \operatorname{blank} \\
\end{array}</math>
|}

With these kinds of identity in mind, it is possible to extend the use of the "<math>\cdot</math>" sign to mark the articulation of either named or quoted strings into both named and quoted strings.  For example:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lclcl}
^{\backprime\backprime}\operatorname{~~}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{~}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{~}^{\prime\prime}
& = &
\operatorname{blank} \, \cdot \, \operatorname{blank} \\
\\
^{\backprime\backprime}\operatorname{~blank}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{~}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{blank}^{\prime\prime}
& = &
\operatorname{blank} \, \cdot \,
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \\
\\
^{\backprime\backprime}\operatorname{blank~}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \, \cdot \,
^{\backprime\backprime}\operatorname{~}^{\prime\prime}
& = &
^{\backprime\backprime}\operatorname{blank}^{\prime\prime} \, \cdot \,
\operatorname{blank}
\end{array}</math>
|}

A few definitions from formal language theory are required at this point.

An ''alphabet'' is a finite set of signs, typically, <math>\mathfrak{A} = \{ \mathfrak{a}_1, \ldots, \mathfrak{a}_n \}.</math>

A ''string'' over an alphabet <math>\mathfrak{A}</math> is a finite sequence of signs from <math>\mathfrak{A}.</math>

The ''length'' of a string is just its length as a sequence of signs.

The ''empty string'' is the unique sequence of length 0.  It is sometimes denoted by an empty pair of quotation marks, <math>^{\backprime\backprime\prime\prime},</math> but more often by the Greek symbols epsilon or lambda.

A sequence of length <math>k > 0\!</math> is typically presented in the concatenated forms:

{| align="center" cellpadding="4" width="90%"
|
<math>s_1 s_2 \ldots s_{k-1} s_k\!</math>
|}

or

{| align="center" cellpadding="4" width="90%"
|
<math>s_1 \cdot s_2 \cdot \ldots \cdot s_{k-1} \cdot s_k</math>
|}

with <math>s_j \in \mathfrak{A}</math> for all <math>j = 1 \ldots k.</math>

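As a quick gloss on these definitions, strings over an alphabet behave like ordinary character strings.  The alphabet below is an arbitrary three-sign illustration:

```python
# Strings over an alphabet, glossed as ordinary character strings.
# The alphabet here is an arbitrary illustration.

alphabet = {'a', 'b', 'c'}

epsilon = ''                        # the empty string, the unique string of length 0
assert len(epsilon) == 0

s = 'abca'                          # s = s_1 · s_2 · s_3 · s_4
assert len(s) == 4                  # the length of s as a sequence of signs
assert all(s_j in alphabet for s_j in s)
```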
Two alternative notations are often useful:

{| align="center" cellpadding="4" style="text-align:center" width="90%"
|-
| <math>\varepsilon</math>
| =
| <math>^{\backprime\backprime\prime\prime}</math>
| =
| align="left" | the empty string.
|-
| <math>\underline\varepsilon</math>
| =
| <math>\{ \varepsilon \}</math>
| =
| align="left" | the language consisting of a single empty string.
|}

The ''Kleene star'' <math>\mathfrak{A}^*</math> of an alphabet <math>\mathfrak{A}</math> is the set of all strings over <math>\mathfrak{A}.</math>  In particular, <math>\mathfrak{A}^*</math> includes among its elements the empty string <math>\varepsilon.</math>

The ''Kleene plus'' <math>\mathfrak{A}^+</math> of an alphabet <math>\mathfrak{A}</math> is the set of all positive length strings over <math>\mathfrak{A},</math> in other words, everything in <math>\mathfrak{A}^*</math> but the empty string.

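A finite slice of the Kleene star can be enumerated directly.  The generator below is a sketch of my own; the alphabet and the length bound are arbitrary illustrations:

```python
# Enumerate every string over `alphabet` of length 0..max_len, a finite
# slice of the Kleene star A* (an illustration; A* itself is infinite).

from itertools import product

def kleene_star(alphabet, max_len):
    """Yield all strings over `alphabet` up to length max_len."""
    for k in range(max_len + 1):
        for signs in product(sorted(alphabet), repeat=k):
            yield ''.join(signs)

A = {'a', 'b'}
star = list(kleene_star(A, 2))
assert star == ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
assert '' in star                      # the empty string belongs to A*
plus = [s for s in star if s != '']    # A+ = A* minus the empty string
```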
A ''formal language'' <math>\mathfrak{L}</math> over an alphabet <math>\mathfrak{A}</math> is a subset of <math>\mathfrak{A}^*.</math>  In brief, <math>\mathfrak{L} \subseteq \mathfrak{A}^*.</math>  If <math>s\!</math> is a string over <math>\mathfrak{A}</math> and if <math>s\!</math> is an element of <math>\mathfrak{L},</math> then it is customary to call <math>s\!</math> a ''sentence'' of <math>\mathfrak{L}.</math>  Thus, a formal language <math>\mathfrak{L}</math> is defined by specifying its elements, which amounts to saying what it means to be a sentence of <math>\mathfrak{L}.</math>

One last device turns out to be useful in this connection.  If <math>s\!</math> is a string that ends with a sign <math>t,\!</math> then <math>s \cdot t^{-1}</math> is the string that results by ''deleting'' from <math>s\!</math> the terminal <math>t.\!</math>
In this context, I make the following distinction:

# To ''delete'' an appearance of a sign is to replace it with an appearance of the empty string "".
# To ''erase'' an appearance of a sign is to replace it with an appearance of the blank symbol "&nbsp;".

A ''token'' is a particular appearance of a sign.
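The deletion device and the delete/erase distinction translate directly into string operations.  The function name below is my own coinage for the sake of the sketch:

```python
# The deletion device s · t^(-1), plus the delete/erase distinction,
# rendered as string operations (illustrative only).

def delete_terminal(s, t):
    """Return s · t^(-1): the string s with its terminal sign t deleted."""
    assert s.endswith(t), "t must be the terminal sign of s"
    return s[:-len(t)]

assert delete_terminal('cactus', 's') == 'cactu'

# Deleting an appearance of a sign replaces it with the empty string "";
# erasing replaces it with the blank " ".
assert 'a(b)'.replace('(', '') == 'ab)'    # delete "("
assert 'a(b)'.replace('(', ' ') == 'a b)'  # erase "("
```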
The informal mechanisms that have been illustrated in the immediately preceding discussion are enough to equip the rest of this discussion with a moderately exact description of the so-called ''cactus language'' that I intend to use in both my conceptual and my computational representations of the minimal formal logical system that is variously known to sundry communities of interpretation as ''propositional logic'', ''sentential calculus'', or more inclusively, ''zeroth order logic'' (ZOL).

The ''painted cactus language'' <math>\mathfrak{C}</math> is actually a parameterized family of languages, consisting of one language <math>\mathfrak{C}(\mathfrak{P})</math> for each set <math>\mathfrak{P}</math> of ''paints''.

The alphabet <math>\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P}</math> is the disjoint union of two sets of symbols:

<ol style="list-style-type:decimal">
 +
 
 +
<li>
 +
<p><math>\mathfrak{M}</math> is the alphabet of ''measures'', the set of ''punctuation marks'', or the collection of ''syntactic constants'' that is common to all of the languages <math>\mathfrak{C}(\mathfrak{P}).</math>  This set of signs is given as follows:</p>
 +
 
 +
<p><math>\begin{array}{lccccccccccc}
 +
\mathfrak{M}
 +
& = &
 +
\{ &
 +
\mathfrak{m}_1 & , &
 +
\mathfrak{m}_2 & , &
 +
\mathfrak{m}_3 & , &
 +
\mathfrak{m}_4 &
 +
\} \\
 +
& = &
 +
\{ &
 +
^{\backprime\backprime} \, \operatorname{~} \, ^{\prime\prime} & , &
 +
^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} & , &
 +
^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} & , &
 +
^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} &
 +
\} \\
 +
& = &
 +
\{ &
 +
\operatorname{blank} & , &
 +
\operatorname{links} & , &
 +
\operatorname{comma} & , &
\operatorname{right} &
\} \\
\end{array}</math></p></li>

<li>
<p><math>\mathfrak{P}</math> is the ''palette'', the alphabet of ''paints'', or the collection of ''syntactic variables'' that is peculiar to the language <math>\mathfrak{C}(\mathfrak{P}).</math>  This set of signs is given as follows:</p>

<p><math>\mathfrak{P} = \{ \mathfrak{p}_j  :  j \in J \}.</math></p></li>

</ol>

The easiest way to define the language <math>\mathfrak{C}(\mathfrak{P})</math> is to indicate the general sorts of operations that suffice to construct the greater share of its sentences from the specified few of its sentences that require a special election.  In accord with this manner of proceeding, I introduce a family of operations on strings of <math>\mathfrak{A}^*</math> that are called ''syntactic connectives''.  If the strings on which they operate are exclusively sentences of <math>\mathfrak{C}(\mathfrak{P}),</math> then these operations are tantamount to ''sentential connectives'', and if the syntactic sentences, considered as abstract strings of meaningless signs, are given a semantics in which they denote propositions, considered as indicator functions over some universe, then these operations amount to ''propositional connectives''.

Rather than presenting the most concise description of these languages right from the beginning, it serves comprehension to develop a picture of their forms in gradual stages, starting from the most natural ways of viewing their elements, if somewhat at a distance, and working through the most easily grasped impressions of their structures, if not always the sharpest acquaintances with their details.

The first step is to define two sets of basic operations on strings of <math>\mathfrak{A}^*.</math>

<ol style="list-style-type:decimal">

<li>
<p>The ''concatenation'' of one string <math>s_1\!</math> is just the string <math>s_1.\!</math></p>

<p>The ''concatenation'' of two strings <math>s_1, s_2\!</math> is the string <math>s_1 \cdot s_2.\!</math></p>

<p>The ''concatenation'' of the <math>k\!</math> strings <math>(s_j)_{j = 1}^k</math> is the string of the form <math>s_1 \cdot \ldots \cdot s_k.\!</math></p></li>

<li>
<p>The ''surcatenation'' of one string <math>s_1\!</math> is the string <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p>

<p>The ''surcatenation'' of two strings <math>s_1, s_2\!</math> is <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p>

<p>The ''surcatenation'' of the <math>k\!</math> strings <math>(s_j)_{j = 1}^k</math> is the string of the form <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

</ol>
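The two operations lend themselves to a direct rendering in code.  The following Python sketch realizes the <math>k\!</math>-ary forms just defined; the function names <code>conc</code> and <code>surc</code> are mine, not part of the text, and the four measures are assumed to be the ASCII blank, parentheses, and comma.

```python
def conc(strings):
    """Concatenation: join the strings s_1, ..., s_k end to end.
    Applied to no strings at all it yields the empty string ""."""
    result = ""
    for s in strings:
        result = result + s          # one precatenation step per string
    return result

def surc(strings):
    """Surcatenation: "(" + s_1 + "," + ... + "," + s_k + ")".
    Applied to no strings at all it yields "()"."""
    return "(" + ",".join(strings) + ")"
```

Note that the empty cases anticipate the "prequels" <math>\operatorname{Conc}^0</math> and <math>\operatorname{Surc}^0</math> introduced further below.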
 
These definitions can be made a little more succinct by defining the following sorts of generic operators on strings:

<ol style="list-style-type:decimal">

<li>The ''concatenation'' <math>\operatorname{Conc}_{j=1}^k</math> of the sequence of <math>k\!</math> strings <math>(s_j)_{j=1}^k</math> is defined recursively as follows:</li>

<ol style="list-style-type:lower-alpha">

<li><math>\operatorname{Conc}_{j=1}^1 s_j \ = \ s_1.</math></li>

<li>
<p>For <math>\ell > 1,\!</math></p>

<p><math>\operatorname{Conc}_{j=1}^\ell s_j \ = \ \operatorname{Conc}_{j=1}^{\ell - 1} s_j \, \cdot \, s_\ell.</math></p></li>

</ol>

<li>The ''surcatenation'' <math>\operatorname{Surc}_{j=1}^k</math> of the sequence of <math>k\!</math> strings <math>(s_j)_{j=1}^k</math> is defined recursively as follows:</li>

<ol style="list-style-type:lower-alpha">

<li><math>\operatorname{Surc}_{j=1}^1 s_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></li>

<li>
<p>For <math>\ell > 1,\!</math></p>

<p><math>\operatorname{Surc}_{j=1}^\ell s_j \ = \ \operatorname{Surc}_{j=1}^{\ell - 1} s_j \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_\ell \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

</ol></ol>

The definitions of these syntactic operations can now be organized in a slightly better fashion by making a few additional conventions and auxiliary definitions.

<ol style="list-style-type:decimal">

<li>
<p>The conception of the <math>k\!</math>-place concatenation operation can be extended to include its natural ''prequel'':</p>

<p><math>\operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}</math> &nbsp;=&nbsp; the empty string.</p>

<p>Next, the construction of the <math>k\!</math>-place concatenation can be broken into stages by means of the following conceptions:</p></li>

<ol style="list-style-type:lower-alpha">

<li>
<p>The ''precatenation'' <math>\operatorname{Prec} (s_1, s_2)</math> of the two strings <math>s_1, s_2\!</math> is the string that is defined as follows:</p>

<p><math>\operatorname{Prec} (s_1, s_2) \ = \ s_1 \cdot s_2.</math></p></li>

<li>
<p>The ''concatenation'' of the sequence of <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> can now be defined as an iterated precatenation over the sequence of <math>k+1\!</math> strings that begins with the string <math>s_0 = \operatorname{Conc}^0 \, = \, ^{\backprime\backprime\prime\prime}</math> and then continues on through the other <math>k\!</math> strings:</p></li>

<ol style="list-style-type:lower-roman">

<li>
<p><math>\operatorname{Conc}_{j=0}^0 s_j \ = \ \operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}.</math></p></li>

<li>
<p>For <math>\ell > 0,\!</math></p>

<p><math>\operatorname{Conc}_{j=1}^\ell s_j \ = \ \operatorname{Prec}(\operatorname{Conc}_{j=0}^{\ell - 1} s_j, s_\ell).</math></p></li>

</ol></ol>

<li>
<p>The conception of the <math>k\!</math>-place surcatenation operation can be extended to include its natural ''prequel'':</p>

<p><math>\operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.</math></p>

<p>Finally, the construction of the <math>k\!</math>-place surcatenation can be broken into stages by means of the following conceptions:</p>

<ol style="list-style-type:lower-alpha">

<li>
<p>A ''subclause'' in <math>\mathfrak{A}^*</math> is a string that ends with a <math>^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

<li>
<p>The ''subcatenation'' <math>\operatorname{Subc} (s_1, s_2)</math> of a subclause <math>s_1\!</math> by a string <math>s_2\!</math> is the string that is defined as follows:</p>

<p><math>\operatorname{Subc} (s_1, s_2) \ = \ s_1 \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></p></li>

<li>
<p>The ''surcatenation'' of the <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> can now be defined as an iterated subcatenation over the sequence of <math>k+1\!</math> strings that starts with the string <math>s_0 \ = \ \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}</math> and then continues on through the other <math>k\!</math> strings:</p></li>

<ol style="list-style-type:lower-roman">

<li>
<p><math>\operatorname{Surc}_{j=0}^0 s_j \ = \ \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.</math></p></li>

<li>
<p>For <math>\ell > 0,\!</math></p>

<p><math>\operatorname{Surc}_{j=1}^\ell s_j \ = \ \operatorname{Subc}(\operatorname{Surc}_{j=0}^{\ell - 1} s_j, s_\ell).</math></p></li>

</ol></ol></ol>
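The subcatenation operation, with its formal deletion of a final right parenthesis, can also be sketched in Python.  The names <code>subc</code> and <code>surc</code> are mine; note that the sketch seeds the iteration with the one-string case <math>^{\backprime\backprime} \operatorname{(} ^{\prime\prime} \cdot s_1 \cdot \, ^{\backprime\backprime} \operatorname{)} ^{\prime\prime}</math> rather than with "()", since subcatenating directly onto "()" would insert an initial empty argument before <math>s_1.\!</math>

```python
def subc(s1, s2):
    """Subcatenation Subc(s1, s2): discatenate the final ")" from the
    subclause s1, then append "," + s2 + ")"."""
    if not s1.endswith(")"):
        raise ValueError("s1 must be a subclause, i.e. end with ')'")
    return s1[:-1] + "," + s2 + ")"

def surc(strings):
    """Surcatenation built by iterated subcatenation, seeded with the
    one-string case "(" + s_1 + ")"; no strings at all yields "()"."""
    if not strings:
        return "()"                  # the natural prequel Surc^0
    result = "(" + strings[0] + ")"  # surcatenation of a single string
    for s in strings[1:]:
        result = subc(result, s)
    return result
```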
 
Notice that the expressions <math>\operatorname{Conc}_{j=0}^0 s_j</math> and <math>\operatorname{Surc}_{j=0}^0 s_j</math> are defined in such a way that the respective operators <math>\operatorname{Conc}^0</math> and <math>\operatorname{Surc}^0</math> simply ignore, in the manner of constants, whatever sequences of strings <math>s_j\!</math> may be listed as their ostensible arguments.

Having defined the basic operations of concatenation and surcatenation on arbitrary strings, in effect, giving them operational meaning for the all-inclusive language <math>\mathfrak{L} = \mathfrak{A}^*,</math> it is time to adjoin the notion of a more discriminating grammaticality, in other words, a more properly restrictive concept of a sentence.

If <math>\mathfrak{L}</math> is an arbitrary formal language over an alphabet of the sort that
we are talking about, that is, an alphabet of the form <math>\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P},</math> then there are a number of basic structural relations that can be defined on the strings of <math>\mathfrak{L}.</math>

{| align="center" cellpadding="4" width="90%"
| 1. || <math>s\!</math> is the ''concatenation'' of <math>s_1\!</math> and <math>s_2\!</math> in <math>\mathfrak{L}</math> if and only if
|-
| &nbsp; || <math>s_1\!</math> is a sentence of <math>\mathfrak{L},</math> <math>s_2\!</math> is a sentence of <math>\mathfrak{L},</math> and
|-
| &nbsp; || <math>s = s_1 \cdot s_2.</math>
|-
| 2. || <math>s\!</math> is the ''concatenation'' of the <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> in <math>\mathfrak{L},</math>
|-
| &nbsp; || if and only if <math>s_j\!</math> is a sentence of <math>\mathfrak{L},</math> for all <math>j = 1 \ldots k,</math> and
|-
| &nbsp; || <math>s = \operatorname{Conc}_{j=1}^k s_j = s_1 \cdot \ldots \cdot s_k.</math>
|-
| 3. || <math>s\!</math> is the ''discatenation'' of <math>s_1\!</math> by <math>t\!</math> if and only if
|-
| &nbsp; || <math>s_1\!</math> is a sentence of <math>\mathfrak{L},</math> <math>t\!</math> is an element of <math>\mathfrak{A},</math> and
|-
| &nbsp; || <math>s_1 = s \cdot t.</math>
|-
| &nbsp; || When this is the case, one more commonly writes:
|-
| &nbsp; || <math>s = s_1 \cdot t^{-1}.</math>
|-
| 4. || <math>s\!</math> is a ''subclause'' of <math>\mathfrak{L}</math> if and only if
|-
| &nbsp; || <math>s\!</math> is a sentence of <math>\mathfrak{L}</math> and <math>s\!</math> ends with a <math>^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math>
|-
| 5. || <math>s\!</math> is the ''subcatenation'' of <math>s_1\!</math> by <math>s_2\!</math> if and only if
|-
| &nbsp; || <math>s_1\!</math> is a subclause of <math>\mathfrak{L},</math> <math>s_2\!</math> is a sentence of <math>\mathfrak{L},</math> and
|-
| &nbsp; || <math>s = s_1 \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_2 \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math>
|-
| 6. || <math>s\!</math> is the ''surcatenation'' of the <math>k\!</math> strings <math>s_1, \ldots, s_k\!</math> in <math>\mathfrak{L},</math>
|-
| &nbsp; || if and only if <math>s_j\!</math> is a sentence of <math>\mathfrak{L},</math> for all <math>j = 1 \ldots k,\!</math> and
|-
| &nbsp; || <math>s \ = \ \operatorname{Surc}_{j=1}^k s_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, s_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, s_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math>
|}
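The discatenation relation of item 3, the partial inverse of concatenation by a single sign, is simple enough to sketch directly.  The function name <code>disc</code> is mine, not from the text:

```python
def disc(s1, t):
    """Discatenation s = s1 · t^-1: delete a final sign t from s1.
    Defined only when s1 actually ends with t, mirroring the
    condition s1 = s · t in the table above."""
    if not s1.endswith(t):
        raise ValueError("discatenation undefined: s1 does not end with t")
    return s1[: len(s1) - len(t)]
```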
 
The converses of these decomposition relations are tantamount to the corresponding forms of composition operations, making it possible for these complementary forms of analysis and synthesis to articulate the structures of strings and sentences in two directions.

The ''painted cactus language'' with paints in the set <math>\mathfrak{P} = \{ p_j : j \in J \}</math> is the formal language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) \subseteq \mathfrak{A}^* = (\mathfrak{M} \cup \mathfrak{P})^*</math> that is defined as follows:

{| align="center" cellpadding="4" width="90%"
|-
| PC 1. || The blank symbol <math>m_1\!</math> is a sentence.
|-
| PC 2. || The paint <math>p_j\!</math> is a sentence, for each <math>j\!</math> in <math>J.\!</math>
|-
| PC 3. || <math>\operatorname{Conc}^0</math> and <math>\operatorname{Surc}^0</math> are sentences.
|-
| PC 4. || For each positive integer <math>k,\!</math>
|-
| &nbsp; || if <math>s_1, \ldots, s_k\!</math> are sentences,
|-
| &nbsp; || then <math>\operatorname{Conc}_{j=1}^k s_j</math> is a sentence,
|-
| &nbsp; || and <math>\operatorname{Surc}_{j=1}^k s_j</math> is a sentence.
|}
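The principles PC&nbsp;1&ndash;4 can also be read off as a membership test.  The sketch below is a hypothetical helper, not part of the text; it assumes the paints are single characters distinct from the four measures, and it checks that a string is a concatenation of blanks, paints, and balanced surcatenations whose top-level arguments are themselves sentences.

```python
def is_sentence(s, paints):
    """Membership test for C(P), following PC 1-4: a sentence is a
    concatenation of blanks, paints, and surcatenations whose top-level
    arguments are again sentences.  The empty string is Conc^0."""
    i = 0
    while i < len(s):
        c = s[i]
        if c == " " or c in paints:          # PC 1 (blank) and PC 2 (paint)
            i += 1
        elif c == "(":                       # PC 4: a surcatenation
            depth, start, args = 1, i + 1, []
            j = i + 1
            while j < len(s) and depth > 0:
                if s[j] == "(":
                    depth += 1
                elif s[j] == ")":
                    depth -= 1
                elif s[j] == "," and depth == 1:
                    args.append(s[start:j])  # a top-level argument ends here
                    start = j + 1
                j += 1
            if depth > 0:                    # unmatched "("
                return False
            args.append(s[start:j - 1])      # the last top-level argument
            if not all(is_sentence(a, paints) for a in args):
                return False
            i = j
        else:                                # ")" out of place, or a foreign sign
            return False
    return True
```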
 
As usual, saying that <math>s\!</math> is a sentence is just a conventional way of stating that the string <math>s\!</math> belongs to the relevant formal language <math>\mathfrak{L}.</math>  An individual sentence of <math>\mathfrak{C} (\mathfrak{P}),</math> for any palette <math>\mathfrak{P},</math> is referred to as a ''painted and rooted cactus expression'' (PARCE) on the palette <math>\mathfrak{P},</math> or a ''cactus expression'', for short.  Anticipating the forms that the parse graphs of these PARCE's will take, to be described in the next Subsection, the language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P})</math> is also described as the set <math>\operatorname{PARCE} (\mathfrak{P})</math> of PARCE's on the palette <math>\mathfrak{P},</math> more generically, as the PARCE's that constitute the language <math>\operatorname{PARCE}.</math>

A ''bare'' PARCE, a bit loosely referred to as a ''bare cactus expression'', is a PARCE on the empty palette <math>\mathfrak{P} = \varnothing.</math>  A bare PARCE is a sentence in the ''bare cactus language'', <math>\mathfrak{C}^0 = \mathfrak{C} (\varnothing) = \operatorname{PARCE}^0 = \operatorname{PARCE} (\varnothing).</math>  This set of strings, regarded as a formal language in its own right, is a sublanguage of every cactus language <math>\mathfrak{C} (\mathfrak{P}).</math>  A bare cactus expression is commonly encountered in practice when one has occasion to start with an arbitrary PARCE and then finds a reason to delete or to erase all of its paints.

Only one thing remains to cast this description of the cactus language into a form that is commonly found acceptable.  As presently formulated, the principle PC&nbsp;4 appears to be attempting to define an infinite number of new concepts all in a single step, at least, it appears to invoke the indefinitely long sequences of operators, <math>\operatorname{Conc}^k</math> and <math>\operatorname{Surc}^k,</math> for all <math>k > 0.\!</math>  As a general rule, one prefers to have an effectively finite description of
conceptual objects, and this means restricting the description to a finite number of schematic principles, each of which involves a finite number of schematic effects, that is, a finite number of schemata that explicitly relate conditions to results.

A start in this direction, taking steps toward an effective description of the cactus language, a finitary conception of its membership conditions, and a bounded characterization of a typical sentence in the language, can be made by recasting the present description of these expressions into the pattern of what is called, more or less roughly, a ''formal grammar''.

A notation in the style of <math>S :> T\!</math> is now introduced, to be read in any of the following ways, among many others:

{| align="center" cellpadding="4" width="90%"
|-
| <math>S\ \operatorname{covers}\ T</math>
|-
| <math>S\ \operatorname{governs}\ T</math>
|-
| <math>S\ \operatorname{rules}\ T</math>
|-
| <math>S\ \operatorname{subsumes}\ T</math>
|-
| <math>S\ \operatorname{types~over}\ T</math>
|}
 
The form <math>S :> T\!</math> is here recruited for polymorphic employment in at least the following types of roles:

# To signify that an individually named or quoted string <math>T\!</math> is being typed as a sentence <math>S\!</math> of the language of interest <math>\mathfrak{L}.</math>
# To express the fact or to make the assertion that each member of a specified set of strings <math>T \subseteq \mathfrak{A}^*</math> also belongs to the syntactic category <math>S,\!</math> the one that qualifies a string as being a sentence in the relevant formal language <math>\mathfrak{L}.</math>
# To specify the intension or to signify the intention that every string that fits the conditions of the abstract type <math>T\!</math> must also fall under the grammatical heading of a sentence, as indicated by the type <math>S,\!</math> all within the target language <math>\mathfrak{L}.</math>

In these types of situation the letter <math>^{\backprime\backprime} S \, ^{\prime\prime},</math> which signifies the type of a sentence in the language of interest, is called the ''initial symbol'' or the ''sentence symbol'' of a candidate formal grammar for the language, while any number of letters like <math>^{\backprime\backprime} T \, ^{\prime\prime},</math> signifying other types of strings that are necessary to a reasonable account or a rational reconstruction of the sentences that belong to the language, are collectively referred to as ''intermediate symbols''.
 
Combining the singleton set <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \}</math> whose sole member is the initial symbol with the set <math>\mathfrak{Q}</math> that assembles together all of the intermediate symbols results in the set <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q}</math> of ''non-terminal symbols''.  Completing the package, the alphabet <math>\mathfrak{A}</math> of the language is also known as the set of ''terminal symbols''.  In this discussion, I will adopt the convention that <math>\mathfrak{Q}</math> is the set of ''intermediate symbols'', but I will often use <math>q\!</math> as a typical variable that ranges over all of the non-terminal symbols, <math>q \in \{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q}.</math>  Finally, it is convenient to refer to all of the symbols in <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q} \cup \mathfrak{A}</math> as the ''augmented alphabet'' of the prospective grammar for the language, and accordingly to describe the strings in <math>( \{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup \mathfrak{Q} \cup \mathfrak{A} )^*</math> as the ''augmented strings'', in effect, expressing the forms that are superimposed on a language by one of its conceivable grammars.  In certain settings it becomes desirable to separate the augmented strings that contain the symbol <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> from all other sorts of augmented strings.  In these situations the strings in the disjoint union <math>\{ ^{\backprime\backprime} S \, ^{\prime\prime} \} \cup (\mathfrak{Q} \cup \mathfrak{A} )^*</math> are known as the ''sentential forms'' of the associated grammar.

In forming a grammar for a language, statements of the form <math>W :> W',\!</math>
where <math>W\!</math> and <math>W'\!</math> are augmented strings or sentential forms of specified types that depend on the style of the grammar that is being sought, are variously known as ''characterizations'', ''covering rules'', ''productions'', ''rewrite rules'', ''subsumptions'', ''transformations'', or ''typing rules''.  These are collected together into a set <math>\mathfrak{K}</math> that serves to complete the definition of the formal grammar in question.

Correlative with the use of this notation, an expression of the form <math>T <: S,\!</math> read to say that <math>T\!</math> is covered by <math>S,\!</math> can be interpreted to say that <math>T\!</math> is of the type <math>S.\!</math>  Depending on the context, this can be taken in either one of two ways:

# Treating <math>T\!</math> as a string variable, it means that the individual string <math>T\!</math> is typed as <math>S.\!</math>
# Treating <math>T\!</math> as a type name, it means that any instance of the type <math>T\!</math> also falls under the type <math>S.\!</math>

In accordance with these interpretations, an expression of the form <math>t <: T\!</math> can be read in all of the ways that one typically reads an expression of the form <math>t : T.\!</math>

There are several abuses of notation that are commonly tolerated in the use of covering relations.  The worst offense is that of allowing symbols to stand equivocally either for individual strings or else for their types.  There is a measure of consistency to this practice, considering the fact that perfectly individual entities are rarely if ever grasped by means of signs and finite expressions, which entails that every appearance of an apparent token is only a type of more particular tokens, and meaning in the end that there is never any recourse but to the sort of discerning interpretation that can decide just how each sign is intended.  In view of all this, I continue to permit expressions like <math>t <: T\!</math> and <math>T <: S,\!</math> where any of the symbols <math>t, T, S\!</math> can be taken to signify either the tokens or the subtypes of their covering types.
 
'''Note.'''  For some time to come in the discussion that follows, although I will continue to focus on the cactus language as my principal object example, my more general purpose will be to develop the subject matter of the formal languages and grammars.  I will do this by taking up a particular method of ''stepwise refinement'' and using it to extract a rigorous formal grammar for the cactus language, starting with little more than a rough description of the target language and applying a systematic analysis to develop a sequence of increasingly more effective and more exact approximations to the desired grammar.

Employing the notion of a covering relation it becomes possible to redescribe the cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P})</math> in the following ways.

======Grammar 1======

Grammar&nbsp;1 is something of a misnomer.  It is nowhere near exemplifying any kind of a standard form and it is only intended as a starting point for the initiation of more respectable grammars.  Such as it is, it uses the terminal alphabet <math>\mathfrak{A} = \mathfrak{M} \cup \mathfrak{P}</math> that comes with the territory of the cactus language <math>\mathfrak{C} (\mathfrak{P}),</math> it specifies <math>\mathfrak{Q} = \varnothing,</math> in other words, it employs no intermediate symbols, and it embodies the ''covering set'' <math>\mathfrak{K}</math> as listed in the following display.

<br>
 
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 1}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \varnothing</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1.
& S
& :>
& m_1 \ = \ ^{\backprime\backprime} \operatorname{~} ^{\prime\prime}
\\
2.
& S
& :>
& p_j, \, \text{for each} \, j \in J
\\
3.
& S
& :>
& \operatorname{Conc}^0 \ = \ ^{\backprime\backprime\prime\prime}
\\
4.
& S
& :>
& \operatorname{Surc}^0 \ = \ ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
\\
5.
& S
& :>
& S^*
\\
6.
& S
& :>
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S \, \cdot \, ( \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \, )^* \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
\end{array}</math>
|}
 
<br>

In this formulation, the last two lines specify that:

<ol style="list-style-type:decimal">

<li value="5"> The concept of a sentence in <math>\mathfrak{L}</math> covers any concatenation of sentences in <math>\mathfrak{L},</math> in effect, any number of freely chosen sentences that are available to be concatenated one after another.</li>

<li value="6"> The concept of a sentence in <math>\mathfrak{L}</math> covers any surcatenation of sentences in <math>\mathfrak{L},</math> in effect, any string that opens with a <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime},</math> continues with a sentence, possibly empty, follows with a finite number of phrases of the form <math>^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S,</math> and closes with a <math>^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}.</math></li>

</ol>
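Grammar&nbsp;1 can be exercised computationally by iterating its covering rules to a fixed point below a fixed length bound.  The sketch below is a hypothetical helper, not from the text: rules 1&ndash;4 supply the base cases, rule&nbsp;5 is applied to pairs of known sentences, and the higher arities of rule&nbsp;6 are built up one argument at a time by the subcatenation step, which suffices because every longer surcatenation arises from a shorter one.

```python
from itertools import product

def sentences_up_to(n, paints):
    """Collect the sentences of C(P) of length <= n, for single-character
    paints, by iterating Grammar 1's covering rules to a fixed point."""
    sents = {"", " ", "()"} | set(paints)        # rules 1-4
    changed = True
    while changed:
        changed = False
        for s, t in product(sorted(sents), repeat=2):
            candidates = [s + t,                 # rule 5: concatenation
                          "(" + s + ")"]        # rule 6, one argument
            if s.endswith(")"):                  # rule 6, more arguments,
                candidates.append(s[:-1] + "," + t + ")")  # via subcatenation
            for w in candidates:
                if len(w) <= n and w not in sents:
                    sents.add(w)
                    changed = True
    return sents
```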
 
This appears to be just about the most concise description of the cactus language <math>\mathfrak{C} (\mathfrak{P})</math> that one can imagine, but there are a couple of problems that are commonly felt to afflict this style of presentation and to make it less than completely acceptable.  Briefly stated, these problems turn on the following properties of the presentation:

# The invocation of the Kleene star operation is not reduced to a manifestly finitary form.
# The type <math>S\!</math> that indicates a sentence is allowed to cover not only itself but also the empty string.

I will discuss these issues at first in general, and especially in regard to how the two features interact with one another, and then I return to address in further detail the questions that they engender on their individual bases.
 
In the process of developing a grammar for a language, it is possible to notice a number of organizational, pragmatic, and stylistic questions, whose moment to moment answers appear to decide the ongoing direction of the grammar that develops and the impact of whose considerations work in tandem to determine, or at least to influence, the sort of grammar that turns out.  The issues that I can see arising at this point I can give the following prospective names, putting off the discussion of their natures and the treatment of their details to the points in the development of the present example where they evolve their full import.

# The ''degree of intermediate organization'' in a grammar.
# The ''distinction between empty and significant strings'', and thus the ''distinction between empty and significant types of strings''.
# The ''principle of intermediate significance''.  This is a constraint on the grammar that arises from considering the interaction of the first two issues.

In responding to these issues, it is advisable at first to proceed in a stepwise fashion, all the better to accommodate the chances of pursuing a series of parallel developments in the grammar, to allow for the possibility of reversing many steps in its development, indeed, to take into account the near certain necessity of having to revisit, to revise, and to reverse many decisions about how to proceed toward an optimal description or a satisfactory grammar for the language.  Doing all this means exploring the effects of various alterations and innovations as independently from each other as possible.

The degree of intermediate organization in a grammar is measured by how many intermediate symbols it has and by how they interact with each other by means of its productions.  With respect to this issue, Grammar&nbsp;1 has no intermediate symbols at all, <math>\mathfrak{Q} = \varnothing,</math> and therefore remains at an ostensibly trivial degree of intermediate organization.  Some additions to the list of intermediate symbols are practically obligatory in order to arrive at any reasonable grammar at all, other inclusions appear to have a more optional character, though obviously useful from the standpoints of clarity and ease of comprehension.

One of the troubles that is perceived to affect Grammar&nbsp;1 is that it wastes so much of the available potential for efficient description in recounting over and over again the simple fact that the empty string is present in the language.  This arises in part from the statement that <math>S :> S^*,\!</math> which implies that:
 
{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lcccccccccccc}
S
& :>
& S^*
& =
& \underline\varepsilon
& \cup & S
& \cup & S \cdot S
& \cup & S \cdot S \cdot S
& \cup & \ldots \\
\end{array}</math>
|}
 
There is nothing wrong with the more expansive pan of the covered equation, since it follows straightforwardly from the definition of the Kleene star operation, but the covering statement to the effect that <math>S :> S^*\!</math> is not a very productive piece of information, in the sense of telling very much about the language that falls under the type of a sentence <math>S.\!</math>  In particular, since it implies that <math>S :> \underline\varepsilon,</math> and since <math>\underline\varepsilon \cdot \mathfrak{L} \, = \, \mathfrak{L} \cdot \underline\varepsilon \, = \, \mathfrak{L},</math> for any formal language <math>\mathfrak{L},</math> the empty string <math>\varepsilon</math> is counted over and over in every term of the union, and every non-empty sentence under <math>S\!</math> appears again and again in every term of the union that follows the initial appearance of <math>S.\!</math>  As a result, this style of characterization has to be classified as ''true but not very informative''.  If at all possible, one prefers to partition the language of interest into a disjoint union of subsets, thereby accounting for each sentence under its proper term, a term whose place under the sum serves as a useful parameter of the sentence's character or complexity.  In general, this form of description is not always possible to achieve, but it is usually worth the trouble to actualize it whenever it is.

Suppose that one tries to deal with this problem by eliminating each use of the Kleene star operation, by reducing it to a purely finitary set of steps, or by finding an alternative way to cover the sublanguage that it is used to generate.  This amounts, in effect, to ''recognizing a type'', a complex process that involves the following steps:
 
# '''Noticing''' a category of strings that is generated by iteration or recursion.
# '''Acknowledging''' the fact that it needs to be covered by a non-terminal symbol.
# '''Making a note of it''' by instituting an explicitly-named grammatical category.

In sum, one introduces a non-terminal symbol for each type of sentence and each ''part of speech'' or sentential component that is generated by means of iteration or recursion under the ruling constraints of the grammar.  In order to do this one needs to analyze the iteration of each grammatical operation in a way that is analogous to a mathematically inductive definition, but further in a way that is not forced explicitly to recognize a distinct and separate type of expression merely to account for and to recount every increment in the parameter of iteration.

Returning to the case of the cactus language, the process of recognizing an iterative type or a recursive type can be illustrated in the following way.  The operative phrases in the simplest sort of recursive definition are its ''initial part'' and its ''generic part''.  For the cactus language <math>\mathfrak{C} (\mathfrak{P}),</math> one has the following definitions of concatenation as iterated precatenation and of surcatenation as iterated subcatenation, respectively:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llll}
1. & \operatorname{Conc}_{j=1}^0 & = & ^{\backprime\backprime\prime\prime}
\\ \\
& \operatorname{Conc}_{j=1}^k S_j & = & \operatorname{Prec} (\operatorname{Conc}_{j=1}^{k-1} S_j, S_k)
\\ \\
2. & \operatorname{Surc}_{j=1}^0 & = & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
\\ \\
& \operatorname{Surc}_{j=1}^k S_j & = & \operatorname{Subc} (\operatorname{Surc}_{j=1}^{k-1} S_j, S_k)
\\ \\
\end{array}</math>
|}
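By way of a concrete check, the initial and generic parts of these definitions can be transcribed into a short program.  The following sketch is merely illustrative, not part of the formal development:  it models sentences as ordinary character strings, uses a helper <code>prec</code> to stand in for the precatenation operation, and renders the surcatenation of one or more sentences directly as the parenthesized, comma-interpolated form that iterated subcatenation is designed to produce.

```python
def prec(s1, s2):
    """Precatenation: the second string is appended after the first."""
    return s1 + s2

def conc(sentences):
    """Conc over k sentences: the empty string for k = 0 (the initial part),
    else iterated precatenation (the generic part)."""
    result = ""                      # Conc over the empty sequence
    for s in sentences:
        result = prec(result, s)
    return result

def surc(sentences):
    """Surc over k sentences: "()" for k = 0 (the initial part), else the
    sentences wrapped in parentheses with commas interpolated between them."""
    if not sentences:
        return "()"                  # Surc over the empty sequence
    return "(" + ",".join(sentences) + ")"
```

For example, <code>conc(["a", "b"])</code> yields <code>"ab"</code> and <code>surc(["a", "b"])</code> yields <code>"(a,b)"</code>, in agreement with the intended readings of concatenation and surcatenation.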

In order to transform these recursive definitions into grammar rules, one introduces a new pair of intermediate symbols, <math>\operatorname{Conc}</math> and <math>\operatorname{Surc},</math> corresponding to the operations that share the same names but ignoring the inflexions of their individual parameters <math>j\!</math> and <math>k.\!</math>  Recognizing the type of a sentence by means of the initial symbol <math>S\!</math> and interpreting <math>\operatorname{Conc}</math> and <math>\operatorname{Surc}</math> as names for the types of strings that are generated by concatenation and by surcatenation, respectively, one arrives at the following transformation of the ruling operator definitions into the form of covering grammar rules:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llll}
1. & \operatorname{Conc} & :> & ^{\backprime\backprime\prime\prime}
\\ \\
& \operatorname{Conc} & :> & \operatorname{Conc} \cdot S
\\ \\
2. & \operatorname{Surc} & :> & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
\\ \\
& \operatorname{Surc} & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\ \\
& \operatorname{Surc} & :> & \operatorname{Surc} \, \cdot \, ( \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime} \, )^{-1} \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\end{array}</math>
|}

As given, this particular fragment of the intended grammar contains a couple of features that it is desirable to amend.

# Given the covering <math>S :> \operatorname{Conc},</math> the covering rule <math>\operatorname{Conc} :> \operatorname{Conc} \cdot S</math> says no more than the covering rule <math>\operatorname{Conc} :> S \cdot S.</math>  Consequently, all of the information contained in these two covering rules is already covered by the statement that <math>S :> S \cdot S.</math>
# A grammar rule that invokes a notion of decatenation, deletion, erasure, or any other sort of retrograde production is frequently considered to be lacking in elegance, and there is a style of critique for grammars that holds it preferable to avoid these types of operations if it is at all possible to do so.  Accordingly, contingent on the prescriptions of the informal rule in question, and pursuing the stylistic dictates that are writ in the realm of its aesthetic regime, it becomes necessary for us to backtrack a little bit, to temporarily withdraw the suggestion of employing these elliptical types of operations, but without, of course, eliding the record of doing so.

======Grammar 2======

One way to analyze the surcatenation of any number of sentences is to introduce an auxiliary type of string, not in general a sentence, but a proper component of any sentence that is formed by surcatenation.  Doing this brings one to the following definition:

A ''tract'' is a concatenation of a finite sequence of sentences, with a literal comma <math>^{\backprime\backprime} \operatorname{,} ^{\prime\prime}</math> interpolated between each pair of adjacent sentences.  Thus, a typical tract <math>T\!</math> takes the form:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lllllllllll}
T & = & S_1 & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & \ldots & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & S_k
\\
\end{array}</math>
|}
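Conversely, a tract can be decomposed back into its component sentences by splitting at the ''top-level'' commas, that is, the commas that are not enclosed by any parentheses.  The sketch below is an illustrative aid, assuming that sentences are plain strings whose only grouping characters are <code>(</code> and <code>)</code>:

```python
def split_tract(t):
    """Split a tract at top-level commas, returning its component sentences.
    Commas nested inside parentheses belong to embedded tracts and are skipped."""
    parts, depth, start = [], 0, 0
    for i, c in enumerate(t):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c == "," and depth == 0:
            parts.append(t[start:i])
            start = i + 1
    parts.append(t[start:])
    return parts
```

For instance, <code>split_tract("a,(b,c),d")</code> returns <code>["a", "(b,c)", "d"]</code>:  the inner comma of <code>(b,c)</code> is protected by its parentheses, while the top-level commas mark the joints of the tract.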

A tract must be distinguished from the abstract sequence of sentences, <math>S_1, \ldots, S_k,\!</math> where the commas that appear to come to mind, as if being called up to separate the successive sentences of the sequence, remain as partially abstract conceptions, or as signs that retain a disengaged status on the borderline between the text and the mind.  In effect, the commas that appear to punctuate the abstract sequence continue to exist as conceptual abstractions and fail to be cognized in a wholly explicit fashion, whether as concrete tokens in the object language, or as marks in the strings of signs that are able to engage one's parsing attention.

Returning to the case of the painted cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}),</math> it is possible to put the currently assembled pieces of a grammar together in the light of the presently adopted canons of style, to arrive at a more refined analysis of the fact that the concept of a sentence covers any concatenation of sentences and any surcatenation of sentences, and so to obtain the following form of a grammar:

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 2}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1. & S & :> & \varepsilon
\\
2. & S & :> & m_1
\\
3. & S & :> & p_j, \, \text{for each} \, j \in J
\\
4. & S & :> & S \, \cdot \, S
\\
5. & S & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
6. & T & :> & S
\\
7. & T & :> & T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S
\\
\end{array}</math>
|}

<br>

In this rendition, a string of type <math>T\!</math> is not in general a sentence itself but a proper ''part of speech'', that is, a strictly ''lesser'' component of a sentence in any suitable ordering of sentences and their components.  In order to see how the grammatical category <math>T\!</math> gets off the ground, that is, to detect its minimal strings and to discover how its ensuing generations get started from these, it is useful to observe that the covering rule <math>T :> S\!</math> means that <math>T\!</math> ''inherits'' all of the initial conditions of <math>S,\!</math> namely, <math>T \, :> \, \varepsilon, m_1, p_j.</math>  In accord with these simple beginnings it comes to parse that the rule <math>T \, :> \, T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S,</math> with the substitutions <math>T = \varepsilon</math> and <math>S = \varepsilon</math> on the covered side of the rule, bears the germinal implication that <math>T \, :> \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime}.</math>
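The germinal implication just noted can be verified mechanically.  The following sketch is a minimal illustration, not part of the formal development:  it encodes Grammar&nbsp;2 as a list of productions over tuples of symbols, writing <code>"m"</code> for the blank <math>m_1\!</math> and <code>"p"</code> for a generic paint <math>p_j,\!</math> and it tests whether one sentential form follows from another by a single application of a rule.

```python
# Grammar 2, written as (lhs, rhs) pairs over tuples of symbols.
GRAMMAR2 = [
    ("S", ()),                  # 1. S :> empty string
    ("S", ("m",)),              # 2. S :> m_1
    ("S", ("p",)),              # 3. one such rule for each paint p_j
    ("S", ("S", "S")),          # 4. S :> S S
    ("S", ("(", "T", ")")),     # 5. S :> "(" T ")"
    ("T", ("S",)),              # 6. T :> S
    ("T", ("T", ",", "S")),     # 7. T :> T "," S
]

def is_step(rules, old, new):
    """True if new arises from old by rewriting one symbol via one rule."""
    for i, sym in enumerate(old):
        for lhs, rhs in rules:
            if lhs == sym and new == old[:i] + rhs + old[i + 1:]:
                return True
    return False

# T => T,S => S,S => ,S => ,  confirms that T covers the bare comma.
steps = [("T",), ("T", ",", "S"), ("S", ",", "S"), (",", "S"), (",",)]
assert all(is_step(GRAMMAR2, a, b) for a, b in zip(steps, steps[1:]))
```

Note that the substitutions <math>T = \varepsilon</math> and <math>S = \varepsilon</math> are realized here by two extra steps through Rules&nbsp;6 and&nbsp;1, since Grammar&nbsp;2 lets <math>T\!</math> reach the empty string only by way of <math>S.\!</math>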

Grammar&nbsp;2 achieves a portion of its success through a higher degree of intermediate organization.  Roughly speaking, the level of organization can be seen as reflected in the cardinality of the intermediate alphabet <math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},</math> but it is clearly not explained by this simple circumstance alone, since it is taken for granted that the intermediate symbols serve a purpose, a purpose that is easily recognizable but that may not be so easy to pin down and to specify exactly.  Nevertheless, it is worth the trouble of exploring this aspect of organization and this direction of development a little further.

======Grammar 3======

Although it is not strictly necessary to do so, it is possible to organize the materials of our developing grammar in a slightly better fashion by recognizing two recurrent types of strings that appear in the typical cactus expression.  In doing this, one arrives at the following two definitions:

A ''rune'' is a string of blanks and paints concatenated together.  Thus, a typical rune <math>R\!</math> is a string over <math>\{ m_1 \} \cup \mathfrak{P},</math> possibly the empty string:

{| align="center" cellpadding="8" width="90%"
| <math>R\ \in\ ( \{ m_1 \} \cup \mathfrak{P} )^*</math>
|}

When there is no possibility of confusion, the letter <math>^{\backprime\backprime} R \, ^{\prime\prime}</math> can be used either as a string variable that ranges over the set of runes or else as a type name for the class of runes.  The latter reading amounts to the enlistment of a fresh intermediate symbol, <math>^{\backprime\backprime} R \, ^{\prime\prime} \in \mathfrak{Q},</math> as a part of a new grammar for <math>\mathfrak{C} (\mathfrak{P}).</math>  In effect, <math>^{\backprime\backprime} R \, ^{\prime\prime}</math> affords a grammatical recognition for any rune that forms a part of a sentence in <math>\mathfrak{C} (\mathfrak{P}).</math>  In situations where these variant usages are likely to be confused, the types of strings can be indicated by means of expressions like <math>r <: R\!</math> and <math>W <: R.\!</math>

A ''foil'' is a string of the form <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime},</math> where <math>T\!</math> is a tract.  Thus, a typical foil <math>F\!</math> has the form:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lllllllllllllll}
F & = & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} & \cdot & S_1 & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & \ldots & \cdot & ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} & \cdot & S_k & \cdot & ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
\end{array}</math>
|}

This is just the surcatenation of the sentences <math>S_1, \ldots, S_k.\!</math>  Given the possibility that this sequence of sentences is empty, and thus that the tract <math>T\!</math> is the empty string, the minimum foil <math>F\!</math> is the expression <math>^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}.</math>  Explicitly marking each foil <math>F\!</math> that is embodied in a cactus expression is tantamount to recognizing another intermediate symbol, <math>^{\backprime\backprime} F \, ^{\prime\prime} \in \mathfrak{Q},</math> further articulating the structures of sentences and expanding the grammar for the language <math>\mathfrak{C} (\mathfrak{P}).</math>  All of the same remarks about the versatile uses of the intermediate symbols, as string variables and as type names, apply again to the letter <math>^{\backprime\backprime} F \, ^{\prime\prime}.</math>

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 3}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1. & S & :> & R
\\
2. & S & :> & F
\\
3. & S & :> & S \, \cdot \, S
\\
4. & R & :> & \varepsilon
\\
5. & R & :> & m_1
\\
6. & R & :> & p_j, \, \text{for each} \, j \in J
\\
7. & R & :> & R \, \cdot \, R
\\
8. & F & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
9. & T & :> & S
\\
10. & T & :> & T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S
\\
\end{array}</math>
|}

<br>

In Grammar&nbsp;3, the first three Rules say that a sentence (a string of type <math>S\!</math>) is a rune (a string of type <math>R\!</math>), a foil (a string of type <math>F\!</math>), or an arbitrary concatenation of strings of these two types.  Rules&nbsp;4 through 7 specify that a rune <math>R\!</math> is an empty string <math>\varepsilon,</math> a blank symbol <math>m_1,\!</math> a paint <math>p_j,\!</math> or any concatenation of strings of these three types.  Rule&nbsp;8 characterizes a foil <math>F\!</math> as a string of the form <math>^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime},</math> where <math>T\!</math> is a tract.  The last two Rules say that a tract <math>T\!</math> is either a sentence <math>S\!</math> or else the concatenation of a tract, a comma, and a sentence, in that order.
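Grammar&nbsp;3's analysis into runes and foils translates directly into a recognizer.  The sketch below is illustrative only:  it assumes a concrete alphabet, with a space standing in for the blank <math>m_1\!</math> and the letters <code>a</code>, <code>b</code> standing in for paints, and it checks the Rules of Grammar&nbsp;3 by scanning runes and matching foils against their enclosed tracts.

```python
PAINTS = set("ab")   # hypothetical paints standing in for p_1, p_2
BLANK = " "          # hypothetical stand-in for the blank symbol m_1

def split_top(t):
    """Split a tract at top-level commas (those outside all parentheses)."""
    parts, depth, start = [], 0, 0
    for i, c in enumerate(t):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c == "," and depth == 0:
            parts.append(t[start:i])
            start = i + 1
    parts.append(t[start:])
    return parts

def is_sentence(s):
    """A sentence is any concatenation of runes and foils (Rules 1-3)."""
    i = 0
    while i < len(s):
        if s[i] == BLANK or s[i] in PAINTS:      # rune characters, Rules 4-7
            i += 1
        elif s[i] == "(":                        # a foil "(" T ")", Rule 8
            depth, j = 1, i + 1
            while j < len(s) and depth > 0:
                if s[j] == "(":
                    depth += 1
                elif s[j] == ")":
                    depth -= 1
                j += 1
            if depth != 0:
                return False                     # unbalanced parenthesis
            # Rules 9-10: the enclosed tract is a comma-separated
            # sequence of sentences.
            if not all(is_sentence(p) for p in split_top(s[i + 1:j - 1])):
                return False
            i = j
        else:
            return False
    return True
```

Under these assumptions, strings such as <code>"()"</code> and <code>"(a,(b))"</code> are accepted, while unbalanced fragments such as <code>")"</code> are rejected.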

At this point in the succession of grammars for <math>\mathfrak{C} (\mathfrak{P}),</math> the explicit uses of indefinite iterations, like the Kleene star operator, are now completely reduced to finite forms of concatenation, but the problems that some styles of analysis have with allowing non-terminal symbols to cover both themselves and the empty string are still present.

Any degree of reflection on this difficulty raises the general question:  What is a practical strategy for accounting for the empty string in the organization of any formal language that counts it among its sentences?  One answer that presents itself is this:  If the empty string belongs to a formal language, it suffices to count it once at the beginning of the formal account that enumerates its sentences and then to move on to more interesting materials.

Returning to the case of the cactus language <math>\mathfrak{C} (\mathfrak{P}),</math> in other words, the formal language <math>\operatorname{PARCE}</math> of ''painted and rooted cactus expressions'', it serves the purpose of efficient accounting to partition the language into the following couple of sublanguages:

<ol style="list-style-type:decimal">

<li>
<p>The ''emptily painted and rooted cactus expressions'' make up the language <math>\operatorname{EPARCE}</math> that consists of the empty string as its only sentence.  In short:</p>

<p><math>\operatorname{EPARCE} \ = \ \underline\varepsilon \ = \ \{ \varepsilon \}</math></p></li>

<li>
<p>The ''significantly painted and rooted cactus expressions'' make up the language <math>\operatorname{SPARCE}</math> that consists of everything else, namely, all of the non-empty strings in the language <math>\operatorname{PARCE}.</math>  In sum:</p>

<p><math>\operatorname{SPARCE} \ = \ \operatorname{PARCE} \setminus \{ \varepsilon \}</math></p></li>

</ol>

As a result of marking the distinction between empty and significant sentences, that is, by categorizing each of these classes of strings as an entity unto itself and by conceptualizing the whole of its membership as falling under a distinctive symbol, one obtains an equation of sets that connects the three languages being marked:

{| align="center" cellpadding="8" width="90%"
| <math>\operatorname{SPARCE} \ = \ \operatorname{PARCE} \ - \ \operatorname{EPARCE}</math>
|}

In sum, one has the disjoint union:

{| align="center" cellpadding="8" width="90%"
| <math>\operatorname{PARCE} \ = \ \operatorname{EPARCE} \ \cup \ \operatorname{SPARCE}</math>
|}

For brevity in the present case, and to serve as a generic device in any similar array of situations, let <math>S\!</math> be the type of an arbitrary sentence, possibly empty, and let <math>S'\!</math> be the type of a specifically non-empty sentence.  In addition, let <math>\underline\varepsilon</math> be the type of the empty sentence, in effect, the language <math>\underline\varepsilon = \{ \varepsilon \}</math> that contains the empty string as its only element, and let a plus sign <math>^{\backprime\backprime} + ^{\prime\prime}</math> signify a disjoint union of types.  In the most general type of situation, where the type <math>S\!</math> is permitted to include the empty string, one notes the following relation among types:

{| align="center" cellpadding="8" width="90%"
| <math>S \ = \ \underline\varepsilon \ + \ S'</math>
|}

With the distinction between empty and significant expressions in mind, I return to the grasp of the cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}) = \operatorname{PARCE} (\mathfrak{P})</math> that is afforded by Grammar&nbsp;2, and, taking that as a point of departure, explore other avenues of possible improvement in the comprehension of these expressions.  In order to observe the effects of this alteration as clearly as possible, in isolation from any other potential factors, it is useful to strip away the higher levels of intermediate organization that are present in Grammar&nbsp;3, and start again with a single intermediate symbol, as used in Grammar&nbsp;2.  One way of carrying out this strategy leads on to a grammar of the variety that will be articulated next.

======Grammar 4======

If one imposes the distinction between empty and significant types on each non-terminal symbol in Grammar&nbsp;2, then the non-terminal symbols <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> and <math>^{\backprime\backprime} T \, ^{\prime\prime}</math> give rise to the expanded set of non-terminal symbols <math>^{\backprime\backprime} S \, ^{\prime\prime}, \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime},</math> leaving the last three of these to form the new intermediate alphabet.  Grammar&nbsp;4 has the intermediate alphabet <math>\mathfrak{Q} \, = \, \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime} \, \},</math> with the set <math>\mathfrak{K}</math> of covering rules as listed in the next display.

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 4}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime}, \, ^{\backprime\backprime} T' \, ^{\prime\prime} \, \}</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1. & S & :> & \varepsilon
\\
2. & S & :> & S'
\\
3. & S' & :> & m_1
\\
4. & S' & :> & p_j, \, \text{for each} \, j \in J
\\
5. & S' & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
6. & S' & :> & S' \, \cdot \, S'
\\
7. & T & :> & \varepsilon
\\
8. & T & :> & T'
\\
9. & T' & :> & T \, \cdot \, ^{\backprime\backprime} \operatorname{,} ^{\prime\prime} \, \cdot \, S
\\
\end{array}</math>
|}

<br>

In this version of a grammar for <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}),</math> the intermediate type <math>T\!</math> is partitioned as <math>T = \underline\varepsilon + T',</math> thereby parsing the intermediate symbol <math>T\!</math> in parallel fashion with the division of its overlying type as <math>S = \underline\varepsilon + S'.</math>  This is an option that I choose to close off for now, while leaving it open for consideration at a later point.  Thus, it suffices to give a brief discussion of what it involves, in the process of moving on to its chief alternative.

There does not appear to be anything radically wrong with trying this approach to types.  It is reasonable and consistent in its underlying principle, and it provides a rational and a homogeneous strategy toward all parts of speech, but it does require an extra amount of conceptual overhead, in that every non-trivial type has to be split into two parts and comprehended in two stages.  Consequently, in view of the largely practical difficulties of making the requisite distinctions for every intermediate symbol, it is a common convention, whenever possible, to restrict intermediate types to covering exclusively non-empty strings.

For the sake of future reference, it is convenient to refer to this restriction on intermediate symbols as the ''intermediate significance'' constraint.  It can be stated in a compact form as a condition on the relations between non-terminal symbols <math>q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q}</math> and sentential forms <math>W \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*.</math>

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="center" style="border-left:1px solid black; border-right:1px solid black" |
<math>\text{Condition On Intermediate Significance}\!</math>
|-
| style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{lccc}
\text{If} & q & :> & W
\\
\text{and} & W & = & \varepsilon
\\
\text{then} & q & = & ^{\backprime\backprime} S \, ^{\prime\prime}
\\
\end{array}</math>
|}

<br>

If this is beginning to sound like a monotone condition, then it is not absurd to sharpen the resemblance and render the likeness more acute.  This is done by declaring a couple of ordering relations, denoting them under variant interpretations by the same sign, <math>^{\backprime\backprime}\!< \, ^{\prime\prime}.</math>

# The ordering <math>^{\backprime\backprime}\!< \, ^{\prime\prime}</math> on the set of non-terminal symbols, <math>q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q},</math> ordains the initial symbol <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> to be strictly prior to every intermediate symbol.  This is tantamount to the axiom that <math>^{\backprime\backprime} S \, ^{\prime\prime} < q,</math> for all <math>q \in \mathfrak{Q}.</math>
# The ordering <math>^{\backprime\backprime}\!< \, ^{\prime\prime}</math> on the collection of sentential forms, <math>W \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*,</math> ordains the empty string to be strictly minor to every other sentential form.  This is stipulated in the axiom that <math>\varepsilon < W,</math> for every non-empty sentential form <math>W.\!</math>

Given these two orderings, the constraint in question on intermediate significance can be stated as follows:

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="center" style="border-left:1px solid black; border-right:1px solid black" |
<math>\text{Condition On Intermediate Significance}\!</math>
|-
| style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{lccc}
\text{If} & q & :> & W
\\
\text{and} & q & > & ^{\backprime\backprime} S \, ^{\prime\prime}
\\
\text{then} & W & > & \varepsilon
\\
\end{array}</math>
|}

<br>

Achieving a grammar that respects this convention typically requires a more detailed account of the initial setting of a type, both with regard to the type of context that incites its appearance and also with respect to the minimal strings that arise under the type in question.  In order to find covering productions that satisfy the intermediate significance condition, one must be prepared to consider a wider variety of calling contexts or inciting situations that can be noted to surround each recognized type, and also to enumerate a larger number of the smallest cases that can be observed to fall under each significant type.
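The intermediate significance condition lends itself to a mechanical check.  The following sketch is a minimal illustration:  it encodes a grammar as (lhs, rhs) pairs, with the empty string rendered as an empty tuple, and reports any symbol other than the initial symbol that covers the empty string.  For brevity it inspects single productions only, not longer derivations that might also yield the empty string.  Applied to Grammar&nbsp;4, written with <code>"m"</code> for the blank and <code>"p"</code> for a generic paint, it flags the offending production <math>T :> \varepsilon.</math>

```python
def significance_violations(rules, start="S"):
    """Return the non-start symbols that directly cover the empty string;
    a non-empty result violates the intermediate significance condition."""
    return [lhs for lhs, rhs in rules if rhs == () and lhs != start]

# Grammar 4, as displayed above.
GRAMMAR4 = [
    ("S", ()),                   # 1. allowed: the initial symbol covers empty
    ("S", ("S'",)),              # 2.
    ("S'", ("m",)),              # 3.
    ("S'", ("p",)),              # 4. one such rule for each paint
    ("S'", ("(", "T", ")")),     # 5.
    ("S'", ("S'", "S'")),        # 6.
    ("T", ()),                   # 7. the violation: an intermediate covers empty
    ("T", ("T'",)),              # 8.
    ("T'", ("T", ",", "S")),     # 9.
]
assert significance_violations(GRAMMAR4) == ["T"]
```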

======Grammar 5======

With the foregoing array of considerations in mind, one is gradually led to a grammar for <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P})</math> in which all of the covering productions have either one of the following two forms:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{ccll}
S & :> & \varepsilon &
\\
q & :> & W, & \text{with} \ q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \ \text{and} \ W \in (\mathfrak{Q} \cup \mathfrak{A})^+
\\
\end{array}</math>
|}

A grammar that fits into this mold is called a ''context-free grammar''.  The first type of rewrite rule is referred to as a ''special production'', while the second type of rewrite rule is called an ''ordinary production''.  An ''ordinary derivation'' is one that employs only ordinary productions.  In ordinary productions, those of the form <math>q :> W,\!</math> the replacement string <math>W\!</math> is never the empty string, and so the lengths of the sentential forms that follow one another in an ordinary derivation never decrease at any stage of the process, up to and including the terminal string that is finally generated by the grammar.  This feature is known as the ''non-contracting property'' of productions, derivations, and grammars.  A grammar is said to have the property if all of its covering productions, with the possible exception of <math>S :> \varepsilon,</math> are non-contracting.  In particular, context-free grammars are special cases of non-contracting grammars.  The presence of the non-contracting property within a formal grammar makes the length of the sentential form available as a parameter that can figure into mathematical inductions and motivate recursive proofs, and this handle on the generative process makes it possible to establish kinds of results about the generated language that are not easy to achieve in more general cases, nor by any other means even in these special cases.

Grammar&nbsp;5 is a context-free grammar for the painted cactus language that uses <math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},</math> with <math>\mathfrak{K}</math> as listed in the next display.

<br>

{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
| align="left"  style="border-left:1px solid black;"  width="50%" |
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 5}\!</math>
| align="right" style="border-right:1px solid black;" width="50%" |
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
|-
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
<math>\begin{array}{rcll}
1. & S & :> & \varepsilon
\\
2. & S & :> & S'
\\
3. & S' & :> & m_1
\\
4. & S' & :> & p_j, \, \text{for each} \, j \in J
\\
5. & S' & :> & S' \, \cdot \, S'
\\
6. & S' & :> & ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
\\
7. & S' & :> & ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
8. & T & :> & ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
\\
9. & T & :> & S'
\\
10. & T & :> & T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
\\
11. & T & :> & T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S'
\\
\end{array}</math>
|}

<br>
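As a sanity check on Grammar&nbsp;5, one can enumerate the short sentences it generates by breadth-first expansion of sentential forms.  The sketch below is illustrative only:  it abbreviates <math>S'\!</math> to the single token <code>Z</code>, writes a space for the blank <math>m_1\!</math> and takes <code>a</code>, <code>b</code> as hypothetical paints, and the step budget is an arbitrary cut-off rather than anything prescribed by the grammar.

```python
from collections import deque

# Grammar 5, with the symbol S' abbreviated to "Z".
GRAMMAR5 = [
    ("S", ()),                 # 1.  S  :> empty string (the special production)
    ("S", ("Z",)),             # 2.  S  :> S'
    ("Z", (" ",)),             # 3.  S' :> m_1
    ("Z", ("a",)),             # 4.  S' :> p_j, one such rule per paint
    ("Z", ("b",)),             #     (second paint)
    ("Z", ("Z", "Z")),         # 5.  S' :> S' S'
    ("Z", ("(", ")")),         # 6.  S' :> "()"
    ("Z", ("(", "T", ")")),    # 7.  S' :> "(" T ")"
    ("T", (",",)),             # 8.  T  :> ","
    ("T", ("Z",)),             # 9.  T  :> S'
    ("T", ("T", ",")),         # 10. T  :> T ","
    ("T", ("T", ",", "Z")),    # 11. T  :> T "," S'
]

def generate(rules, start="S", max_len=3, max_steps=20000):
    """Collect the terminal strings of length <= max_len reachable from the
    start symbol by leftmost rewriting, within a bounded number of steps."""
    nonterminals = {lhs for lhs, _ in rules}
    seen, found = set(), set()
    queue = deque([(start,)])
    for _ in range(max_steps):
        if not queue:
            break
        form = queue.popleft()
        if form in seen:
            continue
        seen.add(form)
        # Ordinary productions never erase terminals, so overlong forms
        # can be pruned early (the non-contracting property at work).
        if sum(1 for x in form if x not in nonterminals) > max_len:
            continue
        i = next((j for j, x in enumerate(form) if x in nonterminals), None)
        if i is None:
            found.add("".join(form))
            continue
        for lhs, rhs in rules:
            if lhs == form[i]:
                queue.append(form[:i] + rhs + form[i + 1:])
    return found

sentences = generate(GRAMMAR5)
```

With these settings the enumeration includes the empty sentence, the minimal foil <code>()</code>, and the surcatenation <code>(,)</code> of two empty sentences, while excluding ill-formed strings such as <code>)(</code>.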

Finally, it is worth trying to bring together the advantages of these diverse styles of grammar, to whatever extent they are compatible.  To do this, a prospective grammar must be capable of maintaining a high level of intermediate organization, like that arrived at in Grammar&nbsp;2, while respecting the principle of intermediate significance, and thus accumulating all the benefits of the context-free format in Grammar&nbsp;5.  A plausible synthesis of most of these features is given in Grammar&nbsp;6.

======Grammar 6======

Grammar&nbsp;6 has the intermediate alphabet <math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \},</math> with the production set <math>\mathfrak{K}</math> as listed in the next display.

<br>
 +
 
 +
{| align="center" cellpadding="12" cellspacing="0" style="border-top:1px solid black" width="90%"
 +
| align="left"  style="border-left:1px solid black;"  width="50%" |
 +
<math>\mathfrak{C} (\mathfrak{P}) : \text{Grammar 6}\!</math>
 +
| align="right" style="border-right:1px solid black;" width="50%" |
 +
<math>\mathfrak{Q} = \{ \, ^{\backprime\backprime} S' \, ^{\prime\prime}, \, ^{\backprime\backprime} F \, ^{\prime\prime}, \, ^{\backprime\backprime} R \, ^{\prime\prime}, \, ^{\backprime\backprime} T \, ^{\prime\prime} \, \}</math>
 +
|-
 +
| colspan="2" style="border-top:1px solid black; border-bottom:1px solid black; border-left:1px solid black; border-right:1px solid black" |
 +
<math>\begin{array}{rcll}
1.
& S
& :>
& \varepsilon
\\
2.
& S
& :>
& S'
\\
3.
& S'
& :>
& R
\\
4.
& S'
& :>
& F
\\
5.
& S'
& :>
& S' \, \cdot \, S'
\\
6.
& R
& :>
& m_1
\\
7.
& R
& :>
& p_j, \, \text{for each} \, j \in J
\\
8.
& R
& :>
& R \, \cdot \, R
\\
9.
& F
& :>
& ^{\backprime\backprime} \, \operatorname{()} \, ^{\prime\prime}
\\
10.
& F
& :>
& ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, T \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}
\\
11.
& T
& :>
& ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
\\
12.
& T
& :>
& S'
\\
13.
& T
& :>
& T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime}
\\
14.
& T
& :>
& T \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S'
\\
\end{array}</math>
|}
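
To see how the productions of Grammar&nbsp;6 operate in concert, the following sketch encodes them as rewrite rules and replays one leftmost derivation step by step.  The encoding is illustrative only: the single letter <code>p</code> stands in for an arbitrary paint <math>p_j,\!</math> and the rule numbering follows the table above.

```python
# Hypothetical sketch of Grammar 6 as rewrite rules; symbol names
# ("S", "S'", "R", "F", "T") follow the text, "p" stands for a paint p_j.
RULES = {
    1:  ("S",  []),                      # S  :> empty string
    2:  ("S",  ["S'"]),
    3:  ("S'", ["R"]),
    4:  ("S'", ["F"]),
    5:  ("S'", ["S'", "S'"]),
    6:  ("R",  [" "]),                   # blank m_1
    7:  ("R",  ["p"]),                   # one paint p_j, for illustration
    8:  ("R",  ["R", "R"]),
    9:  ("F",  ["(", ")"]),
    10: ("F",  ["(", "T", ")"]),
    11: ("T",  [","]),
    12: ("T",  ["S'"]),
    13: ("T",  ["T", ","]),
    14: ("T",  ["T", ",", "S'"]),
}

def apply(form, rule, at):
    """Rewrite the symbol at index `at` using the given rule number."""
    head, body = RULES[rule]
    assert form[at] == head, f"rule {rule} does not cover {form[at]}"
    return form[:at] + body + form[at + 1:]

# Leftmost derivation of the sentence "(p,())":
form = ["S"]
for rule, at in [(2, 0), (4, 0), (10, 0), (14, 1), (12, 1),
                 (3, 1), (7, 1), (4, 3), (9, 3)]:
    form = apply(form, rule, at)
print("".join(form))  # -> (p,())
```

Each step replaces exactly one non-terminal, so the printed string contains only terminal characters.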

<br>

The preceding development provides a typical example of how a description of a formal language that is effective and conceptually succinct at the outset, but terse to the point of letting its prospective interpreter waste exorbitant amounts of energy in trying to unravel its implications, can be converted into a form that is more efficient from an operational point of view, even if slightly less elegant.

The basic idea behind all of this machinery remains the same:  Besides the select body of formulas that are introduced as boundary conditions, it merely institutes the following general rule:

{| align="center" cellpadding="8" width="90%"
|-
| <math>\operatorname{If}</math>
| the strings <math>S_1, \ldots, S_k\!</math> are sentences,
|-
| <math>\operatorname{Then}</math>
| their concatenation in the form
|-
| &nbsp;
| <math>\operatorname{Conc}_{j=1}^k S_j \ = \ S_1 \, \cdot \, \ldots \, \cdot \, S_k</math>
|-
| &nbsp;
| is a sentence,
|-
| <math>\operatorname{And}</math>
| their surcatenation in the form
|-
| &nbsp;
| <math>\operatorname{Surc}_{j=1}^k S_j \ = \ ^{\backprime\backprime} \, \operatorname{(} \, ^{\prime\prime} \, \cdot \, S_1 \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, \ldots \, \cdot \, ^{\backprime\backprime} \, \operatorname{,} \, ^{\prime\prime} \, \cdot \, S_k \, \cdot \, ^{\backprime\backprime} \, \operatorname{)} \, ^{\prime\prime}</math>
|-
| &nbsp;
| is a sentence.
|}
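
Read operationally, the two clauses of the rule amount to a pair of string-building operations.  The following minimal sketch renders them in Python; the function names <code>conc</code> and <code>surc</code> merely echo the notation above and are not part of any established library.

```python
# Sketch of the concatenation and surcatenation rules stated above.
def conc(*sentences):
    """Conc_{j=1}^k S_j = S_1 . ... . S_k (plain concatenation)."""
    return "".join(sentences)

def surc(*sentences):
    """Surc_{j=1}^k S_j = "(" . S_1 . "," . ... . "," . S_k . ")"."""
    return "(" + ",".join(sentences) + ")"

print(conc("(p)", "(q)"))   # -> (p)(q)
print(surc("p", "q", "r"))  # -> (p,q,r)
```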

=====1.3.11.2.  Generalities About Formal Grammars=====

It is fitting to wrap up the foregoing developments by summarizing the notion of a formal grammar that appeared to evolve in the present case.  For the sake of future reference and the chance of a wider application, it is also useful to try to extract the scheme of a formalization that potentially holds for any formal language.  The following presentation of the notion of a formal grammar is adapted, with minor modifications, from the treatment in (DDQ, 60&ndash;61).

A ''formal grammar'' <math>\mathfrak{G}</math> is given by a four-tuple <math>\mathfrak{G} = ( \, ^{\backprime\backprime} S \, ^{\prime\prime}, \, \mathfrak{Q}, \, \mathfrak{A}, \, \mathfrak{K} \, )</math> that takes the following form of description:

<ol style="list-style-type:decimal">

<li><math>^{\backprime\backprime} S \, ^{\prime\prime}</math> is the ''initial'', ''special'', ''start'', or ''sentence'' symbol.  Since the letter <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> serves this function only in a special setting, its employment in this role need not create any confusion with its other typical uses as a string variable or as a sentence variable.</li>

<li><math>\mathfrak{Q} = \{ q_1, \ldots, q_m \}</math> is a finite set of ''intermediate symbols'', all distinct from <math>^{\backprime\backprime} S \, ^{\prime\prime}.</math></li>

<li><math>\mathfrak{A} = \{ a_1, \ldots, a_n \}</math> is a finite set of ''terminal symbols'', also known as the ''alphabet'' of <math>\mathfrak{G},</math> all distinct from <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> and disjoint from <math>\mathfrak{Q}.</math>  Depending on the particular conception of the language <math>\mathfrak{L}</math> that is ''covered'', ''generated'', ''governed'', or ''ruled'' by the grammar <math>\mathfrak{G},</math> that is, whether <math>\mathfrak{L}</math> is conceived to be a set of words, sentences, paragraphs, or more extended structures of discourse, it is usual to describe <math>\mathfrak{A}</math> as the ''alphabet'', ''lexicon'', ''vocabulary'', or ''phrase book'' of both the grammar <math>\mathfrak{G}</math> and the language <math>\mathfrak{L}</math> that it regulates.</li>

<li><math>\mathfrak{K}</math> is a finite set of ''characterizations''.  Depending on how they come into play, these are variously described as ''covering rules'', ''formations'', ''productions'', ''rewrite rules'', ''subsumptions'', ''transformations'', or ''typing rules''.</li>

</ol>
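
The four-tuple can be transcribed directly into a data structure.  The sketch below is one plausible rendering in Python; the field names are mnemonic choices, not fixed by the definition, and the sample grammar at the end is a hypothetical toy.

```python
# A minimal rendering of the four-tuple G = ("S", Q, A, K); the field
# names and the example grammar are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Grammar:
    start: str                    # the initial symbol "S"
    intermediates: frozenset      # Q = {q_1, ..., q_m}
    terminals: frozenset          # A = {a_1, ..., a_n}, the alphabet
    characterizations: frozenset  # K, pairs (S_1, S_2) of strings

    def __post_init__(self):
        # "S" must be distinct from Q and from A, and Q disjoint from A.
        assert self.start not in self.intermediates
        assert self.start not in self.terminals
        assert self.intermediates.isdisjoint(self.terminals)

g = Grammar("S", frozenset({"T"}), frozenset({"a", "b"}),
            frozenset({("S", "T"), ("T", "aTb"), ("T", "")}))
```

The `__post_init__` check enforces the distinctness conditions stated in items 2 and 3 above.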

To describe the elements of <math>\mathfrak{K}</math> it helps to define some additional terms:

<ol style="list-style-type:lower-latin">

<li>The symbols in <math>\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \cup \mathfrak{A}</math> form the ''augmented alphabet'' of <math>\mathfrak{G}.</math></li>

<li>The symbols in <math>\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q}</math> are the ''non-terminal symbols'' of <math>\mathfrak{G}.</math></li>

<li>The symbols in <math>\mathfrak{Q} \cup \mathfrak{A}</math> are the ''non-initial symbols'' of <math>\mathfrak{G}.</math></li>

<li>The strings in <math>( \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q} \cup \mathfrak{A} )^*</math> are the ''augmented strings'' for <math>\mathfrak{G}.</math></li>

<li>The strings in <math>\{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup (\mathfrak{Q} \cup \mathfrak{A})^*</math> are the ''sentential forms'' for <math>\mathfrak{G}.</math></li>

</ol>

Each characterization in <math>\mathfrak{K}</math> is an ordered pair of strings <math>(S_1, S_2)\!</math> that takes the following form:

{| align="center" cellpadding="8" width="90%"
| <math>S_1 \ = \ Q_1 \cdot q \cdot Q_2,</math>
|-
| <math>S_2 \ = \ Q_1 \cdot W \cdot Q_2.</math>
|}

In this scheme, <math>S_1\!</math> and <math>S_2\!</math> are members of the augmented strings for <math>\mathfrak{G}.</math>  More precisely, <math>S_1\!</math> is a non-empty string and a sentential form over <math>\mathfrak{G},</math> while <math>S_2\!</math> is a possibly empty string and also a sentential form over <math>\mathfrak{G}.</math>

Here also, <math>q\!</math> is a non-terminal symbol, that is, <math>q \in \{ \, ^{\backprime\backprime} S \, ^{\prime\prime} \, \} \cup \mathfrak{Q},</math> while <math>Q_1, Q_2,\!</math> and <math>W\!</math> are possibly empty strings of non-initial symbols, a fact that can be expressed in the form <math>Q_1, Q_2, W \in (\mathfrak{Q} \cup \mathfrak{A})^*.</math>

In practice, the couplets in <math>\mathfrak{K}</math> are used to ''derive'', to ''generate'', or to ''produce'' sentences of the corresponding language <math>\mathfrak{L} = \mathfrak{L} (\mathfrak{G}).</math>  The language <math>\mathfrak{L}</math> is then said to be ''governed'', ''licensed'', or ''regulated'' by the grammar <math>\mathfrak{G},</math> a circumstance that is expressed in the form <math>\mathfrak{L} = \langle \mathfrak{G} \rangle.</math>  In order to facilitate this active employment of the grammar, it is conventional to write the abstract characterization <math>(S_1, S_2)\!</math> and the specific characterization <math>(Q_1 \cdot q \cdot Q_2, \ Q_1 \cdot W \cdot Q_2)</math> in the following forms, respectively:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
S_1
& :>
& S_2
\\
Q_1 \cdot q \cdot Q_2
& :>
& Q_1 \cdot W \cdot Q_2
\\
\end{array}</math>
|}

In this usage, the characterization <math>S_1 :> S_2\!</math> is tantamount to a grammatical license to transform a string of the form <math>Q_1 \cdot q \cdot Q_2</math> into a string of the form <math>Q_1 \cdot W \cdot Q_2,</math> in effect, replacing the non-terminal symbol <math>q\!</math> with the non-initial string <math>W\!</math> in any selected, preserved, and closely adjoining context of the form <math>Q_1 \cdot \underline{~~~} \cdot Q_2.</math>  In this application the notation <math>S_1 :> S_2\!</math> can be read to say that <math>S_1\!</math> ''produces'' <math>S_2\!</math> or that <math>S_1\!</math> ''transforms into'' <math>S_2.\!</math>

An ''immediate derivation'' in <math>\mathfrak{G}</math> is an ordered pair <math>(W, W')\!</math> of sentential forms in <math>\mathfrak{G}</math> such that:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{llll}
W = Q_1 \cdot X \cdot Q_2,
& W' = Q_1 \cdot Y \cdot Q_2,
& \text{and}
& (X, Y) \in \mathfrak{K}.
\end{array}</math>
|}

As noted above, it is usual to express the condition <math>(X, Y) \in \mathfrak{K}</math> by writing <math>X :> Y \, \text{in} \, \mathfrak{G}.</math>

The immediate derivation relation is indicated by saying that <math>W\!</math> ''immediately derives'' <math>W',\!</math> by saying that <math>W'\!</math> is ''immediately derived'' from <math>W\!</math> in <math>\mathfrak{G},</math> and also by writing:

{| align="center" cellpadding="8" width="90%"
| <math>W ::> W'.\!</math>
|}

A ''derivation'' in <math>\mathfrak{G}</math> is a finite sequence <math>(W_1, \ldots, W_k)\!</math> of sentential forms over <math>\mathfrak{G}</math> such that each adjacent pair <math>(W_j, W_{j+1})\!</math> of sentential forms in the sequence is an immediate derivation in <math>\mathfrak{G},</math> in other words, such that:

{| align="center" cellpadding="8" width="90%"
| <math>W_j ::> W_{j+1},\ \text{for all}\ j = 1\ \text{to}\ k - 1.</math>
|}

If there exists a derivation <math>(W_1, \ldots, W_k)\!</math> in <math>\mathfrak{G},</math> one says that <math>W_1\!</math> ''derives'' <math>W_k\!</math> in <math>\mathfrak{G}</math> or that <math>W_k\!</math> is ''derivable'' from <math>W_1\!</math> in <math>\mathfrak{G},</math> and one typically summarizes the derivation by writing:

{| align="center" cellpadding="8" width="90%"
| <math>W_1 :\!*\!:> W_k.\!</math>
|}

The language <math>\mathfrak{L} = \mathfrak{L} (\mathfrak{G}) = \langle \mathfrak{G} \rangle</math> that is ''generated'' by the formal grammar <math>\mathfrak{G} = ( \, ^{\backprime\backprime} S \, ^{\prime\prime}, \, \mathfrak{Q}, \, \mathfrak{A}, \, \mathfrak{K} \, )</math> is the set of strings over the terminal alphabet <math>\mathfrak{A}</math> that are derivable from the initial symbol <math>^{\backprime\backprime} S \, ^{\prime\prime}</math> by way of the intermediate symbols in <math>\mathfrak{Q}</math> according to the characterizations in <math>\mathfrak{K}.</math>  In sum:

{| align="center" cellpadding="8" width="90%"
| <math>\mathfrak{L} (\mathfrak{G}) \ = \ \langle \mathfrak{G} \rangle \ = \ \{ \, W \in \mathfrak{A}^* \, : \, ^{\backprime\backprime} S \, ^{\prime\prime} \, :\!*\!:> \, W \, \}.</math>
|}

Finally, a string <math>W\!</math> is called a ''word'', a ''sentence'', or so on, of the language generated by <math>\mathfrak{G}</math> if and only if <math>W\!</math> is in <math>\mathfrak{L} (\mathfrak{G}).</math>
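
The definitions of immediate derivation, derivation, and generated language can be exercised on a small scale.  The following sketch performs a breadth-first search over sentential forms, treating each characterization as a string-rewrite pair.  The bound <code>max_len</code> is an artifact of the enumeration, not part of the definition, and the toy grammar is chosen purely for illustration.

```python
# Hedged sketch: enumerate L(G) = { W in A* : "S" :*:> W } up to a
# length bound, by iterating the immediate-derivation relation ::>.
from collections import deque

def derive_language(start, terminals, rules, max_len=4):
    """BFS over sentential forms; rules is a set of pairs (X, Y)."""
    seen, words = {start}, set()
    queue = deque([start])
    while queue:
        w = queue.popleft()
        if all(c in terminals for c in w):
            words.add(w)          # w is a word over the terminal alphabet
            continue
        for lhs, rhs in rules:    # try every characterization at every site
            i = w.find(lhs)
            while i != -1:
                w2 = w[:i] + rhs + w[i + len(lhs):]
                if len(w2) <= max_len + 2 and w2 not in seen:
                    seen.add(w2)
                    queue.append(w2)
                i = w.find(lhs, i + 1)
    return {w for w in words if len(w) <= max_len}

# Toy grammar: S :> T, T :> aTb, T :> empty, generating { a^n b^n }.
L = derive_language("S", {"a", "b"}, {("S", "T"), ("T", "aTb"), ("T", "")})
print(sorted(L, key=len))  # -> ['', 'ab', 'aabb']
```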

=====1.3.11.3.  The Cactus Language : Stylistics=====

{| align="center" cellpadding="0" cellspacing="0" width="90%"
|
<p>As a result, we can hardly conceive of how many possibilities there are for what we call objective reality.  Our sharp quills of knowledge are so narrow and so concentrated in particular directions that with science there are myriads of totally different real worlds, each one accessible from the next simply by slight alterations &mdash; shifts of gaze &mdash; of every particular discipline and subspecialty.
</p>
|-
| align="right" | &mdash; Herbert J. Bernstein, "Idols of Modern Science", [HJB, 38]
|}

This Subsection highlights an issue of ''style'' that arises in describing a formal language.  In broad terms, I use the word ''style'' to refer to a loosely specified class of formal systems, typically ones that have a set of distinctive features in common.  For instance, a style of proof system usually dictates one or more rules of inference that are acknowledged as conforming to that style.  In the present context, the word ''style'' is a natural choice to characterize the varieties of formal grammars, or any other sorts of formal systems that can be contemplated for deriving the sentences of a formal language.

In looking at what seems like an incidental issue, the discussion arrives at a critical point.  The question is:  What decides the issue of style?  Taking a given language as the object of discussion, what factors enter into and determine the choice of a style for its presentation, that is, a particular way of arranging and selecting the materials that come to be involved in a description, a grammar, or a theory of the language?  To what degree is the determination accidental, empirical, pragmatic, rhetorical, or stylistic, and to what extent is the choice essential, logical, and necessary?  For that matter, what determines the order of signs in a word, a sentence, a text, or a discussion?  All of the corresponding parallel questions about the character of this choice can be posed with regard to the constituent part as well as with regard to the main constitution of the formal language.

In order to answer this sort of question, at any level of articulation, one has to inquire into the type of distinction that it invokes, between arrangements and orders that are essential, logical, and necessary and orders and arrangements that are accidental, rhetorical, and stylistic.  As a rough guide to its comprehension, a ''logical order'', if it resides in the subject at all, can be approached by considering all of the ways of saying the same things, in all of the languages that are capable of saying roughly the same things about that subject.  Of course, the ''all'' that appears in this rule of thumb has to be interpreted as a fittingly qualified sort of universal.  For all practical purposes, it simply means ''all of the ways that a person can think of'' and ''all of the languages that a person can conceive of'', with all things being relative to the particular moment of investigation.  For all of these reasons, the rule must stand as little more than a rough idea of how to approach its object.

If it is demonstrated that a given formal language can be presented in any one of several styles of formal grammar, then the choice of a format is accidental, optional, and stylistic to the very extent that it is free.  But if it can be shown that a particular language cannot be successfully presented in a particular style of grammar, then the issue of style is no longer free and rhetorical, but becomes to that very degree essential, necessary, and obligatory, in other words, a question of the objective logical order that can be found to reside in the object language.

As a rough illustration of the difference between logical and rhetorical orders, consider the kinds of order that are expressed and exhibited in the following conjunction of implications:

{| align="center" cellpadding="8" width="90%"
| <math>X \Rightarrow Y\ \operatorname{and}\ Y \Rightarrow Z.</math>
|}

Here, there is a happy conformity between the logical content and the rhetorical form, indeed, to such a degree that one hardly notices the difference between them.  The rhetorical form is given by the order of sentences in the two implications and the order of implications in the conjunction.  The logical content is given by the order of propositions in the extended implicational sequence:

{| align="center" cellpadding="8" width="90%"
| <math>X\ \le\ Y\ \le\ Z.</math>
|}

To see the difference between form and content, or manner and matter, it is enough to observe a few of the ways that the expression can be varied without changing its meaning, for example:

{| align="center" cellpadding="8" width="90%"
| <math>Z \Leftarrow Y\ \operatorname{and}\ Y \Leftarrow X.</math>
|}
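
The claim that these rhetorical variants share one logical content can be checked mechanically.  The sketch below verifies, over all eight assignments, that the conjunction of implications, its reversed rendering, and the chained order relation <math>X \le Y \le Z</math> all define the same proposition.

```python
# Exhaustive check that the rhetorical variants above agree:
# (X => Y) and (Y => Z)  versus  (Z <= Y) and (Y <= X)  versus  X <= Y <= Z.
from itertools import product

def implies(a, b):
    return (not a) or b

for X, Y, Z in product([False, True], repeat=3):
    forward  = implies(X, Y) and implies(Y, Z)
    backward = implies(Y, Z) and implies(X, Y)  # Z <= Y and Y <= X, reordered
    chain    = (X <= Y <= Z)                    # Python chains <= like the text
    assert forward == backward == chain
print("all 8 cases agree")
```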

Any style of declarative programming, also called ''logic programming'', depends on a capacity, as embodied in a programming language or other formal system, to describe the relation between problems and solutions in logical terms.  A recurring problem in building this capacity is in bridging the gap between ostensibly non-logical orders and the logical orders that are used to describe and to represent them.  For instance, to mention just a couple of the most pressing cases, and the ones that are currently proving to be the most resistant to a complete analysis, one has the orders of dynamic evolution and rhetorical transition that manifest themselves in the process of inquiry and in the communication of its results.

This patch of the ongoing discussion is concerned with describing a particular variety of formal languages, whose typical representative is the painted cactus language <math>\mathfrak{L} = \mathfrak{C} (\mathfrak{P}).</math>  It is the intention of this work to interpret this language for propositional logic, and thus to use it as a sentential calculus, an order of reasoning that forms an active ingredient and a significant component of all logical reasoning.  To describe this language, the standard devices of formal grammars and formal language theory are more than adequate, but this only raises the next question:  What sorts of devices are exactly adequate, and fit the task to a "T"?  The ultimate desire is to turn the tables on the order of description, and so begins a process of eversion that evolves to the point of asking:  To what extent can the language capture the essential features and laws of its own grammar and describe the active principles of its own generation?  In other words:  How well can the language be described by using the language itself to do so?

In order to speak to these questions, I have to express what a grammar says about a language in terms of what a language can say on its own.  In effect, it is necessary to analyze the kinds of meaningful statements that grammars are capable of making about languages in general and to relate them to the kinds of meaningful statements that the syntactic ''sentences'' of the cactus language might be interpreted as making about the very same topics.  So far in the present discussion, the sentences of the cactus language do not make any meaningful statements at all, much less any meaningful statements about languages and their constitutions.  As of yet, these sentences subsist in the form of purely abstract, formal, and uninterpreted combinatorial constructions.

Before the capacity of a language to describe itself can be evaluated, the missing link to meaning has to be supplied for each of its strings.  This calls for a dimension of semantics and a notion of interpretation, topics that are taken up for the case of the cactus language <math>\mathfrak{C} (\mathfrak{P})</math> in Subsection 1.3.10.12.  Once a plausible semantics is prescribed for this language it will be possible to return to these questions and to address them in a meaningful way.

The prominent issue at this point is the distinct placements of formal languages and formal grammars with respect to the question of meaning.  The sentences of a formal language are merely the abstract strings of abstract signs that happen to belong to a certain set.  They do not by themselves make any meaningful statements at all, not without mounting a separate effort of interpretation, but the rules of a formal grammar make meaningful statements about a formal language, to the extent that they say what strings belong to it and what strings do not.  Thus, the formal grammar, a formalism that appears to be even more skeletal than the formal language, still has bits and pieces of meaning attached to it.  In a sense, the question of meaning is factored into two parts, structure and value, leaving the aspect of value reduced in complexity and subtlety to the simple question of belonging.  Whether this single bit of meaningful value is enough to encompass all of the dimensions of meaning that we require, and whether it can be compounded to cover the complexity that actually exists in the realm of meaning &mdash; these are questions for an extended future inquiry.

Perhaps I ought to comment on the differences between the present and the standard definition of a formal grammar, since I am attempting to strike a compromise with several alternative conventions of usage, and thus to leave certain options open for future exploration.  All of the changes are minor, in the sense that they are not intended to alter the classes of languages that are able to be generated, but only to clear up various ambiguities and sundry obscurities that affect their conception.

Primarily, the conventional scope of non-terminal symbols was expanded to encompass the sentence symbol, mainly on account of all the contexts where the initial and the intermediate symbols are naturally invoked in the same breath.  By way of compensating for the usual exclusion of the sentence symbol from the non-terminal class, an equivalent distinction was introduced in the fashion of a distinction between the initial and the intermediate symbols, and this serves its purpose in all of those contexts where the two kinds of symbols need to be treated separately.

At the present point, I remain a bit worried about the motivations and the justifications for introducing this distinction, under any name, in the first place.  It is purportedly designed to guarantee that the process of derivation at least gets started in a definite direction, while the real questions have to do with how it all ends.  The excuses of efficiency and expediency that I offered as plausible and sufficient reasons for distinguishing between empty and significant sentences are likely to be ephemeral, if not entirely illusory, since intermediate symbols are still permitted to characterize or to cover themselves, not to mention being allowed to cover the empty string, and so the very types of traps that one exerts oneself to avoid at the outset are always there to afflict the process at all of the intervening times.

If one reflects on the form of grammar that is being prescribed here, it looks as if one sought, rather futilely, to avoid the problems of recursion by proscribing the main program from calling itself, while allowing any subprogram to do so.  But any trouble that is avoidable in the part is also avoidable in the main, while any trouble that is inevitable in the part is also inevitable in the main.  Consequently, I am reserving the right to change my mind at a later stage, perhaps to permit the initial symbol to characterize, to cover, to regenerate, or to produce itself, if that turns out to be the best way in the end.

Before I leave this Subsection, I need to say a few things about the manner in which the abstract theory of formal languages and the pragmatic theory of sign relations interact with each other.

Formal language theory can seem like an awfully picky subject at times, treating every symbol as a thing in itself the way it does, sorting out the nominal types of symbols as objects in themselves, and singling out the passing tokens of symbols as distinct entities in their own rights.  It has to continue doing this, if not for any better reason than to aid in clarifying the kinds of languages that people are accustomed to use, to assist in writing computer programs that are capable of parsing real sentences, and to serve in designing programming languages that people would like to become accustomed to use.  As a matter of fact, the only time that formal language theory becomes too picky, or a bit too myopic in its focus, is when it leads one to think that one is dealing with the thing itself and not just with the sign of it, in other words, when the people who use the tools of formal language theory forget that they are dealing with the mere signs of more interesting objects and not with the objects of ultimate interest in and of themselves.

As a result, there are a number of deleterious effects that can arise from the extreme pickiness of formal language theory, arising, as is often the case, when formal theorists forget the practical context of theorization.  It frequently happens that the exacting task of defining the membership of a formal language leads one to think that this object and this object alone is the justifiable end of the whole exercise.  The distractions of this mediate objective render one liable to forget that one's penultimate interest lies always with various kinds of equivalence classes of signs, not entirely or exclusively with their more meticulous representatives.

When this happens, one typically goes on working oblivious to the fact that many details about what transpires in the meantime do not matter at all in the end, and one is likely to remain in blissful ignorance of the circumstance that many special details of language membership are bound, destined, and pre-determined to be glossed over with some measure of indifference, especially when it comes down to the final constitution of those equivalence classes of signs that are able to answer for the genuine objects of the whole enterprise of language.  When any form of theory, against its initial and its best intentions, leads to this kind of absence of mind that is no longer beneficial in all of its main effects, the situation calls for an antidotal form of theory, one that can restore the presence of mind that all forms of theory are meant to augment.

The pragmatic theory of sign relations is called for in settings where everything that can be named has many other names, that is to say, in the usual case.  Of course, one would like to replace this superfluous multiplicity of signs with an organized system of canonical signs, one for each object that needs to be denoted, but reducing the redundancy too far, beyond what is necessary to eliminate the factor of "noise" in the language, that is, to clear up its effectively useless distractions, can destroy the very utility of a typical language, which is intended to provide a ready means to express a present situation, clear or not, and to describe an ongoing condition of experience in just the way that it seems to present itself.  Within this fleshed out framework of language, moreover, the process of transforming the manifestations of a sign from its ordinary appearance to its canonical aspect is the whole problem of computation in a nutshell.

It is a well-known truth, but an often forgotten fact, that nobody computes with numbers, but solely with numerals in respect of numbers, and numerals themselves are symbols.  Among other things, this renders all discussion of numeric versus symbolic computation a bit beside the point, since it is only a question of what kinds of symbols are best for one's immediate application or for one's selection of ongoing objectives.  The numerals that everybody knows best are just the canonical symbols, the standard signs or the normal terms for numbers, and the process of computation is a matter of getting from the arbitrarily obscure signs that the data of a situation are capable of throwing one's way to the indications of its character that are clear enough to motivate action.

Having broached the distinction between propositions and sentences, one can see its similarity to the distinction between numbers and numerals.  What are the implications of the foregoing considerations for reasoning about propositions and for the realm of reckonings in sentential logic?  If the purpose of a sentence is just to denote a proposition, then the proposition is just the object of whatever sign is taken for a sentence.  This means that the computational manifestation of a piece of reasoning about propositions amounts to a process that takes place entirely within a language of sentences, a procedure that can rationalize its account by referring to the denominations of these sentences among propositions.

These concerns about boundary conditions betray a more general issue.  Already by this point in discussion the limits of the purely syntactic approach to a language are beginning to be visible.  It is not that one cannot go a whole lot further by this road in the analysis of a particular language and in the study of languages in general, but when it comes to the questions of understanding the purpose of a language, of extending its usage in a chosen direction, or of designing a language for a particular set of uses, what matters above all else are the ''pragmatic equivalence classes'' of signs that are demanded by the application and intended by the designer, and not so much the peculiar characters of the signs that represent these classes of practical meaning.
 +
 
 +
Any description of a language is bound to have alternative descriptions.  More precisely, a circumscribed description of a formal language, as any effectively finite description is bound to be, is certain to suggest the equally likely existence and the possible utility of other descriptions.  A single formal grammar describes but a single formal language, but any formal language is described by many different formal grammars, not all of which afford the same grasp of its structure, provide an equivalent comprehension of its character, or yield an interchangeable view of its aspects.  Consequently, even with respect to the same formal language, different formal grammars are typically better for different purposes.
 +
 
 +
With the distinctions that evolve among the different styles of grammar, and with the preferences that different observers display toward them, there naturally comes the question:  What is the root of this evolution?
 +
 
 +
One dimension of variation in the styles of formal grammars can be seen by treating the union of languages, and especially the disjoint union of languages, as a ''sum'', by treating the concatenation of languages as a ''product'', and then by distinguishing the styles of analysis that favor ''sums of products'' from those that favor ''products of sums'' as their canonical forms of description.  If one examines the relation between languages and grammars carefully enough to see the presence and the influence of these different styles, and when one comes to appreciate the ways that different styles of grammars can be used with different degrees of success for different purposes, then one begins to see the possibility that alternative styles of description can be based on altogether different linguistic and logical operations.

It is possible to trace this divergence of styles to an even more primitive division, one that distinguishes the ''additive'' or the ''parallel'' styles from the ''multiplicative'' or the ''serial'' styles.  The issue is somewhat confused by the fact that an ''additive'' analysis is typically expressed in the form of a ''series'', in other words, a disjoint union of sets or a linear sum of their independent effects.  But it is easy enough to sort this out if one observes the more telling connection between ''parallel'' and ''independent''.  Another way to keep the right associations straight is to employ the term ''sequential'' in preference to the more misleading term ''serial''.  Whatever one calls this broad division of styles, the scope and sweep of their dimensions of variation can be delineated in the following way:

# The ''additive'' or ''parallel'' styles favor ''sums of products'' <math>(\textstyle\sum\prod)</math> as canonical forms of expression, pulling sums, unions, co-products, and logical disjunctions to the outermost layers of analysis and synthesis, while pushing products, intersections, concatenations, and logical conjunctions to the innermost levels of articulation and generation.  In propositional logic, this style leads to the ''disjunctive normal form'' (DNF).
# The ''multiplicative'' or ''serial'' styles favor ''products of sums'' <math>(\textstyle\prod\sum)</math> as canonical forms of expression, pulling products, intersections, concatenations, and logical conjunctions to the outermost layers of analysis and synthesis, while pushing sums, unions, co-products, and logical disjunctions to the innermost levels of articulation and generation.  In propositional logic, this style leads to the ''conjunctive normal form'' (CNF).
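To make the contrast concrete, here is a minimal Python sketch, with an illustrative Boolean function of my own choosing (exclusive disjunction), written once in each canonical style:

```python
# One Boolean function, exclusive disjunction, in the two canonical styles:
# a sum of products (DNF) and a product of sums (CNF).

def f_dnf(x, y):
    # Disjunctive normal form: OR (sum) outermost, ANDs (products) innermost.
    return (x and not y) or (not x and y)

def f_cnf(x, y):
    # Conjunctive normal form: AND (product) outermost, ORs (sums) innermost.
    return (x or y) and (not x or not y)

# The two canonical forms agree on every input.
assert all(f_dnf(x, y) == f_cnf(x, y)
           for x in (False, True) for y in (False, True))
```

Either form describes the same function; the styles differ only in which operation is pulled to the outermost layer of the expression.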
There is a curious sort of diagnostic clue that often serves to reveal the dominance of one mode or the other within an individual thinker's cognitive style.  Examined on the question of what constitutes the ''natural numbers'', an ''additive'' thinker tends to start the sequence at 0, while a ''multiplicative'' thinker tends to regard it as beginning at 1.

In any style of description, grammar, or theory of a language, it is usually possible to tease out the influence of these contrasting traits, namely, the ''additive'' attitude versus the ''multiplicative'' tendency that go to make up the particular style in question, and even to determine the dominant inclination or point of view that establishes its perspective on the target domain.

In each style of formal grammar, the ''multiplicative'' aspect is present in the sequential concatenation of signs, both in the augmented strings and in the terminal strings.  In settings where the non-terminal symbols classify types of strings, the concatenation of the non-terminal symbols signifies the cartesian product over the corresponding sets of strings.
In the context-free style of formal grammar, the ''additive'' aspect is easy enough to spot.  It is signaled by the parallel covering of many augmented strings or sentential forms by the same non-terminal symbol.  Expressed in active terms, this calls for the independent rewriting of that non-terminal symbol by a number of different successors, as in the following scheme:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{matrix}
q & :> & W_1 \\
\\
\cdots & \cdots & \cdots \\
\\
q & :> & W_k \\
\end{matrix}</math>
|}

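The rewriting scheme above can be animated in a short Python sketch.  The particular grammar is an illustrative assumption of my own, a single non-terminal <math>q\!</math> covered by two successors, generating the strings <math>a^n b^n\!</math>:

```python
# A single non-terminal "q" covered by several successors, as in the scheme
# q :> W_1, ..., q :> W_k, here with W_1 = "ab" and W_2 = "aqb".

rules = {"q": ["ab", "aqb"]}

def derive(form, depth):
    """All terminal strings derivable from a sentential form in <= depth steps."""
    if "q" not in form:
        return {form}          # already terminal
    if depth == 0:
        return set()           # ran out of rewriting steps
    i = form.index("q")
    results = set()
    for w in rules["q"]:       # rewrite "q" independently by each successor
        results |= derive(form[:i] + w + form[i+1:], depth - 1)
    return results

# Three rounds of rewriting yield the first three strings of the language.
assert derive("q", 3) == {"ab", "aabb", "aaabbb"}
```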
It is useful to examine the relationship between the grammatical covering or production relation <math>(:>\!)</math> and the logical relation of implication <math>(\Rightarrow),</math> with one eye to what they have in common and one eye to how they differ.  The production <math>q :> W\!</math> says that the appearance of the symbol <math>q\!</math> in a sentential form implies the possibility of exchanging it for <math>W.\!</math>  Although this sounds like a ''possible implication'', to the extent that ''<math>q\!</math> implies a possible <math>W\!</math>'' or that ''<math>q\!</math> possibly implies <math>W,\!</math>'' the qualifiers ''possible'' and ''possibly'' are the critical elements in these statements, and they are crucial to the meaning of what is actually being implied.  In effect, these qualifications reverse the direction of implication, yielding <math>^{\backprime\backprime} \, q \Leftarrow W \, ^{\prime\prime}</math> as the best analogue for the sense of the production.
One way to sum this up is to say that non-terminal symbols have the significance of hypotheses.  The terminal strings form the empirical matter of a language, while the non-terminal symbols mark the patterns or the types of substrings that can be noticed in the profusion of data.  If one observes a portion of a terminal string that falls into the pattern of the sentential form <math>W,\!</math> then it is an admissible hypothesis, according to the theory of the language that is constituted by the formal grammar, that this piece not only fits the type <math>q\!</math> but even comes to be generated under the auspices of the non-terminal symbol <math>^{\backprime\backprime} q ^{\prime\prime}.</math>
A moment's reflection on the issue of style, giving due consideration to the received array of stylistic choices, ought to inspire at least the question:  "Are these the only choices there are?"  In the present setting, there are abundant indications that other options, more differentiated varieties of description and more integrated ways of approaching individual languages, are likely to be conceivable, feasible, and even more ultimately viable.  If a suitably generic style, one that incorporates the full scope of logical combinations and operations, is broadly available, then it would no longer be necessary, or even apt, to argue in universal terms about which style is best, but more useful to investigate how we might adapt the local styles to the local requirements.  The medium of a generic style would yield a viable compromise between additive and multiplicative canons, and render the choice between parallel and serial a false alternative, at least, when expressed in the globally exclusive terms that are currently most commonly adopted to pose it.

One set of indications comes from the study of machines, languages, and computation, especially the theories of their structures and relations.  The forms of composition and decomposition that are generally known as ''parallel'' and ''serial'' are merely the extreme special cases, in variant directions of specialization, of a more generic form, usually called the ''cascade'' form of combination.  This is a well-known fact in the theories that deal with automata and their associated formal languages, but its implications do not seem to be widely appreciated outside these fields.  In particular, it dispels the need to choose one extreme or the other, since most of the natural cases are likely to exist somewhere in between.

Another set of indications appears in algebra and category theory, where forms of composition and decomposition related to the cascade combination, namely, the ''semi-direct product'' and its special case, the ''wreath product'', are encountered at higher levels of generality than the cartesian products of sets or the direct products of spaces.

In these domains of operation, one finds it necessary to consider also the ''co-product'' of sets and spaces, a construction that artificially creates a disjoint union of sets, that is, a union of spaces that are being treated as independent.  It does this, in effect, by ''indexing'', ''coloring'', or ''preparing'' the otherwise possibly overlapping domains that are being combined.  What renders this a ''chimera'' or a ''hybrid'' form of combination is the fact that this indexing is tantamount to a cartesian product of a singleton set, namely, the conventional ''index'', ''color'', or ''affix'' in question, with the individual domain that is entering as a factor, a term, or a participant in the final result.

One of the insights that arises out of Peirce's logical work is that the set operations of complementation, intersection, and union, along with the logical operations of negation, conjunction, and disjunction that operate in isomorphic tandem with them, are not as fundamental as they first appear.  This is because all of them can be constructed from or derived from a smaller set of operations, in fact, taking the logical side of things, from either one of two ''sole sufficient'' operators, called ''amphecks'' by Peirce, ''strokes'' by those who re-discovered them later, and known in computer science as the NAND and the NNOR operators.  For this reason, that is, by virtue of their precedence in the orders of construction and derivation, these operations have to be regarded as the simplest and the most primitive in principle, even if they are scarcely recognized as lying among the more familiar elements of logic.
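The point about sole sufficient operators can be checked directly.  In the following Python sketch (the helper-function names are my own), negation, conjunction, and disjunction are all built from NAND alone:

```python
# NAND, one of Peirce's two "amphecks", suffices to construct the whole
# familiar family of Boolean operations.

def nand(x, y):
    return not (x and y)

def not_(x):
    return nand(x, x)                      # x NAND x  =  not x

def and_(x, y):
    return nand(nand(x, y), nand(x, y))    # negating a NAND restores AND

def or_(x, y):
    return nand(nand(x, x), nand(y, y))    # De Morgan: not(not x and not y)

bools = (False, True)
assert all(not_(x) == (not x) for x in bools)
assert all(and_(x, y) == (x and y) for x in bools for y in bools)
assert all(or_(x, y) == (x or y) for x in bools for y in bools)
```

An exactly parallel construction works from NNOR, the other ampheck.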
I am throwing together a wide variety of different operations into each of the bins labeled ''additive'' and ''multiplicative'', but it is easy to observe a natural organization and even some relations approaching isomorphisms among and between the members of each class.
The relation between logical disjunction and set-theoretic union and the relation between logical conjunction and set-theoretic intersection ought to be clear enough for the purposes of the immediately present context.  In any case, all of these relations are scheduled to receive a thorough examination in a subsequent discussion (Subsection 1.3.10.13).  But the relation of a set-theoretic union to a category-theoretic co-product and the relation of a set-theoretic intersection to a syntactic concatenation deserve a closer look at this point.

The effect of a co-product as a ''disjointed union'', in other words, one that creates an object tantamount to a disjoint union of sets in the resulting co-product even if some of these sets intersect non-trivially and even if some of them are identical ''in reality'', can be achieved in several ways.  The most usual conception is that of making a ''separate copy'', for each part of the intended co-product, of the set that is intended to go there.  Often one thinks of the set that is assigned to a particular part of the co-product as being distinguished by a particular ''color'', in other words, by the attachment of a distinct ''index'', ''label'', or ''tag'', being a marker that is inherited by and passed on to every element of the set in that part.  A concrete image of this construction can be achieved by imagining that each set and each element of each set is placed in an ordered pair with the sign of its color, index, label, or tag.  One describes this as the ''injection'' of each set into the corresponding ''part'' of the co-product.

For example, given the sets <math>P\!</math> and <math>Q,\!</math> overlapping or not, one can define the ''indexed'' or ''marked'' sets <math>P_{[1]}\!</math> and <math>Q_{[2]},\!</math> amounting to the copy of <math>P\!</math> into the first part of the co-product and the copy of <math>Q\!</math> into the second part of the co-product, in the following manner:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lllll}
P_{[1]} & = & (P, 1) & = & \{ (x, 1) : x \in P \}, \\
Q_{[2]} & = & (Q, 2) & = & \{ (x, 2) : x \in Q \}. \\
\end{array}</math>
|}

Using the coproduct operator (<math>\textstyle\coprod</math>) for this construction, the ''sum'', the ''coproduct'', or the ''disjointed union'' of <math>P\!</math> and <math>Q\!</math> in that order can be represented as the ordinary union of <math>P_{[1]}\!</math> and <math>Q_{[2]}.\!</math>

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
P \coprod Q & = & P_{[1]} \cup Q_{[2]}. \\
\end{array}</math>
|}

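The construction just given is easy to carry out concretely.  In this Python sketch, the sets <math>P\!</math> and <math>Q\!</math> are small illustrative choices of my own:

```python
# The marked sets P_[1] and Q_[2]: every element is paired with the index
# of its part, so overlapping sets become disjoint in the co-product.

P = {1, 2, 3}
Q = {3, 4}                      # overlaps P in the element 3

P1 = {(x, 1) for x in P}        # P_[1] = { (x, 1) : x in P }
Q2 = {(x, 2) for x in Q}        # Q_[2] = { (x, 2) : x in Q }

coproduct = P1 | Q2             # P ∐ Q as the ordinary union of the copies

# The parts stay disjoint, so the co-product counts the overlap twice:
# |P ∐ Q| = |P| + |Q|, even though |P ∪ Q| is smaller.
assert len(coproduct) == len(P) + len(Q) == 5
assert len(P | Q) == 4
```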
The concatenation <math>\mathfrak{L}_1 \cdot \mathfrak{L}_2</math> of the formal languages <math>\mathfrak{L}_1</math> and <math>\mathfrak{L}_2</math> is just the cartesian product of sets <math>\mathfrak{L}_1 \times \mathfrak{L}_2</math> without the extra <math>\times</math>'s, but the relation of cartesian products to set-theoretic intersections and thus to logical conjunctions is far from being clear.  One way of seeing a type of relation is to focus on the information that is needed to specify each construction, and thus to reflect on the signs that are used to carry this information.  As a first approach to the topic of information, according to a strategy that seeks to be as elementary and as informal as possible, I introduce the following set of ideas, intended to be taken in a very provisional way.
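A few lines of Python show the sense in which concatenation is the cartesian product without the extra <math>\times</math>'s (the sample languages here are illustrative assumptions):

```python
# Concatenation of formal languages: form the cartesian product of the two
# sets of strings, then write each pair as a single string.

L1 = {"a", "b"}
L2 = {"c", "cc"}

pairs = {(s, t) for s in L1 for t in L2}   # L1 × L2, the cartesian product
concat = {s + t for (s, t) in pairs}       # L1 · L2, the pairs written flat

assert concat == {"ac", "acc", "bc", "bcc"}
```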
A ''stricture'' is a specification of a certain set in a certain place, relative to a number of other sets, yet to be specified.  It is assumed that one knows enough to tell if two strictures are equivalent as pieces of information, but any more determinate indications, like names for the places that are mentioned in the stricture, or bounds on the number of places that are involved, are regarded as being extraneous impositions, outside the proper concern of the definition, no matter how convenient they are found to be for a particular discussion.  As a schematic form of illustration, a stricture can be pictured in the following shape:

:{| cellpadding="8"
| <math>^{\backprime\backprime}</math>
| <math>\ldots \times X \times Q \times X \times \ldots</math>
| <math>^{\prime\prime}</math>
|}

A ''strait'' is the object that is specified by a stricture, in effect, a certain set in a certain place of an otherwise yet to be specified relation.  Somewhat sketchily, the strait that corresponds to the stricture just given can be pictured in the following shape:

:{| cellpadding="8"
| &nbsp;
| <math>\ldots \times X \times Q \times X \times \ldots</math>
| &nbsp;
|}

In this picture <math>Q\!</math> is a certain set and <math>X\!</math> is the universe of discourse that is relevant to a given discussion.  Since a stricture does not, by itself, contain a sufficient amount of information to specify the number of sets that it intends to set in place, or even to specify the absolute location of the set that it does set in place, it appears to place an unspecified number of unspecified sets in a vague and uncertain strait.  Taken out of its interpretive context, the residual information that a stricture can convey makes all of the following potentially equivalent as strictures:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{ccccccc}
^{\backprime\backprime} Q ^{\prime\prime}
& , &
^{\backprime\backprime} X \times Q \times X ^{\prime\prime}
& , &
^{\backprime\backprime} X \times X \times Q \times X \times X ^{\prime\prime}
& , &
\ldots
\\
\end{array}</math>
|}

With respect to what these strictures specify, this leaves all of the following equivalent as straits:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{ccccccc}
Q
& = &
X \times Q \times X
& = &
X \times X \times Q \times X \times X
& = &
\ldots
\\
\end{array}</math>
|}

Within the framework of a particular discussion, it is customary to set a bound on the number of places and to limit the variety of sets that are regarded as being under active consideration, and it is also convenient to index the places of the indicated relations, and of their encompassing cartesian products, in some fixed way.  But the whole idea of a stricture is to specify a strait that is capable of extending through and beyond any fixed frame of discussion.  In other words, a stricture is conceived to constrain a strait at a certain point, and then to leave it literally embedded, if tacitly expressed, in a yet to be fully specified relation, one that involves an unspecified number of unspecified domains.

A quantity of information is a measure of constraint.  In this respect, a set of comparable strictures is ordered on account of the information that each one conveys, and a system of comparable straits is ordered in accord with the amount of information that it takes to pin each one of them down.  Strictures that are more constraining and straits that are more constrained are placed at higher levels of information than those that are less so, and entities that involve more information are said to have a greater ''complexity'' in comparison with those entities that involve less information, which are said to have a greater ''simplicity''.

In order to create a concrete example, let me now institute a frame of discussion where the number of places in a relation is bounded at two, and where the variety of sets under active consideration is limited to the typical subsets <math>P\!</math> and <math>Q\!</math> of a universe <math>X.\!</math>  Under these conditions, one can use the following sorts of expression as schematic strictures:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{matrix}
^{\backprime\backprime} X ^{\prime\prime} &
^{\backprime\backprime} P ^{\prime\prime} &
^{\backprime\backprime} Q ^{\prime\prime} \\
\\
^{\backprime\backprime} X \times X ^{\prime\prime} &
^{\backprime\backprime} X \times P ^{\prime\prime} &
^{\backprime\backprime} X \times Q ^{\prime\prime} \\
\\
^{\backprime\backprime} P \times X ^{\prime\prime} &
^{\backprime\backprime} P \times P ^{\prime\prime} &
^{\backprime\backprime} P \times Q ^{\prime\prime} \\
\\
^{\backprime\backprime} Q \times X ^{\prime\prime} &
^{\backprime\backprime} Q \times P ^{\prime\prime} &
^{\backprime\backprime} Q \times Q ^{\prime\prime} \\
\end{matrix}</math>
|}

These strictures and their corresponding straits are stratified according to their amounts of information, or their levels of constraint, as follows:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lcccc}
\text{High:}
& ^{\backprime\backprime} P \times P ^{\prime\prime}
& ^{\backprime\backprime} P \times Q ^{\prime\prime}
& ^{\backprime\backprime} Q \times P ^{\prime\prime}
& ^{\backprime\backprime} Q \times Q ^{\prime\prime}
\\
\\
\text{Med:}
& ^{\backprime\backprime} P ^{\prime\prime}
& ^{\backprime\backprime} X \times P ^{\prime\prime}
& ^{\backprime\backprime} P \times X ^{\prime\prime}
\\
\\
\text{Med:}
& ^{\backprime\backprime} Q ^{\prime\prime}
& ^{\backprime\backprime} X \times Q ^{\prime\prime}
& ^{\backprime\backprime} Q \times X ^{\prime\prime}
\\
\\
\text{Low:}
& ^{\backprime\backprime} X ^{\prime\prime}
& ^{\backprime\backprime} X \times X ^{\prime\prime}
\\
\end{array}</math>
|}

Within this framework, the more complex strait <math>P \times Q</math> can be expressed in terms of the simpler straits, <math>P \times X</math> and <math>X \times Q.</math>  More specifically, it lends itself to being analyzed as their intersection, in the following way:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lllll}
P \times Q & = & P \times X & \cap & X \times Q. \\
\end{array}</math>
|}

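This analysis can be verified mechanically.  In the Python sketch below, the universe <math>X\!</math> and the subsets <math>P\!</math> and <math>Q\!</math> are small illustrative choices:

```python
# The complex strait P × Q recovered as the intersection of the simpler
# straits P × X and X × Q.

X = {0, 1, 2, 3}
P = {0, 1}
Q = {2, 3}

PxX = {(p, x) for p in P for x in X}   # constrains only the first place
XxQ = {(x, q) for x in X for q in Q}   # constrains only the second place
PxQ = {(p, q) for p in P for q in Q}   # constrains both places

assert PxQ == PxX & XxQ
```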
From here it is easy to see the relation of concatenation, by virtue of these types of intersection, to the logical conjunction of propositions.  The cartesian product <math>P \times Q</math> is described by a conjunction of propositions, namely, <math>P_{[1]} \land Q_{[2]},</math> subject to the following interpretation:
# <math>P_{[1]}\!</math> asserts that there is an element from the set <math>P\!</math> in the first place of the product.
# <math>Q_{[2]}\!</math> asserts that there is an element from the set <math>Q\!</math> in the second place of the product.
The integration of these two pieces of information can be taken in that measure to specify a yet to be fully determined relation.
In a corresponding fashion at the level of the elements, the ordered pair <math>(p, q)\!</math> is described by a conjunction of propositions, namely, <math>p_{[1]} \land q_{[2]},</math> subject to the following interpretation:
# <math>p_{[1]}\!</math> says that <math>p\!</math> is in the first place of the product element under construction.
# <math>q_{[2]}\!</math> says that <math>q\!</math> is in the second place of the product element under construction.
Notice that, in construing the cartesian product of the sets <math>P\!</math> and <math>Q\!</math> or the concatenation of the languages <math>\mathfrak{L}_1</math> and <math>\mathfrak{L}_2</math> in this way, one shifts the level of the active construction from the tupling of the elements in <math>P\!</math> and <math>Q\!</math> or the concatenation of the strings that are internal to the languages <math>\mathfrak{L}_1</math> and <math>\mathfrak{L}_2</math> to the concatenation of the external signs that it takes to indicate these sets or these languages, in other words, passing to a conjunction of indexed propositions, <math>P_{[1]}\!</math> and <math>Q_{[2]},\!</math> or to a conjunction of assertions, <math>(\mathfrak{L}_1)_{[1]}</math> and <math>(\mathfrak{L}_2)_{[2]},</math> that marks the sets or the languages in question for insertion in the indicated places of a product set or a product language, respectively.  In effect, the subscripting by the indices <math>^{\backprime\backprime} [1] ^{\prime\prime}</math> and <math>^{\backprime\backprime} [2] ^{\prime\prime}</math> can be recognized as a special case of concatenation, albeit through the posting of editorial remarks from an external ''mark-up'' language.
In order to systematize the relations that strictures and straits placed at higher levels of complexity, constraint, information, and organization have with those that are placed at the associated lower levels, I introduce the following pair of definitions:
The <math>j^\text{th}\!</math> ''excerpt'' of a stricture of the form <math>^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime},</math> regarded within a frame of discussion where the number of places is limited to <math>k,\!</math> is the stricture of the form <math>^{\backprime\backprime} \, X \times \ldots \times S_j \times \ldots \times X \, ^{\prime\prime}.</math>  In the proper context, this can be written more succinctly as the stricture <math>^{\backprime\backprime} \, (S_j)_{[j]} \, ^{\prime\prime},</math> an assertion that places the <math>j^\text{th}\!</math> set in the <math>j^\text{th}\!</math> place of the product.
The <math>j^\text{th}\!</math> ''extract'' of a strait of the form <math>S_1 \times \ldots \times S_k,\!</math> constrained to a frame of discussion where the number of places is restricted to <math>k,\!</math> is the strait of the form <math>X \times \ldots \times S_j \times \ldots \times X.</math>  In the appropriate context, this can be denoted more succinctly by the stricture <math>^{\backprime\backprime} \, (S_j)_{[j]} \, ^{\prime\prime},</math> an assertion that places the <math>j^\text{th}\!</math> set in the <math>j^\text{th}\!</math> place of the product.
In these terms, a stricture of the form <math>^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime}</math> can be expressed in terms of simpler strictures, to wit, as a conjunction of its <math>k\!</math> excerpts:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
^{\backprime\backprime} \, S_1 \times \ldots \times S_k \, ^{\prime\prime}
& = &
^{\backprime\backprime} \, (S_1)_{[1]} \, ^{\prime\prime}
\, \land \, \ldots \, \land \,
^{\backprime\backprime} \, (S_k)_{[k]} \, ^{\prime\prime}.
\end{array}</math>
|}

In a similar vein, a strait of the form <math>S_1 \times \ldots \times S_k\!</math> can be expressed in terms of simpler straits, namely, as an intersection of its <math>k\!</math> extracts:

{| align="center" cellpadding="8" width="90%"
|
<math>\begin{array}{lll}
S_1 \times \ldots \times S_k & = & (S_1)_{[1]} \, \cap \, \ldots \, \cap \, (S_k)_{[k]}.
\end{array}</math>
|}

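The general formula lends itself to a spot check for a small case.  Here <math>k = 3,\!</math> with a universe and factor sets chosen purely for illustration:

```python
# A k-place cartesian product S_1 × ... × S_k as the intersection of its
# k extracts, each of which constrains a single place.

from itertools import product

X = {0, 1, 2}                      # universe of discourse
S = [{0, 1}, {1, 2}, {2}]          # S_1, S_2, S_3

def extract(j):
    """The j-th extract: X × ... × S_j × ... × X, with S_j in place j."""
    factors = [S[i] if i == j else X for i in range(len(S))]
    return set(product(*factors))

full = set(product(*S))            # S_1 × S_2 × S_3
meet = extract(0) & extract(1) & extract(2)

assert full == meet
```

A tuple lies in every extract exactly when each of its coordinates lies in the corresponding factor set, which is just what membership in the full product requires.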
There is a measure of ambiguity that remains in this formulation, but it is the best that I can do in the present informal context.
    
==References==
 