| valign=top | z. || This must suffice as an attempt to determine in rough outline the nature of the soul.
 
|}
 
====1.3.10.  Recurring Themes====
<pre>
The overall purpose of the next several sections is threefold:
1. To continue to illustrate the salient properties of sign relations in the medium of selected examples.
2. To demonstrate the use of sign relations to analyze and to clarify a particular order of difficult symbols and complex texts, namely, those that involve recursive, reflective, or reflexive features.
3. To begin to suggest the implausibility of understanding this order of phenomena without using sign relations or something like them, namely, concepts with the power of triadic relations.
The prospective lines of an inquiry into inquiry cannot help but meet at various points, where a certain entanglement of the subjects of interest repeatedly has to be faced.  The present discussion of sign relations is currently approaching one of these points.  As the work progresses, the formal tools of logic and set theory become more and more indispensable to say anything significant or to produce any meaningful results in the study of sign relations.  And yet it appears, at least from the vantage of the pragmatic perspective, that the best way to formalize, to justify, and to sharpen the use of these tools is by means of the sign relations that they involve.  And so the investigation shuffles forward on two or more feet, shifting from a stance that fixes on a certain level of logic and set theory, using it to advance the understanding of sign relations, and then exploits the leverage of this new pivot to consider variations, and hopefully improvements, in the very language of concepts and terms that one uses to express questions about logic and sets, in all of its aspects, from syntax, to semantics, to the pragmatics of both human and computational interpreters.
The main goals of this section are as follows:
1. To introduce a basic logical notation and a naive theory of sets, just enough to treat sign relations as the set-theoretic extensions of logically expressible concepts.
2. To use this modicum of formalism to define a number of conceptual constructs, useful in the analysis of more general sign relations.
3. To develop a proof format that is suitable for deriving facts about these constructs in a careful and potentially computational manner.
4. More incidentally, but increasingly effectively, to get a sense of how sign relations can be used to clarify the very languages that are used to talk about them.
1.3.10.1.  Preliminary Notions
The discussion in this subsection proceeds by recalling a series of basic definitions, refining them to deal with more specialized situations, and refitting them as necessary to cover larger families of sign relations.
In this discussion the word "semantic" is being used as a generic adjective to describe anything concerned with or related to meaning, whether denotative, connotative, or pragmatic, and without regard to how these different aspects of meaning are correlated with each other.  The word "semiotic" is being used, more specifically, to indicate the connotative relationships that exist between signs, in particular, to stress the aspects of process and of potential for progress that are involved in the transitions between signs and their interpretants.  Whenever the focus fails to be clear from the context of discussion, the modifiers "denotative" and "referential" are available to pinpoint the relationships that exist between signs and their objects.  Finally, there is a common usage of the term "pragmatic" to highlight aspects of meaning that have to do with the context of use and the language user, but I reserve the use of this term to refer to the interpreter as an agent with a purpose, and thus to imply that all three aspects of sign relations are involved in the subject under discussion.
Recall the definitions of "semiotic equivalence classes" (SEC's), "semiotic partitions" (SEP's), "semiotic equations" (SEQ's), and "semiotic equivalence relations" (SER's), as in Subsection 1.3.4.3.
The discussion up to this point is partial to examples of sign relations that enjoy especially nice properties, in particular, whose connotative components form equivalence relations and whose denotative components conform to these equivalences, in the sense that all of the signs in a single equivalence class always denote one and the same object.  By way of liberalizing this discussion to more general cases of sign relations, this subsection develops a number of additional concepts for describing the internal relations of sign relations and makes a set of definitions that do not take the aforementioned features for granted.
The complete sign relation involved in a situation encompasses all the things that one thinks about and all the thoughts that one thinks about them while engaged in that situation, in other words, all the signs and ideas that flit through one's mind in relation to a domain of objects.  Only a rarefied sample of this complete sign relation is bound to avail itself to reflective awareness, still less of it is likely to inspire a common interest in the community of inquiry at large, and only bits and pieces of it can be expected to suit themselves to a formal analysis.  In view of these considerations, it is useful to have a general idea of the "sampling relation" that an investigator, oneself in particular, is likely to form between two sign relations:  (1) the whole sign relation that one intends to study, and (2) the selective portion of it that one is able to pin down for a formal examination.
It is important to realize that a "sampling relation", to express it roughly, is a special case of a sign relation.  Aside from acting on sign relations and creating an association between sign relations, a sampling relation is also involved in a larger sign relation, at least, it can be subsumed within a general order of sign relations that allows sign relations themselves to be taken as the objects, the signs, and the interpretants of what can be called a "higher order" (HO) sign relation.  Considered with respect to its full potential, its use, and its purpose, a sampling relation does not fall outside the closure of sign relations.  To be precise, a sampling relation falls within the denotative component of a HO sign relation, since the sign relation sampled is the object of study and the sample is taken as a sign of it.
With respect to the general variety of sampling relations there are a number of specific conceptions that are likely to be useful in this study, a few of which can now be discussed.
A "bit" of a sign relation is defined to be any subset of its extension, that is, an arbitrary selection from its set of ordered triples.
Described in relation to sampling relations, a bit of a sign relation is just the most arbitrary possible sample of it, and thus its occurring to mind implies the most general form of sampling relation to be in effect.  In essence, it is just as if a bit of a sign relation, by virtue of its appearing in evidence, can always be interpreted as a bit of evidence that some sort of sampling relation is being applied.
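For readers who find a small computational model helpful, here is a minimal sketch in Python of a sign relation in extensional form, as a finite set of <object, sign, interpretant> triples, and of a "bit" of it as an arbitrary subset of those triples.  The data and names below are made up for illustration and are not part of the formal development.

L = {
    ("A", '"A"', '"i"'),
    ("A", '"i"', '"A"'),
    ("B", '"B"', '"u"'),
    ("B", '"u"', '"B"'),
}

# A "bit" of L is any subset of its extension, that is,
# an arbitrary selection from its set of ordered triples.
bit = {t for t in L if t[1] == '"A"'}
assert bit <= L                          # every bit is a subset of L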
1.3.10.2.  Intermediary Notions
A number of additional definitions are relevant to sign relations whose connotative components constitute equivalence relations, if only in part.
A "dyadic relation on a single set" (DROSS) is a non-empty set of points plus a set of ordered pairs on these points.  Until further notice, any reference to a "dyadic relation" is intended to be taken in this sense, in other words, as a reference to a DROSS.  In a typical notation, the dyadic relation G = <X, G> = <G(1), G(2)> is specified by giving the set of points X = G(1) and the set of ordered pairs G = G(2) ⊆ X×X that go together to define the relation.  In contexts where the set of points is understood, it is customary to call the whole relation G by the name of the set G.

A "subrelation" of a dyadic relation G = <X, G> = <G(1), G(2)> is a dyadic relation H = <Y, H> = <H(1), H(2)> that has all of its points and pairs in G, more precisely, that has all of its points Y ⊆ X and all of its pairs H ⊆ G.

The "induced subrelation on a subset" (ISOS), taken with respect to the dyadic relation G ⊆ X×X and the subset Y ⊆ X, is the maximal subrelation of G whose points belong to Y.  In other words, it is the dyadic relation on Y whose extension contains all of the pairs of Y×Y that appear in G.  Since the construction of an ISOS is uniquely determined by the data of G and Y, it can be represented as a function of these arguments, as in the notation "ISOS (G, Y)", which can be denoted more briefly as "G_Y".  Using the symbol "∩" to indicate the intersection of a pair of sets, the construction of G_Y = ISOS (G, Y) can be defined as follows:

G_Y  =  <Y, G_Y>  =  <G_Y(1), G_Y(2)>

     =  <Y, {<x, y> ∈ Y×Y  :  <x, y> ∈ G(2)}>

     =  <Y, Y×Y ∩ G(2)>.
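By way of a rough illustration, and under the assumption that a dyadic relation is represented extensionally as a finite set of ordered pairs, the construction of an induced subrelation on a subset can be sketched in Python as follows (the points and pairs are made up):

X = {"a", "b", "c", "d"}                                       # points of the relation G
G = {("a", "b"), ("b", "a"), ("c", "d"), ("d", "c"), ("a", "a")}

def isos(G, Y):
    """Induced subrelation on a subset Y: keep the pairs of G that lie in Y x Y."""
    return {(x, y) for (x, y) in G if x in Y and y in Y}

Y = {"a", "b", "c"}
G_Y = isos(G, Y)          # {("a", "b"), ("b", "a"), ("a", "a")}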
These definitions for dyadic relations can now be applied in a context where each bit of a sign relation that is being considered satisfies a special set of conditions, namely, if R is the bit under consideration:
1. Syntactic domain X = Sign domain S = Interpretant domain I.
2. Connotative component = R_XX = R_SI = Equivalence relation E.
Under these assumptions, and with regard to bits of sign relations that satisfy these conditions, it is useful to consider further selections of a specialized sort, namely, those that keep equivalent signs synonymous.
An "arbit" of a sign relation is a slightly more judicious bit of it, preserving a semblance of whatever SEP happens to rule over its signs, and respecting the semiotic parts of the sampled sign relation, when it has such parts.  In other words, an arbit suggests an act of selection that represents the parts of the original SEP by means of the parts of the resulting SEP, that extracts an ISOS of each clique in the SER that it bothers to select any points at all from, and that manages to portray in at least this partial fashion all or none of every SEC that appears in the original sign relation.
1.3.10.3.  Propositions & Sentences
The concept of a sign relation is typically extended as a set L ⊆ O×S×I.  Because this extensional representation of a sign relation is one of the most natural forms that it can take up, along with being one of the most important forms that it is likely to be encountered in, a good amount of set-theoretic machinery is necessary to carry out a reasonably detailed analysis of sign relations in general.

For the purposes of this discussion, let it be supposed that each set X, that comprises a subject of interest in a particular discussion or that constitutes a topic of interest in a particular moment of discussion, is a subset of a set U, one that is sufficiently universal relative to that discussion or big enough to cover everything that is being talked about at that moment.  In a setting like this it is possible to make a number of useful definitions, to which I now turn.

The "negation" of a sentence S, written as "(S)" and read as "Not S", is a sentence that is true when S is false and false when S is true.

The "complement" of a set X with respect to the universe U, written as "U-X", or simply as "~X" when the universe U is understood, is the set of elements in U that are not in X, that is:

~X  =  U-X  =  {u ∈ U  :  (u ∈ X)}.

The "relative complement" of X in Y, for two sets X, Y ⊆ U, written as "Y-X", is the set of elements in Y that are not in X, that is:

Y-X  =  {u ∈ U  :  u ∈ Y  and  (u ∈ X)}.

The "intersection" of X and Y, for two sets X, Y ⊆ U, is denoted by "X ∩ Y" and defined as the set of elements in U that belong to both of X and Y.

X ∩ Y  =  {u ∈ U  :  u ∈ X  and  u ∈ Y}.

The "union" of X and Y, for two sets X, Y ⊆ U, is denoted by "X ∪ Y" and defined as the set of elements in U that belong to at least one of X or Y.

X ∪ Y  =  {u ∈ U  :  u ∈ X  or  u ∈ Y}.

The "symmetric difference" of X and Y, for two sets X, Y ⊆ U, written "X + Y", is the set of elements in U that belong to just one of X or Y.

X + Y  =  {u ∈ U  :  u ∈ X-Y  or  u ∈ Y-X}.
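Read as operations on subsets of a finite universe, these definitions translate almost word for word into code.  The following Python sketch assumes nothing beyond the definitions above; the universe U and the sets X and Y are made up for illustration.

U = set(range(10))
X = {1, 2, 3, 4}
Y = {3, 4, 5, 6}

complement     = {u for u in U if not (u in X)}              # ~X  =  U-X
rel_complement = {u for u in U if u in Y and not (u in X)}   # Y-X
intersection   = {u for u in U if u in X and u in Y}         # X ∩ Y
union          = {u for u in U if u in X or u in Y}          # X ∪ Y
symmetric_diff = {u for u in U if (u in X) != (u in Y)}      # X + Y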
The foregoing "definitions" are the bare essentials that are needed to get the rest of this discussion going, but they have to be regarded as almost purely informal in character, at least, at this stage of the game.  In particular, these definitions all invoke the undefined notion of what a "sentence" is, they all rely on the reader's native intuition of what a "set" is, and they all derive their coherence and their meaning from the common understanding, but the equally casual use and unreflective acquaintance, that just about everybody has of the logical connectives "not", "and", "or", as these are expressed in natural language terms.
As formative definitions, these initial postulations neither acquire the privileged status of untouchable axioms and infallible intuitions nor do they deserve any special suspicion, at least, nothing over and above the reflective critique that one ought to apply to all important definitions.  These dim beginnings of anything approaching genuine definitions also serve to accustom the mind's eye to a particular style of observation, that of seeing informal concepts presented in a formal frame, in a way that almost demands their increasing clarification.  In this style of examination, the frame of the set-builder expression "{u ∈ U : ... }" functions like the "eye of the needle" that one is trying to get a suitably rich mathematics through.
Part of the task of the remaining discussion is to gradually formalize the promissory notes that are represented by these terms and stipulations and to see whether their casual comprehension can be converted into an explicit subject matter, one that depends on grasping the corresponding collection of almost wholly, if still partially, formalized conceptions.  To this I now turn.
The "binary domain" is the set B = {0, 1} of two algebraic values, whose arithmetic follows the rules of GF(2).

The "boolean domain" is the set B = {0, 1} of two logical values, whose elements can be read as "false" and "true", or as "falsity" and "truth", respectively.

At this point, I cannot tell whether the distinction between these two domains is slight or significant, and so this question must evolve its own answer, while I pursue a larger inquiry by means of its hypothesis.  The weight of the matter appears to increase as the investigation moves from abstract, algebraic, and formal settings to contexts where logical semantics, natural language syntax, and concrete categories of grammar are compelling considerations.  Speaking roughly or abstractly enough, it is often acceptable to identify these two domains, and up until this point there has rarely appeared to be a sufficient reason to keep their concepts separately in mind.  The boolean domain B comes with at least two operations, though often under different names and always included in a number of others, that are analogous to the field operations of the binary domain, and operations that are isomorphic to the rest of the boolean operations in B can always be built on the binary basis.  Of course, as sets of the same cardinality, the binary domain and the boolean domain, and all of the structures that can be built on them, become isomorphic at a high enough level of abstraction.  Consequently, the main reason for making this distinction in the present context appears to be a matter more of grammar than an issue of logical or mathematical substance, namely, so that the boolean signs "0" and "1" can appear with some semblance of syntactic legitimacy in linguistic contexts that call for a grammatical sentence or a sentence surrogate to represent the classes of sentences that are "always false" and "always true", respectively.  The ordinary signs "0" and "1", customarily read as nouns but not as sentences, fail to be suitable for this purpose.  Whether these scruples, that are needed to conform to a particular choice of natural language context, are ultimately important, is another thing I do not know at this point.
The "negation" of x, for x ∈ B, written as "(x)" and read as "not x", is the boolean value (x) ∈ B that is 1 when x is 0, and 0 when x is 1.  In other words, negation is a monadic operation on boolean values, or a function of the form (.) : B -> B.
It is convenient to transport the product and the sum operations of the binary domain into the logical setting of the boolean domain B, where they can be symbolized by signs of the same character, doubly underlined as necessary to avoid confusion.  This yields the following definitions of a "product" and a "sum" in B and leads to the following forms of multiplication and addition tables.
The "product" of x and y, for values x, y ∈ B, is given by Table 8.  Viewed as a function of logical values, . : B×B -> B, this corresponds to the logical operation that is commonly called "conjunction" and that is otherwise expressed as "x and y".  In accord with common practice, the raised dot ".", doubly underlined or not, is often omitted from written expressions.
Table 8.  Product Operation for the Boolean Domain

.  0  1
0  0  0
1  0  1
The "sum" of x and y, for values x, y ∈ B, is presented in Table 9.  Viewed as a function of logical values, + : B×B -> B, this corresponds to the logical operation that is commonly called "exclusive disjunction" and that is otherwise expressed as "x or y, but not both".  Depending on the context, other signs and readings that invoke this operation are:  "x =/= y", read as "x is not equal to y" or as "exactly one of x and y", and "x <=/=> y", read as "x is not equivalent to y" or as "x opposes y".
Table 9.  Sum Operation for the Boolean Domain

+  0  1
0  0  1
1  1  0
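As a hedged aside, the two tables can be checked mechanically by treating the boolean values as the Python integers 0 and 1, so that the product is ordinary multiplication and the sum is addition modulo 2:

B = (0, 1)

def product(x, y):        # conjunction, "x and y"
    return x * y

def sum_mod2(x, y):       # exclusive disjunction, "x or y, but not both"
    return (x + y) % 2

for x in B:
    for y in B:
        print(x, y, product(x, y), sum_mod2(x, y))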
For sentences, the signs of equality ("=") and inequality ("=/=") are reserved to mean the syntactic identity and non�identity, respectively, of their literal strings of characters, while the signs of equivalence ("<=>") and inequivalence ("<=/=>") refer to the logical values, if any, of these strings, and signify the equality and inequality, respectively, of their conceivable boolean values.  For the logical values themselves, the two pairs of symbols collapse in their significance to a single pair, signifying a single form of coincidence or a single form of distinction, respectively, between the boolean values of the entities involved.
In logical studies, one tends to be interested in all of the operations or all of the functions of a given type, at least, to the extent that their totalities and their individualities can be comprehended, and not just the specialized collections that define particular algebraic structures.  Although the remainder of the conceivably possible dyadic operations on boolean values, namely, the rest of the sixteen functions f : B×B -> B, could be presented in the same way as the multiplication and addition tables, it is better to look for a more efficient style of representation, one that treats all of the boolean functions of k variables on a roughly equal basis, and with a bit of luck, provides a calculus for computing with these functions.  This involves, among other things, finding their values for given arguments, inverting them, "finding their fibers", or solving equations that are expressed in terms of them, and facilitating the recognition of invariant forms that take them as components.
The whole point of formal logic, the reason for doing logic formally and the measure that determines how far it is possible to reason abstractly, is to discover functions that do not vary as much as their variables do, in other words, to identify forms of logical functions that, though they express a dependence on the values of their constituent arguments, do not vary as much as possible, but approach the way of being a function that constant functions enjoy.  Thus, the recognition of a logical law amounts to identifying a logical function, that, though it ostensibly depends on the values of its putative arguments, is not as variable in its values as the values of its variables are allowed to be.
The "indicator function" or the "characteristic function" of a set X ⊆ U, written "f_X", is the map from U to the boolean domain B = {0, 1} that is defined in the following ways:

1. Considered in extensional form, f_X is the subset of U×B that is given by the following formula:

   f_X  =  {<u, v> ∈ U×B  :  v = 1  <=>  u ∈ X}.

2. Considered in functional form, f_X is the map from U to B that is given by the following condition:

   f_X(u) = 1  <=>  u ∈ X.
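In code, the functional and the extensional forms of an indicator function can be realized as follows.  This is only a sketch; the universe U and the subset X are made up for the example.

U = set(range(10))
X = {2, 3, 5, 7}

def f_X(u):
    """Indicator function of X: 1 exactly when u is in X."""
    return 1 if u in X else 0

# Extensional form: the subset of U x B determined by the same condition.
graph_of_f_X = {(u, v) for u in U for v in (0, 1) if (v == 1) == (u in X)}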
A "proposition about things in the universe", for short, a "proposition", is the same thing as an indicator function, that is, a function of the form f : U -> B.  The convenience of this seemingly redundant usage is that it permits one to refer to an indicator function without having to specify right away, as a part of its only available designation, exactly what set it indicates, even though a proposition is always an indicator function of some subset of the universe, and even though one probably or eventually wants to know which one.
According to the stated understandings, a proposition is a function that indicates a set, in the sense that a function associates values with the elements of a domain, some of which values can be interpreted to mark out for special consideration a subset of that domain.  The way in which an indicator function is imagined to "indicate" a set can be expressed in terms of the following concepts.
The "fiber" of a codomain element y ∈ Y under a function f : X -> Y is the subset of the domain X that is mapped onto y, in short, f^(-1)(y) ⊆ X.  In other language that is often used, the fiber of y under f is called the "antecedent set", "inverse image", "level set", or "pre-image" of y under f.  All of these equivalent concepts are defined as follows:

Fiber of y under f  =  f^(-1)(y)  =  {x ∈ X : f(x) = y}.

In the special case where f is the indicator function f_X of the set X, the fiber of 1 under f_X is just the set X back again:

Fiber of 1 under f_X  =  f_X^(-1)(1)  =  {u ∈ U : f_X(u) = 1}  =  X.

In this specifically boolean setting, as in the more generally logical context, where "truth" under any name is especially valued, it is worth devoting a specialized notation to the "fiber of truth" in a proposition, to mark the set that it indicates with a particular ease and explicitness.  For this purpose, I introduce the use of "fiber bars" or "ground signs", written as "| ... |" around a sentence or the sign of a proposition, and whose application is defined as follows:

If f : U -> B,

then |f|  =  f^(-1)(1)  =  {u ∈ U : f(u) = 1}.
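A companion sketch of the fiber construction, in the same illustrative spirit and with the same made-up universe and subset as before:

U = set(range(10))
X = {2, 3, 5, 7}

def f_X(u):
    return 1 if u in X else 0             # the indicator function of X

def fiber(f, y, domain):
    """The fiber of y under f: the points of the domain that f maps onto y."""
    return {x for x in domain if f(x) == y}

# The fiber of truth |f_X| recovers the indicated set X,
# and the fiber of falsity recovers its complement in U.
assert fiber(f_X, 1, U) == X
assert fiber(f_X, 0, U) == U - X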
The definition of a fiber, in either the general or the boolean case, is a purely nominal convenience for referring to the antecedent subset, the inverse image under a function, or the pre-image of a functional value.  The definition of an operator on propositions, signified by framing the signs of propositions with fiber bars or ground signs, remains a purely notational device, and yet the notion of a fiber in a logical context serves to raise an interesting point.  By way of illustration, it is legitimate to rewrite the above definition in the following form:

If f : U -> B,

then |f|  =  f^(-1)(1)  =  {u ∈ U : f(u)}.

The set-builder frame "{u ∈ U : ... }" requires a sentence to fill in the blank, as with the sentence "f(u) = 1" that finishes it up in the initial definition of a logical fiber.  And what is a sentence but the expression of a proposition, that is, the name of an indicator function?  As it happens, the sign "f(u)" and the sentence "f(u) = 1" represent the very same value to this context, for all u ∈ U, that is, they are equal in their truth or falsity to any reasonable interpreter of signs or sentences in this context, and so either one of them can be tendered for the other, or exchanged for the other, within this frame.
The sign "f(u)" manifestly names the value f(u).  This is a value that can be seen in many lights.  It is, at turns:  (1) the value that the proposition f has at the point u, in other words, that it bears at the point where it is evaluated, and that it takes on with respect to the argument or the object that the whole proposition is taken to be about, (2) the value that the proposition f not only takes up at the point u, but that it carries, conveys, transfers, or transports into the setting "{u ∈ U : ... }" or into any other context of discourse where f is meant to be evaluated, (3) the value that the sign "f(u)" has in the context where it is placed, that it stands for in the context where it stands, and that it continues to stand for in this context just so long as the same proposition f and the same object u are borne in mind, and last but not least, (4) the value that the sign "f(u)" represents to its full interpretive context as being its own logical interpretant, namely, the value that it signifies as its canonical connotation to any interpreter of the sign that is cognizant of the context in which it appears.
The sentence "f(u) = 1" indirectly names what the sign "f(u)" more directly names, that is, the value f(u).  In other words, the sentence "f(u) = 1" has the same value to its interpretive context that the sign "f(u)" imparts to any comparable context, each by way of its respective evaluation for the same u ∈ U.
What is the relation among connoting, denoting, and "evaluing", where the last term is coined to describe all the ways of bearing, conveying, developing, or evolving a value in, to, or into an interpretive context?  In other words, when a sign is evaluated to a particular value, one can say that the sign "evalues" that value, using the verb in a way that is categorically analogous or grammatically conjugate to the times when one says that a sign "connotes" an idea or that a sign "denotes" an object.  This does little more than provide the discussion with a "weasel word", a term that is designed to avoid the main issue, to put off deciding the exact relation between formal signs and formal values, and ultimately to finesse the question about the nature of formal values, whether they are more akin to conceptual signs and figurative ideas or to the kinds of literal objects and platonic ideas that are independent of the mind.
These questions are confounded by the presence of certain peculiarities in formal discussions, especially by the fact that an equivalence class of signs is tantamount to a formal object.  This has the effect of allowing an abstract connotation to work as a formal denotation.  In other words, if the purpose of a sign is merely to lead its interpreter up to a sign in an equivalence class of signs, then it follows that this equivalence class is the object of the sign, that connotation can achieve denotation, at least, to some degree, and that the interpretant domain collapses with the object domain, at least, in some respect, all things being relative to the sign relation that embeds the discussion.
Introducing the realm of "values" is a stopgap measure that temporarily permits the discussion to avoid certain singularities in the embedding sign relation, and allowing the process of "evaluation" as a compromise mode of signification between connotation and denotation only manages to steer around a topic that eventually has to be mapped in full, but these strategies do allow the discussion to proceed a little further without having to answer questions that are too difficult to be settled fully or even tackled directly at this point.  As far as the relations among connoting, denoting, and evaluing are concerned, it is possible that all of these constitute independent dimensions of significance that a sign can have, but since the notion of connotation is already generic enough to contain multitudes of subspecies, I am going to subsume, on a tentative basis, all of the conceivable modes of "evaluing" within the broader concept of connotation.
With this degree of flexibility in mind, one can say that the sentence "f(u) = 1" latently connotes what the sign "f(u)" patently connotes.  Taken in abstraction, both syntactic entities fall into an equivalence class of signs that constitutes an abstract object, a thing of value that is "identified by" the sign "f(u)", and thus an object that might as well be "identified with" the value f(u).
The upshot of this whole discussion of evaluation is that it allows one to rewrite the definitions of indicator functions and their fibers as follows:
The "indicator function" or the "characteristic function" of a set X ⊆ U, written "f_X", is the map from U to the boolean domain B = {0, 1} that is defined in the following ways:

1. Considered in extensional form, f_X is the subset of U×B that is given by the following formula:

   f_X  =  {<u, v> ∈ U×B  :  v  <=>  u ∈ X}.

2. Considered in functional form, f_X is the map from U to B that is given by the following condition:

   f_X(u)  <=>  u ∈ X.
The "fibers" of truth and falsity under a proposition f : U -> B are subsets of U that are variously described as follows:

1. The fiber of 1 under f  =  |f|   =  f^(-1)(1)

   =  {u ∈ U  :  f(u) = 1}

   =  {u ∈ U  :  f(u)}.

2. The fiber of 0 under f  =  ~|f|  =  f^(-1)(0)

   =  {u ∈ U  :  f(u) = 0}

   =  {u ∈ U  :  (f(u))}.
Perhaps this looks like a lot of work for the sake of what seems to be such a trivial form of syntactic transformation, but it is an important step in loosening up the syntactic privileges that are held by the sign of logical equivalence "<=>", as written between logical sentences, and by the sign of equality "=", as written between their logical values, or else between propositions and their boolean values.  Doing this removes a longstanding but wholly unnecessary conceptual confound between the idea of an "assertion" and the notion of an "equation", and it allows one to treat logical equality on a par with the other logical operations.
As a purely informal aid to interpretation, I frequently use the letters "p", "q", and "P", "Q" to denote propositions.  This can serve to tip off the reader that a function is intended as the indicator function of a set, and thus it saves the trouble of declaring the type f : U -> B each time that a function is introduced as a proposition.
Another convention of use in this context is to let underscored letters stand for k-tuples or sequences of objects.  Typically, the objects are all of one type, and typically the letter that is underscored is the same basic character that is indexed or subscripted, as in a list, to denote the individual components of the k-tuple or sequence.  For instance:
1. If v_1, ..., v_k ∈ V, then v = <v_1, ..., v_k> ∈ V' = V^k.

2. If v_1, ..., v_k : V, then v = <v_1, ..., v_k> : V' = V^k.

3. If f_1, ..., f_k : U -> V, then f = <f_1, ..., f_k> : (U -> V)^k.
There is usually felt to be a slight but significant distinction between the membership statement, that uses the sign "∈" as in example (1), and the type statement, that uses the sign ":" as in examples (2) and (3).  The difference that is perceived in categorical statements, those of the form "v ∈ V" or "v : V", is that a multitude of objects can be said to have the same type without necessarily positing the existence of a set to which they all belong.  Without trying to decide whether I share this feeling or fully understand this distinction, I can only try to maintain a style of notation that respects it to some degree.  It is conceivable that the question of belonging to a set is rightly sensed to be the more serious issue, one that has to do with the reality of an object and the substance of a predicate, than the question of falling under a type, that has more to do with the way that a sign is interpreted and the way that information about an object is organized.  When it comes to the kinds of hypothetical statements that appear in the present instance, those of the form "v ∈ V => v ∈ V'" or "v : V => v : V'", these are usually read as implying some kind of synthesis, whose contingent consequences are the construction of a new space to contain the elements as compounded and the recognition of a new type to characterize the elements as listed, respectively.  In this application, the statement about types is again taken to be weaker than the corresponding statement about sets, since the apodosis is only meant to abbreviate and to summarize what is already stated in the protasis.
A "boolean connection" of degree k, also known as a "boolean function" on k variables, is a map of the form F : B^k -> B.  In other words, a boolean connection of degree k is a proposition about things in the universe U = B^k.
An "imagination" of degree k on U is a k-tuple of propositions about things in the universe U.  By way of displaying the kinds of notation that are used to express this idea, the imagination f = <f_1, ..., f_k> is given as a sequence of indicator functions f_j : U -> B, for j = 1 to k.  All of these features of the typical imagination f can be summed up in either one of two ways:  either in the form of a membership statement, to the effect that f ∈ (U -> B)^k, or in the form of a type statement, to the effect that f : (U -> B)^k, though perhaps the latter form is slightly more precise than the former.
The "play of images" that is determined by f and u, more specifically, the play of the imagination f = <f_1, ..., f_k> that has to do with the element u of U, is the k-tuple v = <v_1, ..., v_k> of values in B that satisfies the equations v_j = f_j(u), for all j = 1 to k.
A "projection" of B^k, typically denoted by "p_j" or "pr_j", is one of the maps p_j : B^k -> B, for j = 1 to k, that is defined as follows:

If v = <v_1, ..., v_k> ∈ B^k,

then p_j(v)  =  p_j(<v_1, ..., v_k>)  =  v_j.

The "projective imagination" of B^k is the imagination <p_1, ..., p_k>.
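To make the notation concrete, here is a small Python sketch of an imagination of degree 3 on a made-up universe, of its play of images at a point, and of the projections of B^k.  All names and data are invented for the example.

U = set(range(8))

# An imagination of degree 3: a tuple of propositions (indicator functions) on U.
f = (lambda u: 1 if u % 2 == 0 else 0,        # f_1: "u is even"
     lambda u: 1 if u < 4 else 0,             # f_2: "u is less than 4"
     lambda u: 1 if u in {1, 7} else 0)       # f_3: "u is 1 or 7"

def play_of_images(f, u):
    """The k-tuple of values v_j = f_j(u), an element of B^k."""
    return tuple(f_j(u) for f_j in f)

def projection(j):
    """The j-th projection p_j : B^k -> B, with j counted from 1."""
    return lambda v: v[j - 1]

v = play_of_images(f, 6)          # (1, 0, 0)
p_2 = projection(2)
assert p_2(v) == 0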
A "sentence about things in the universe", for short, a "sentence", is a sign that denotes a proposition.  In other words, a sentence is any sign that denotes an indicator function, any sign whose object is a function of the form f : U -> B.
To emphasize the empirical contingency of this definition, one can say that a sentence is any sign that is interpreted as naming a proposition, any sign that is taken to denote an indicator function, or any sign whose object happens to be a function of the form f : U �> B.
An "expression" is a type of sign, for instance, a term or a sentence, that has a value.  In this conception of an expression, I am deliberately leaving a number of options open, like whether it amounts to a term or to a sentence and whether it ought to be accounted as denoting a value or as connoting a value.  Perhaps the expression has different values under different lights, and perhaps it relates to them differently in different respects.  In the end, what one calls an expression matters less than where its value lies.  Of course, no matter whether one calls an expression a "term" or a "sentence", if the value is an element in B, then the expression affords the option of being treated as a sentence, meaning that it is subject to assertion and composition in the same way that any sentence is, having its value figure into the values of larger expressions through the linkages of sentential connectives, and allowing the consideration of what things in what universe the corresponding proposition indicates.
Expressions with this degree of flexibility in the types under which they can be interpreted are difficult to translate from their formal settings into more natural contexts.  Indeed, the whole issue can be difficult to talk about, or even to think about, since the grammatical categories of sentences and noun phrases are not so fluid in natural language settings as they can be made in artificial arenas.
To finesse the issue of whether an expression denotes or connotes its value, or else to create a general term that covers what both possibilities have in common, one can say that an expression "evalues" its value.
An "assertion" is just a sentence that is being used in a certain way, namely, to indicate the indication of the indicator function that the sentence is usually used to denote.  In other words, an assertion is a sentence that is being converted to a certain use or being interpreted in a certain role, and one whose immediate denotation is being pursued to its substantive indication, specifically, the fiber of truth of the proposition that the sentence potentially denotes.  Thus, an assertion is a sentence that is held to denote the set of things in the universe of which the sentence is true.
Taken in a context of communication, an assertion is basically a request that the interpreter consider the things for which the sentence is true, in other words, to find the fiber of truth in the associated proposition, or to invert the indicator function that is denoted by the sentence with respect to its possible value of truth.
A "denial" of a sentence S is an assertion of its negation (S).  It acts as a request to think about the things for which the sentence is false, in other words, to find the fiber of falsity in the indicated proposition, or to invert the indicator function that is denoted by the sentence with respect to its possible value of falsity.
According to this manner of definition, any sign that happens to denote a proposition, any sign that is taken as denoting an indicator function, by that very fact alone successfully qualifies as a sentence.  That is, a sentence is any sign that actually succeeds in denoting a proposition, any sign that one way or another brings to mind, as its actual object, a function of the form f : U �> B.
There are several features of this definition that need to be understood.  Indeed, there are problems involved in this whole style of definition that need to be discussed, and this requires a slight digression.
----
Before I move on I will need to go back and pick up a collection of basic definitions from the beginning of the Subsection.  Also, cumbersome as it may be, I will need to use the form "-( )-" for the pair of "struck-through parentheses" that I normally use for logical negation.
1.3.10.3  Propositions & Sentences
The concept of a sign relation is typically extended as a set L c OxSxI.  Because this extensional representation of a sign relation is one of the most natural forms that it can take up, along with being one of the most important forms that it is likely to be encountered in, a good amount of set-theoretic machinery is necessary to carry out a reasonably detailed analysis of sign relations in general.
For the purposes of this discussion, let it be supposed that each set Q, that comprises a subject of interest in a particular discussion or that constitutes a topic of interest in a particular moment of discussion, is a subset of a set X, one that is sufficiently universal relative to that discussion or big enough to cover everything that is being talked about in that moment.  In a setting like this it is possible to make a number of useful definitions, to which I now turn.
The "negation" of a sentence z, written as "-(z)-" and read as "not z", is a sentence that is true when z is false, and false when z is true.
The "complement" of a set Q with respect to the universe X is denoted by "X-Q", or simply by "~Q" when the universe X is determinate, and is defined as the set of elements in X that do not belong to Q, that is:
~Q  =  X-Q  =  {x in X  :  -(x in Q)- }.
The "relative complement" of P in Q, for two sets P, Q c X, is denoted by "Q-P" and defined as the set of elements in Q that do not belong to P, that is:
Q-P  =  {x in X  :  x in Q  and  -(x in P)- }.
The "intersection" of P and Q, for two sets P, Q c X, is denoted by "P |^| Q" and defined as the set of elements in X that belong to both P and Q.
P |^| Q  =  {x in X  :  x in P  and  x in Q }.
The "union" of P and Q, for two sets P, Q c X, is denoted by "P |_| Q" and defined as the set of elements in X that belong to at least one of P or Q.
P |_| Q  =  {x in X  :  x in P  or  x in Q }.
The "symmetric difference" of P and Q, for two sets P, Q c X, is denoted by "P + Q" and is defined as the set of elements in X that belong to just one of P or Q.

P + Q  =  {x in X  :  x in P-Q  or  x in Q-P }.
The foregoing "definitions" are the bare essentials that are needed to get the rest of this discussion going, but they have to be regarded as almost purely informal in character, at least, at this stage of the game.  In particular, these definitions all invoke the undefined notion of what a "sentence" is, they all rely on the reader's native intuition of what a "set" is, and they all derive their coherence and their meaning from the common understanding, but the equally casual use and unreflective acquaintance, that just about everybody has of the logical connectives "not", "and", "or", as these are expressed in natural language terms.
As formative definitions, these initial postulations neither acquire the privileged status of untouchable axioms and infallible intuitions nor do they deserve any special suspicion, at least, nothing over and above the reflective critique that one ought to apply to all important definitions.  These dim beginnings of anything approaching genuine definitions also serve to accustom the mind's eye to a particular style of observation, that of seeing informal concepts presented in a formal frame, in a way that almost demands their increasing clarification.  In this style of examination, the frame of the set-builder expression "{x in X : ... }" functions like the "eye of the needle" through which one is trying to transport a suitably rich import of mathematics.
Part of the task of the remaining discussion is gradually to formalize the promissory notes that are represented by these terms and stipulations and to see whether their casual comprehension can be converted into an explicit subject matter, one that depends on grasping the corresponding collection of almost wholly, if still partially, formalized conceptions.  To this I now turn.
----
I now find myself forced to observe a distinction, drawn between
binary arithmetic values and boolean logical values, that I have
spent most of my life ignoring, so bear with me if I do it a bit
ineptly.  It was a bit of a shock to me that I had to acknowledge
its significance after all these years, but life can be like that
in so many dimensions.  My motivation for drawing this distinction
is partly external, to accommodate many different ways of speaking
about logical values, as employed by a wide spectrum of people from
analytic philosophers to circuit engineers, and partly internal, to
integrate quantitative and qualitative modes of description under an
overarching sign-theoretic frame of reference.
It will help if I can begin with some excerpts from my dissertation,
from a time when I was able to think about these questions far more
carefully than I am at present.
1.3.10.3  Propositions & Sentences
The "binary domain" is the set !B! = {!0!, !1!} of two algebraic values,
whose arithmetic operations obey the rules of GF(2), the "Galois field"
of exactly two elements, whose addition and multiplication tables are
tantamount to addition and multiplication of integers "modulo 2".
The "boolean domain" is the set %B% = {%0%, %1%} of two logical values,
whose elements are read as "false" and "true", or as "falsity" and "truth",
respectively.
At this point, I cannot tell whether the distinction between these two
domains is slight or significant, and so this question must evolve its
own answer, while I pursue a larger inquiry by means of its hypothesis.
The weight of the matter appears to increase as the investigation moves
from abstract, algebraic, and formal settings to contexts where logical
semantics, natural language syntax, and concrete categories of grammar
are compelling considerations.  Speaking abstractly and roughly enough,
it is often acceptable to identify these two domains, and up until this
point there has rarely appeared to be a sufficient reason to keep their
concepts separately in mind.  The boolean domain %B% comes with at least
two operations, though often under different names and always included
in a number of others, that are analogous to the field operations of the
binary domain !B!, and operations that are isomorphic to the rest of the
boolean operations in %B% can always be built on the binary basis of !B!.
Of course, as sets of the same cardinality, the domains !B! and %B%
and all of the structures that can be built on them become isomorphic
at a high enough level of abstraction.  Consequently, the main reason
for making this distinction in the immediate context appears to be more
a matter of grammar than an issue of logical and mathematical substance,
namely, so that the signs "%0%" and "%1%" can appear with a semblance of
syntactic legitimacy in linguistic contexts that call for a grammatical
sentence or a sentence surrogate to represent the classes of sentences
that are "always false" and "always true", respectively.  The signs
"0" and "1", customarily read as nouns but not as sentences, fail
to be suitable for this purpose.  Whether these scruples, that are
needed to conform to a particular choice of natural language context,
are ultimately important, is another thing I do not know at this point.
The "negation" of x, for x in %B%, written as "(x)"
and read as "not x", is the boolean value (x) in %B%
that is %1% when x is %0%, and %0% when x is %1%.

Thus, negation is a monadic operation on boolean
values, a function of the form (_) : %B% -> %B%.
It is convenient to transport the product and the sum operations of !B!
into the logical setting of %B%, where they can be symbolized by signs
of the same character, doubly underlined as necessary to avoid confusion.
This yields the following definitions of a "product" and a "sum" in %B%
and leads to the following forms of multiplication and addition tables.

The "product" of x and y, for values x, y in %B%, is given by Table 8.
Table 8.  Product Operation for the Boolean Domain
o---------o---------o---------o
|   %.%   #   %0%   |   %1%   |
o=========o=========o=========o
|   %0%   #   %0%   |   %0%   |
o---------o---------o---------o
|   %1%   #   %0%   |   %1%   |
o---------o---------o---------o

Viewed as a function on logical values, %.% : %B% x %B% -> %B%, the product corresponds to the logical operation that is commonly called "conjunction" and that is otherwise expressed as "x and y".  In accord with common practice, the raised dot ".", doubly underlined or otherwise, is frequently omitted from written expressions of the product.
The "sum" of x and y, for values x, y in %B%, is given by Table 9.
Table 9.  Sum Operation for the Boolean Domain
o---------o---------o---------o
|   %+%   #   %0%   |   %1%   |
o=========o=========o=========o
|   %0%   #   %0%   |   %1%   |
o---------o---------o---------o
|   %1%   #   %1%   |   %0%   |
o---------o---------o---------o

Viewed as a function on logical values, %+% : %B% x %B% -> %B%, the sum corresponds to the logical operation that is generally called "exclusive disjunction" and that is otherwise expressed as "x or y, but not both".  Depending on the context, a couple of other signs and readings that can invoke this operation are:
1.  "x =/= y", read "x is not equal to y", or "exactly one of x and y".
2.  "x <=/=> y", read "x is not equivalent to y", or "x opposes y".
For sentences, the signs of equality ("=") and inequality ("=/=") are reserved to signify the syntactic identity and non-identity, respectively, of the literal strings of characters that make up the sentences in question, while the signs of equivalence ("<=>") and inequivalence ("<=/=>") refer to the logical values, if any, of these strings, and serve to signify the equality and inequality, respectively, of their conceivable boolean values.  For the logical values themselves, the two pairs of symbols collapse in their senses to a single pair, signifying a single form of coincidence or a single form of distinction, respectively, between the boolean values of the entities involved.
In logical studies, one tends to be interested in all of the operations or all of the functions of a given type, at least, to the extent that their totalities and their individualities can be comprehended, and not just the specialized collections that define particular algebraic structures.
Although the rest of the conceivably possible dyadic operations on boolean values, in other words, the remainder of the sixteen functions f : %B% x %B% -> %B%, could be presented in the same way as the multiplication and addition tables, it is better to look for a more efficient style of representation, one that is able to express all of the boolean functions on the same number of variables on a roughly equal basis, and with a bit of luck, affords us with a calculus for computing with these functions.
The utility of a suitable calculus would involve, among other things:
1.  Finding the values of given functions for given arguments.
2.  Inverting boolean functions, that is, "finding the fibers" of boolean functions, or solving logical equations that are expressed in terms of boolean functions.
3.  Facilitating the recognition of invariant forms that take boolean functions as their functional components.
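By way of a small illustration of point 2 above, one brute-force way to "find the fiber" of a boolean function, that is, to solve the equation f(x_1, ..., x_k) = %1%, is simply to enumerate B^k.  The Python sketch below shows the idea only; it is not the more efficient calculus being asked for.

from itertools import product

def fiber_of_truth(f, k):
    """All argument tuples in B^k that the boolean function f maps to 1."""
    return [v for v in product((0, 1), repeat=k) if f(*v) == 1]

# Example: the exclusive disjunction of two variables.
def xor(x, y):
    return (x + y) % 2

assert fiber_of_truth(xor, 2) == [(0, 1), (1, 0)]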
The whole point of formal logic, the reason for doing logic formally and the measure that determines how far it is possible to reason abstractly, is to discover functions that do not vary as much as their variables do, in other words, to identify forms of logical functions that, though they express a dependence on the values of their constituent arguments, do not vary as much as possible, but approach the way of being a function that constant functions enjoy.  Thus, the recognition of a logical law amounts to identifying a logical function, that, though it ostensibly depends on the values of its putative arguments, is not as variable in its values as the values of its variables are allowed to be.
The "indicator function" or the "characteristic function" of a set Q c X, written "f_Q", is the map from X to the boolean domain %B% = {%0%, %1%} that is defined in the following ways:
1.  Considered in extensional form, f_Q is the subset of X x %B% that is given by the following formula:
    f_Q  =  {<x, b> in X x %B%  :  b = %1%  <=>  x in Q}.
2.  Considered in functional form, f_Q is the map from X to %B% that is given by the following condition:
    f_Q (x) = %1%  <=>  x in Q.
A "proposition about things in the universe", for short, a "proposition", is the same thing as an indicator function, that is, a function of the form f : X -> %B%.  The convenience of this seemingly redundant usage is that it allows one to refer to an indicator function without having to specify right away, as a part of its designated subscript, exactly what set it indicates, even though a proposition always indicates some subset of its designated universe, and even though one will probably or eventually want to know exactly what subset that is.
According to the stated understandings, a proposition is a function that indicates a set, in the sense that a function associates values with the elements of a domain, some of which values can be interpreted to mark out for special consideration a subset of that domain.  The way in which an indicator function is imagined to "indicate" a set can be expressed in terms of the following concepts.
The "fiber" of a codomain element y in Y under a function f : X -> Y is the subset of the domain X that is mapped onto y, in short, it is f^(-1)(y) c X.  In other language that is often used, the fiber of y under f is called the "antecedent set", the "inverse image", the "level set", or the "pre-image" of y under f.  All of these equivalent concepts are defined as follows:
Fiber of y under f  =  f^(-1)(y)  =  {x in X  :  f(x) = y}.
In the special case where f is the indicator function f_Q of the set Q c X, the fiber of %1% under f_Q is just the set Q back again:

Fiber of %1% under f_Q  =  (f_Q)^(-1)(%1%)  =  {x in X  :  f_Q (x) = %1%}  =  Q.
In this specifically boolean setting, as in the more generally logical context, where "truth" under any name is especially valued, it is worth devoting a specialized notation to the "fiber of truth" in a proposition, to mark the set that it indicates with a particular ease and explicitness.
For this purpose, I introduce the use of "fiber bars" or "ground signs", written as "[| ... |]" around a sentence or the sign of a proposition, and whose application is defined as follows:
If  f : X -> %B%,

then  [| f |]  =  f^(-1)(%1%)  =  {x in X  :  f(x) = %1%}.
¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤
Some may recognize here fledgling efforts
to reinforce flights of Fregean semantics
with impish pitches of Peircean semiotics.
Some may deem it Icarean, all too Icarean.
1.3.10.3  Propositions & Sentences (cont.)
The definition of a fiber, in either the general or the boolean case,
is a purely nominal convenience for referring to the antecedent subset,
the inverse image under a function, or the pre-image of a functional value.
The definition of an operator on propositions, signified by framing the signs
of propositions with fiber bars or ground signs, remains a purely notational
device, and yet the notion of a fiber in a logical context serves to raise
an interesting point.  By way of illustration, it is legitimate to rewrite
the above definition in the following form:
If  f : X -> %B%,
 +
 +
then  [| f |]  =  f^(-1)(%1%)  =  {x in X  :  f(x)}.
 +
 +
The set-builder frame "{x in X  :  ... }" requires a grammatical sentence or
 +
a sentential clause to fill in the blank, as with the sentence "f(x) = %1%"
 +
that serves to fill the frame in the initial definition of a logical fiber.
 +
And what is a sentence but the expression of a proposition, in other words,
 +
the name of an indicator function?  As it happens, the sign "f(x)" and the
 +
sentence "f(x) = %1%" represent the very same value to this context, for
 +
all x in X, that is, they will appear equal in their truth or falsity
 +
to any reasonable interpreter of signs or sentences in this context,
 +
and so either one of them can be tendered for the other, in effect,
 +
exchanged for the other, within this context, frame, and reception.
 +
 +
The sign "f(x)" manifestly names the value f(x).
 +
This is a value that can be seen in many lights.
 +
It is, at turns:
 +
 +
1.  The value that the proposition f has at the point x,
 +
    in other words, the value that f bears at the point x
 +
    where f is being evaluated, the value that f takes on
 +
    with respect to the argument or the object x that the
 +
    whole proposition is taken to be about.
 +
 +
2.  The value that the proposition f not only takes up at
 +
    the point x, but that it carries, conveys, transfers,
 +
    or transports into the setting "{x in X  :  ... }" or
 +
    into any other context of discourse where f is meant
 +
    to be evaluated.
 +
 +
3.  The value that the sign "f(x)" has in the context where it is placed,
 +
    that it stands for in the context where it stands, and that it continues
 +
    to stand for in this context just so long as the same proposition f and the
 +
    same object x are borne in mind.
 +
 +
4.  The value that the sign "f(x)" represents to its full interpretive context
 +
    as being its own logical interpretant, namely, the value that it signifies
 +
    as its canonical connotation to any interpreter of the sign that is cognizant
 +
    of the context in which it appears.
 +
 +
The sentence "f(x) = %1%" indirectly names what the sign "f(x)"
 +
more directly names, that is, the value f(x).  In other words,
 +
the sentence "f(x) = %1%" has the same value to its interpretive
 +
context that the sign "f(x)" imparts to any comparable context,
 +
each by way of its respective evaluation for the same x in X.
 +
 +
What is the relation among connoting, denoting, and "evaluing", where
 +
the last term is coined to describe all the ways of bearing, conveying,
 +
developing, or evolving a value in, to, or into an interpretive context?
 +
In other words, when a sign is evaluated to a particular value, one can
 +
say that the sign "evalues" that value, using the verb in a way that is
 +
categorically analogous or grammatically conjugate to the times when one
 +
says that a sign "connotes" an idea or that a sign "denotes" an object.
 +
This does little more than provide the discussion with a "weasel word",
 +
a term that is designed to avoid the main issue, to put off deciding the
 +
exact relation between formal signs and formal values, and ultimately to
 +
finesse the question about the nature of formal values, whether they are
 +
more akin to conceptual signs and figurative ideas or to the kinds of
 +
literal objects and platonic ideas that are independent of the mind.
 +
 +
These questions are confounded by the presence of certain peculiarities in
 +
formal discussions, especially by the fact that an equivalence class of signs
 +
is tantamount to a formal object.  This has the effect of allowing an abstract
 +
connotation to work as a formal denotation.  In other words, if the purpose of
 +
a sign is merely to lead its interpreter up to a sign in an equivalence class
 +
of signs, then it follows that this equivalence class is the object of the
 +
sign, that connotation can achieve denotation, at least, to some degree,
 +
and that the interpretant domain collapses with the object domain,
 +
at least, in some respect, all things being relative to the
 +
sign relation that embeds the discussion.
 +
 +
Introducing the realm of "values" is a stopgap measure that temporarily
 +
permits the discussion to avoid certain singularities in the embedding
 +
sign relation, and allowing the process of "evaluation" as a compromise
 +
mode of signification between connotation and denotation only manages to
 +
steer around a topic that eventually has to be mapped in full, but these
 +
strategies do allow the discussion to proceed a little further without
 +
having to answer questions that are too difficult to be settled fully
 +
or even tackled directly at this point.  As far as the relations among
 +
connoting, denoting, and evaluing are concerned, it is possible that
 +
all of these constitute independent dimensions of significance that
 +
a sign might be able to enjoy, but since the notion of connotation
 +
is already generic enough to contain multitudes of subspecies, I am
 +
going to subsume, on a tentative basis, all of the conceivable modes
 +
of "evaluing" within the broader concept of connotation.
 +
 +
With this degree of flexibility in mind, one can say that the sentence
 +
"f(x) = %1%" latently connotes what the sign "f(x)" patently connotes.
 +
Taken in abstraction, both syntactic entities fall into an equivalence
 +
class of signs that constitutes an abstract object, a thing of value
 +
that is "identified by" the sign "f(x)", and thus an object that might
 +
as well be "identified with" the value f(x).
 +
 +
The upshot of this whole discussion of evaluation is that it allows one to
 +
rewrite the definitions of indicator functions and their fibers as follows:
 +
 +
The "indicator function" or the "characteristic function" of a set Q c X,
 +
written "f_Q", is the map from X to the boolean domain %B% = {%0%, %1%}
 +
that is defined in the following ways:
 +
 +
1.  Considered in its extensional form, f_Q is the subset of X x %B%
 +
    that is given by the following formula:
 +
 +
    f_Q  =  {<x, b> in X x %B%  :  b  <=>  x in Q}.
 +
 +
2.  Considered in its functional form, f_Q is the map from X to %B%
 +
    that is given by the following condition:
 +
 +
    f_Q (x)  <=>  x in Q.
 +
 +
The "fibers" of truth and falsity under a proposition f : X -> %B%
 +
are subsets of X that are variously described as follows:
 +
 +
1.  The fiber of %1% under f  =  [| f |]  =  f^(-1)(%1%)
 +
 +
                              =  {x in X  :  f(x) = %1%}
 +
 +
                              =  {x in X  :  f(x) }.
 +
 +
2.  The fiber of %0% under f  =  ~[| f |]  =  f^(-1)(%0%)
 +
 +
                              =  {x in X  :  f(x) = %0%}
 +
 +
                              =  {x in X  :  (f(x)) }.
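
In the same toy sketch, and assuming the fiber helper defined earlier, the fiber of falsity is simply the complement of the fiber of truth:

    assert fiber(f_Q, X, 0) == X - Q      # ~[| f_Q |]  =  X - Q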
 +
 +
Perhaps this looks like a lot of work for the sake of what seems to be
 +
such a trivial form of syntactic transformation, but it is an important
 +
step in loosening up the syntactic privileges that are held by the sign
 +
of logical equivalence "<=>", as written between logical sentences, and
 +
by the sign of equality "=", as written between their logical values, or
 +
else between propositions and their boolean values.  Doing this removes
 +
a longstanding but wholly unnecessary conceptual confound between the
 +
idea of an "assertion" and the notion of an "equation", and it allows one
 +
to treat logical equality on a par with the other logical operations.
 +
 +
¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤
 +
Where are we?  We just defined the concept of a functional fiber in several
 +
of the most excruciating ways possible, but that's just because this method
 +
of refining functional fibers is intended partly for machine computation,
 +
so its schemata must be rendered free of all admixture of animate intuition.
 +
However, just between us, a single picture may suffice to sum up the notion:
 +
 +
|  X-[| f |] ,  [| f |]  c  X
 +
|  o      o  o  o  o      |
 +
|    \    /    \  |  /      |
 +
|    \  /      \ | /        | f
 +
|      \ /        \|/        |
 +
|      o          o          v
 +
|  {  %0%    ,    %1%  }  =  %B%
 +
 +
For the sake of current reference:
 +
 +
| The "fibers" of truth and falsity in a proposition f : X -> %B%
 +
| are the subsets [| f |] and X - [| f |] of X that are variously
 +
| described as follows:
 +
|
 +
| The fiber of %1% under f
 +
|
 +
| =  [| f |]  =  f^(-1)(%1%)
 +
|
 +
| =  {x in X  :  f(x) = %1%}
 +
|
 +
| =  {x in X  :  f(x) }.
 +
|
 +
| The fiber of %0% under f
 +
|
 +
| =  ~[| f |]  =  f^(-1)(%0%)
 +
|
 +
| =  {x in X  :  f(x) = %0%}
 +
|
 +
| =  {x in X  :  (f(x)) }.
 +
 +
Oh, by the way, the outer parentheses in "(f(x))" signify negation.
 +
I did not have here the "stricken parentheses" that I normally use.
 +
 +
Why are we doing this?  The immediate reason -- whose critique I defer --
 +
has to do with finding a modus vivendi, whether a working compromise or
 +
a genuine integration, between the assertive-declarative languages and
 +
the functional-procedural languages that we have available for the sake
 +
of conceptual-logical-ontological analysis, clarification, description,
 +
inference, problem-solving, programming, representation, or whatever.
 +
 +
In the next few installments, I will be working toward the definition
 +
of an operation called the "stretch".  This is related to the concept
 +
from category theory that is called a "pullback".  As a few will know
 +
the uses of that already, maybe there's hope of stretching the number.
 +
¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤
 +
In this episode, I compile a collection of definitions,
 +
leading up to the particular conception of a "sentence"
 +
that I'll be using throughout the rest of this inquiry.
 +
 +
1.3.10.3  Propositions & Sentences (cont.)
 +
 +
As a purely informal aid to interpretation, I frequently use the letters
 +
"p", "q" to denote propositions.  This can serve to tip off the reader
 +
that a function is intended as the indicator function of a set, and
 +
it saves us the trouble of declaring the type f : X -> %B% each
 +
time that a function is introduced as a proposition.
 +
 +
Another convention of use in this context is to let boldface letters
 +
stand for k-tuples, lists, or sequences of objects.  Typically, the
 +
elements of the k-tuple, list, or sequence are all of one type, and
 +
typically the boldface letter is of the same basic character as the
 +
indexed or subscripted letters that are used to denote the components
 +
of the k-tuple, list, or sequence.  When the dimension of elements
 +
and functions is clear from the context, we may elect to drop the
 +
bolding of characters that name k-tuples, lists, and sequences.
 +
 +
For example:
 +
 +
1.  If x_1, ..., x_k in X,      then #x# = <x_1, ..., x_k> in X' = X^k.
 +
 +
2.  If x_1, ..., x_k  : X,      then #x# = <x_1, ..., x_k>  : X' = X^k.
 +
 +
3.  If f_1, ..., f_k  : X -> Y,  then #f# = <f_1, ..., f_k>  : (X -> Y)^k.
 +
 +
There is usually felt to be a slight but significant distinction between
 +
the "membership statement" that uses the sign "in" as in Example (1) and
 +
the "type statement" that uses the sign ":" as in examples (2) and (3).
 +
The difference that appears to be perceived in categorical statements,
 +
when those of the form "x in X" and those of the form "x : X" are set
 +
in side by side comparisons with each other, is that a multitude of
 +
objects can be said to have the same type without having to posit
 +
the existence of a set to which they all belong.  Without trying
 +
to decide whether I share this feeling or even fully understand
 +
the distinction in question, I can only try to maintain a style
 +
of notation that respects it to some degree.  It is conceivable
 +
that the question of belonging to a set is rightly sensed to be
 +
the more serious matter, one that has to do with the reality of
 +
an object and the substance of a predicate, than the question of
 +
falling under a type, which may have more to do with the way that
 +
a sign is interpreted and the way that information about an object
 +
is organized.  When it comes to the kinds of hypothetical statements
 +
that appear in these Examples, those of the form "x in X => #x# in X'"
 +
and "x : X => #x# : X'", these are usually read as implying some order
 +
of synthetic construction, one whose contingent consequences involve the
 +
constitution of a new space to contain the elements being compounded and
 +
the recognition of a new type to characterize the elements being moulded,
 +
respectively.  In these applications, the statement about types is again
 +
taken to be less presumptive than the corresponding statement about sets,
 +
since the apodosis is intended to do nothing more than to abbreviate and
 +
to summarize what is already stated in the protasis.
 +
 +
A "boolean connection" of degree k, also known as a "boolean function"
 +
on k variables, is a map of the form F : %B%^k -> %B%.  In other words,
 +
a boolean connection of degree k is a proposition about things in the
 +
universe X = %B%^k.
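
As a concrete instance, and only as a sketch in the same toy Python idiom, ordinary conjunction can stand in for a boolean connection of degree 2; the name F_and is mine, not fixed by the text.

    def F_and(b):
        """A boolean connection F : B^2 -> B, here ordinary conjunction."""
        b_1, b_2 = b
        return b_1 & b_2

    assert F_and((1, 1)) == 1 and F_and((1, 0)) == 0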
 +
 +
An "imagination" of degree k on X is a k-tuple of propositions about things
 +
in the universe X.  By way of displaying the various kinds of notation that
 +
are used to express this idea, the imagination #f# = <f_1, ..., f_k> is given
 +
as a sequence of indicator functions f_j : X -> %B%, for j = 1 to k.  All of
 +
these features of the typical imagination #f# can be summed up in either one
 +
of two ways:  either in the form of a membership statement, to the effect that
 +
#f# is in (X -> %B%)^k, or in the form of a type statement, to the effect that
 +
#f# : (X -> %B%)^k, though perhaps the latter form is slightly more precise than
 +
the former.
 +
 +
The "play of images" that is determined by #f# and x, more specifically,
 +
the play of the imagination #f# = <f_1, ..., f_k> that has to do with the
 +
element x in X, is the k-tuple #b# = <b_1, ..., b_k> of values in %B%
 +
that satisfies the equations b_j = f_j (x), for all j = 1 to k.
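
A hedged illustration in the same Python sketch, reusing the indicator helper from above; the particular sets behind f_1 and f_2 are arbitrary choices.

    f_1 = indicator({1, 2, 3})            # an imagination #f# = <f_1, f_2>
    f_2 = indicator({2, 4})
    imagination = (f_1, f_2)

    def play_of_images(imagination, x):
        """#b# = <f_1(x), ..., f_k(x)>, the play of images at the point x."""
        return tuple(f(x) for f in imagination)

    assert play_of_images(imagination, 2) == (1, 1)
    assert play_of_images(imagination, 4) == (0, 1)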
 +
 +
A "projection" of %B%^k, typically denoted by "p_j" or "pr_j",
 +
is one of the maps p_j : %B%^k -> %B%, for j = 1 to k, that is
 +
defined as follows:
 +
 +
If        #b#  =      <b_1, ..., b_k>          in  %B%^k,
 +
 +
then  p_j (#b#)  =  p_j (<b_1, ..., b_k>)  =  b_j  in  %B%.
 +
 +
The "projective imagination" of %B%^k is the imagination <p_1, ..., p_k>.
 +
 +
A "sentence about things in the universe", for short, a "sentence",
 +
is a sign that denotes a proposition.  In other words, a sentence is
 +
any sign that denotes an indicator function, any sign whose object is
 +
a function of the form f : X -> %B%.
 +
 +
To emphasize the empirical contingency of this definition, one can say
 +
that a sentence is any sign that is interpreted as naming a proposition,
 +
any sign that is taken to denote an indicator function, or any sign whose
 +
object happens to be a function of the form f : X -> %B%.
 +
¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤~~~~~~~~~¤
 +
I finish out the Subsection on "Propositions & Sentences" with
 +
an account of how I use concepts like "assertion" and "denial".
 +
1.3.10.3  Propositions & Sentences (cont.)
 +
An "expression" is a type of sign, for instance, a term or a sentence,
 +
that has a value.  In forming this conception of an expression, I am
 +
deliberately leaving a number of options open, for example, whether
 +
the expression amounts to a term or to a sentence and whether it
 +
ought to be accounted as denoting a value or as connoting a value.
 +
Perhaps the expression has different values under different lights,
 +
and perhaps it relates to them differently in different respects.
 +
In the end, what one calls an expression matters less than where
 +
its value lies.  Of course, no matter whether one chooses to call
 +
an expression a "term" or a "sentence", if the value is an element
 +
of %B%, then the expression affords the option of being treated as
 +
a sentence, meaning that it is subject to assertion and composition
 +
in the same way that any sentence is, having its value figure into
 +
the values of larger expressions through the linkages of sentential
 +
connectives, and affording us the consideration of what things in
 +
what universe the corresponding proposition happens to indicate.
 +
 +
Expressions with this degree of flexibility in the types under
 +
which they can be interpreted are difficult to translate from
 +
their formal settings into more natural contexts.  Indeed,
 +
the whole issue can be difficult to talk about, or even
 +
to think about, since the grammatical categories of
 +
sentential clauses and noun phrases are rarely so
 +
fluid in natural language settings as they can
 +
be rendered in artificially formal arenas.
 +
 +
To finesse the issue of whether an expression denotes or connotes its value,
 +
or else to create a general term that covers what both possibilities have
 +
in common, one can say that an expression "evalues" its value.
 +
 +
An "assertion" is just a sentence that is being used in a certain way,
 +
namely, to indicate the indication of the indicator function that the
 +
sentence is usually used to denote.  In other words, an assertion is
 +
a sentence that is being converted to a certain use or that is being
 +
interpreted in a certain role, and one whose immediate denotation is
 +
being pursued to its substantive indication, specifically, the fiber
 +
of truth of the proposition that the sentence potentially denotes.
 +
Thus, an assertion is a sentence that is held to denote the set of
 +
things in the universe for which the sentence is held to be true.
 +
 +
Taken in a context of communication, an assertion is basically a request
 +
that the interpreter consider the things for which the sentence is true,
 +
in other words, to find the fiber of truth in the associated proposition,
 +
or to invert the indicator function that is denoted by the sentence with
 +
respect to its possible value of truth.
 +
 +
A "denial" of a sentence z is an assertion of its negation -(z)-.
 +
The denial acts as a request to think about the things for which the
 +
sentence is false, in other words, to find the fiber of falsity in the
 +
indicated proposition, or to invert the indicator function that is being
 +
denoted by the sentence with respect to its possible value of falsity.
 +
 +
According to this manner of definition, any sign that happens to denote
 +
a proposition, any sign that is taken as denoting an indicator function,
 +
by that very fact alone successfully qualifies as a sentence.  That is,
 +
a sentence is any sign that actually succeeds in denoting a proposition,
 +
any sign that one way or another brings to mind, as its actual object,
 +
a function of the form f : X -> %B%.
 +
 +
There are many features of this definition that need to be understood.
 +
Indeed, there are problems involved in this whole style of definition
 +
that need to be discussed, and doing this requires a slight excursion.
 +
1.3.10.4  Empirical Types & Rational Types
 +
In this subsection, I want to examine the style of definition that I used to define a sentence as a type of sign, to adapt its application to other problems of defining types, and to draw a lesson of general significance.
 +
Notice that I am defining a sentence in terms of what it denotes, and not in terms of its structure as a sign.  In this way of reckoning, a sign is not a sentence on account of any property that it has in itself, but only due to the sign relation that actually happens to interpret it.  This makes the property of being a sentence a question of actualities and contingent relations, not merely a question of potentialities and absolute categories.  This does nothing to alter the level of interest that one is bound to have in the structures of signs, it merely shifts the import of the question from the logical plane of definition to the pragmatic plane of effective action.  As a practical matter, of course, some signs are better for a given purpose than others, more conducive to a particular result than others, and more effective in achieving an assigned objective than others, and the reasons for this are at least partly explained by the relationships that can be found to exist among a sign's structure, its object, and the sign relation that fits them.
 +
Notice the general character of this development.  I start by defining a type of sign according to the type of object that it happens to denote, ignoring at first the structural potential that it brings to the task.  According to this mode of definition, a type of sign is singled out from other signs in terms of the type of object that it actually denotes and not according to the type of object that it is designed or destined to denote, nor in terms of the type of structure that it possesses in itself.  This puts the empirical categories, the classes based on actualities, at odds with the rational categories, the classes based on intentionalities.
 +
In hopes that this much explanation is enough to rationalize the account of types that I am using, I break off the digression at this point and return to the main discussion.
 +
1.3.10.5  Articulate Sentences
 +
A sentence is "articulate" (1) if it has a significant form, a compound constitution, or a non-trivial structure as a sign, and (2) if there is an informative relationship that exists between its structure as a sign and the proposition that it happens to denote.  A sentence of this kind is typically given in the form of a "description", an "expression", or a "formula", in other words, as an articulated sign or a well-structured element of a formal language.  As a general rule, the class of sentences that one is willing to contemplate is compiled from a particular brand of complex signs and syntactic strings, those that are put together from the basic building blocks of a formal language and held in a special esteem for the roles that they play within its grammar.  However, even if a typical sentence is a sign that is generated by a formal regimen, having its form, its meaning, and its use governed by the principles of a comprehensive grammar, the class of sentences that one has a mind to contemplate can also include among its number many other signs of an arbitrary nature.
 +
Frequently this "formula" has a "variable" in it that "ranges over" the universe U.  A "variable" is an ambiguous or equivocal sign that can be interpreted as denoting any element of the set that it "ranges over".
 +
If a sentence denotes a proposition f : U -> B, then the "value" of the sentence with regard to u in U is the value f(u) of the proposition at u, where "0" is interpreted as "false" and "1" is interpreted as "true".
 +
Since the value of a sentence or a proposition depends on the universe of discourse to which it is "referred", and since it also depends on the element of the universe with regard to which it is evaluated, it is usual to say that a sentence or a proposition "refers" to a universe and to its elements, though perhaps in a variety of different senses.  Furthermore, a proposition, acting in the role of an indicator function, "refers" to the elements that it "indicates", namely, the elements on which it takes a positive value.  In order to sort out the possible confusions that are capable of arising here, I need to examine how these various notions of reference are related to the notion of denotation that is used in the pragmatic theory of sign relations.
 +
One way to resolve the various senses of "reference" that arise in this setting is to make the following sorts of distinctions among them.  Let the reference of a sentence or a proposition to a universe of discourse, the one that it acquires by way of taking on any interpretation at all, be taken as its "general reference", the kind of reference that one can safely ignore as irrelevant, at least, so long as one stays immersed in only one context of discourse or only one moment of discussion.  Let the references that an indicator function f has to the elements on which it evaluates to 0 be called its "negative references".  Let the references that an indicator function f has to the elements on which it evaluates to 1 be called its "positive references" or its "indications".  Finally, unspecified references to the "references" of a sentence, a proposition, or an indicator function can be taken by default as references to their specific, positive references.
 +
The universe of discourse for a sentence, the set whose elements the sentence is interpreted to be about, is not a property of the sentence by itself, but of the sentence in the presence of its interpretation.  Independently of how many explicit variables a sentence contains, its value can always be interpreted as depending on any number of implicit variables.  For instance, even a sentence with no explicit variable, a constant expression like "0" or "1", can be taken to denote a constant proposition of the form c : U -> B.  Whether or not it has an explicit variable, I always take a sentence as referring to a proposition, one whose values refer to elements of a universe U.
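
For instance, in the toy Python sketch above, and with the finite set X from that sketch playing the role of the universe U, the constant expression "1" can be read as denoting the constant proposition that indicates the whole universe:

    always_true = lambda u: 1             # the proposition denoted by "1"
    assert fiber_of_truth(always_true, X) == X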
 +
Notice that the letters "P" and "Q", interpreted as signs that denote indicator functions P, Q : U -> B, have the character of sentences in relation to propositions, at least, they have the same status in this abstract discussion as genuine sentences have in concrete discussions.  This illustrates the relation between sentences and propositions as a special case of the relation between signs and objects.
 +
To assist the reading of informal examples, I frequently use the letters "s", "t", and "S", "T" to denote sentences.  Thus, it is conceivable to have a situation where S = "P" and where P : U -> B.  Altogether, this means that the sign "S" denotes the sentence S, that the sentence S is the sentence "P", and that the sentence "P" denotes the proposition or the indicator function P : U -> B.  In settings where it is necessary to keep track of a large number of sentences, I use subscripted letters like "e_1", ..., "e_n" to refer to the various expressions.
 +
A "sentential connective" is a sign, a coordinated sequence of signs, a significant pattern of arrangement, or any other syntactic device that can be used to connect a number of sentences together in order to form a single sentence.  If k is the number of sentences that are connected, then the connective is said to be of order k.  If the sentences acquire a logical relationship by this means, and are not just strung together by this mechanism, then the connective is called a "logical connective".  If the value of the constructed sentence depends on the values of the component sentences in such a way that the value of the whole is a boolean function of the values of the parts, then the connective is called a "propositional connective".
 +
1.3.10.6  Stretching Principles
 +
There is a principle, of constant use in this work, that needs to be made explicit.  In order to give it a name, I refer to this idea as the "stretching principle".  Expressed in different ways, it says that:
 +
1. Any relation of values extends to a relation of what is valued.
 +
2. Any statement about values says something about the things that are given these values.
 +
3. Any association among a range of values establishes an association among the domains of things that these values are the values of.
 +
4. Any connection between two values can be stretched to create a connection, of analogous form, between the objects, persons, qualities, or relationships that are valued in these connections.
 +
5. For every operation on values, there is a corresponding operation on the actions, conducts, functions, procedures, or processes that lead to these values, as well as there being analogous operations on the objects that instigate all of these various proceedings.
 +
Nothing about the application of the stretching principle guarantees that the analogues it generates will be as useful as the material it works on.  It is another question entirely whether the links that are forged in this fashion are equal in their strength and apposite in their bearing to the tried and true utilities of the original ties, but in principle they are always there.
 +
In particular, a connection F : B^k -> B can be understood to indicate a relation among boolean values, namely, the k-ary relation F^(-1)(1) c B^k.  If these k values are values of things in a universe U, that is, if one imagines each value in a k-tuple of values to be the functional image that results from evaluating an element of U under one of its possible aspects of value, then one has in mind the k propositions f_j : U -> B, for j = 1 to k, in sum, one embodies the imagination f = <f_1, ..., f_k>.  Together, the imagination f in (U -> B)^k and the connection F : B^k -> B stretch each other to cover the universe U, yielding a new proposition P : U -> B.
 +
To encapsulate the form of this general result, I define a composition that takes an imagination f = <f_1, ..., f_k> in (U -> B)^k and a boolean connection F : B^k -> B and gives a proposition P : U -> B.  Depending on the situation, specifically, according to whether many F and many f, a single F and many f, or many F and a single f are being considered, respectively, I refer to this P under one of three descriptions:
 +
1. In a general setting, where the connection F and the imagination f are both permitted to take up a variety of concrete possibilities, call P the "stretch of F and f from U to B", and write it in the style of a composition as "F $ f".  This is meant to suggest that the symbol "$", here read as "stretch", denotes an operator of the form $ : (B^k -> B) x (U -> B)^k -> (U -> B).
 +
2. In a setting where the connection F is fixed but the imagination f is allowed to vary over a wide range of possibilities, call P the "stretch of F to f on U", and write it in the style "F$f", exactly as if "F$" denotes an operator F$ : (U -> B)^k -> (U -> B) that is derived from F and applied to f, ultimately yielding a proposition F$f : U -> B.
 +
3. In a setting where the imagination f is fixed but the connection F is allowed to range over a wide variety of possibilities, call P the "stretch of f by F to B", and write it in the style "f$F", exactly as if "f$" denotes an operator f$ : (B^k -> B) -> (U -> B) that is derived from f and applied to F, ultimately yielding a proposition f$F : U -> B.
 +
Because this notation is only used in settings where the imagination f : (U -> B)^k and the connection F : B^k -> B are distinguished by their types, it does not really matter whether one writes "F $ f" or "f $ F" for the initial composition.
 +
Just as a sentence is a sign that denotes a proposition, which thereby serves to indicate a set, a propositional connective is a provision of syntax whose mediate effect is to denote an operation on propositions, which thereby manages to indicate the result of an operation on sets.  In order to see how these compound forms of indication can be defined, it is useful to go through the steps that are needed to construct them.  In general terms, the ingredients of the construction are as follows:
 +
1. An imagination of degree k on U, in other words, a k-tuple of propositions f_j : U -> B, for j = 1 to k, or an object of the form f = <f_1, ..., f_k> : (U -> B)^k.
 +
2. A connection of degree k, in other words, a proposition about things in B^k, or a boolean function of the form F : B^k -> B.
 +
From these materials, it is required to construct a proposition P : U -> B such that P(u) = F(f_1(u), ..., f_k(u)), for all u in U.  The desired construction is determined as follows:
 +
The cartesian power B^k, as a cartesian product, is characterized by the possession of a "projective imagination" p = <p_1, ..., p_k> of degree k on B^k, along with the property that any imagination f = <f_1, ..., f_k> of degree k on an arbitrary set W determines a unique map f! : W -> B^k, the play of whose projective images <p_1(f!(w)), ..., p_k(f!(w))> on the functional image f!(w) matches the play of images <f_1(w), ..., f_k(w)> under f, term for term and at every element w in W.
 +
Just to be on the safe side, I state this again in more standard terms.  The cartesian power B^k, as a cartesian product, is characterized by the possession of k projection maps p_j : B^k -> B, for j = 1 to k, along with the property that any k maps f_j : W -> B, from an arbitrary set W to B, determine a unique map f! : W -> B^k such that p_j(f!(w)) = f_j(w), for all j = 1 to k, and for all w in W.
 +
Now suppose that the arbitrary set W in this construction is just the relevant universe U.  Given that the function f! : U -> B^k is uniquely determined by the imagination f : (U -> B)^k, that is, by the k-tuple of propositions f = <f_1, ..., f_k>, it is safe to identify f! and f as being a single function, and this makes it convenient on many occasions to refer to the identified function by means of its explicitly descriptive name "<f_1, ..., f_k>".  This facility of address is especially appropriate whenever a concrete term or a constructive precision is demanded by the context of discussion.
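
A minimal sketch of the tupling map f!, reusing f_1, f_2, and projection from the toy Python examples above; the helper name "tupling" is illustrative only.

    def tupling(*fs):
        """Given f_j : W -> B, return f! : W -> B^k with p_j(f!(w)) = f_j(w)."""
        return lambda w: tuple(f(w) for f in fs)

    f_bang = tupling(f_1, f_2)
    assert projection(1)(f_bang(2)) == f_1(2)    # p_1(f!(w)) = f_1(w)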
 +
1.3.10.7  Stretching Operations
 +
The preceding discussion of stretch operations is slightly more general than is called for in the present context, and so it is probably a good idea to draw out the particular implications that are needed right away.
 +
If F : B^k -> B is a boolean function on k variables, then it is possible to define a mapping F$ : (U -> B)^k -> (U -> B), in effect, an operation that takes k propositions into a single proposition, where F$ satisfies the following conditions:
 +
F$(f_1, ..., f_k) : U -> B

such that:

F$(f_1, ..., f_k)(u)  =  F(f(u))

                      =  F(<f_1, ..., f_k>(u))

                      =  F(f_1(u), ..., f_k(u)).
 +
Thus, F$ is what a propositional connective denotes, a particular way of connecting the propositions that are denoted by a number of sentences into a proposition that is denoted by a single sentence.
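
Under the same toy assumptions as before (reusing F_and, f_1, and f_2 from the earlier sketches), the stretch can be rendered as a higher-order function:

    def stretch(F, *props):
        """F$ : take f_1, ..., f_k to the proposition u |-> F(f_1(u), ..., f_k(u))."""
        return lambda u: F(tuple(f(u) for f in props))

    P = stretch(F_and, f_1, f_2)          # F$(f_1, f_2) : U -> B
    assert P(2) == 1 and P(4) == 0        # conjunction of f_1 and f_2, pointwise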
 +
Now "fX" is sign that denotes the proposition fX, and it certainly seems like a sufficient sign for it.  Why is there is a need to recognize any other signs of it?
 +
If one takes a sentence as a type of sign that denotes a proposition and a proposition as a type of function whose values serve to indicate a set, then one needs a way to grasp the overall relation between the sentence and the set as taking place within a "higher order" (HO) sign relation.
 +
Roughly sketched, the relations of denotation and indication that exist among sets, propositions, sentences, and values can be diagrammed as in Table 10.
 +
Table 10.  Levels of Indication
 +
Object          Sign            Higher Order Sign
(Set)           (Proposition)   (Sentence)
-------------   -------------   -----------------
f^(-1)(v)       f               "f"
X               1               "1"
~X              0               "0"
 +
Strictly speaking, a proposition is too abstract to be a sign, and so the contents of Table 10 have to be taken with the indicated grains of salt.  Propositions, as indicator functions, are abstract mathematical objects, not any kinds of syntactic elements, and so propositions cannot literally constitute the orders of concrete signs that remain of ultimate interest in the pragmatic theory of signs, or in any theory of effective meaning.  Therefore, it needs to be understood that a proposition f can be said to "indicate" a set X only insofar as the values of 1 and 0 that it assigns to the elements of the universe U are positive and negative indications, respectively, of the elements in X, and thus indications of the set X and of its complement ~X = U - X, respectively.  It is actually these values, when rendered by a concrete implementation of the indicator function f, that are the actual signs of the objects that are inside the set X and the objects that are outside the set X, respectively.
 +
In order to deal with the HO sign relations that are involved in this situation, I introduce a couple of new notations:
 +
1. To mark the relation of denotation between a sentence S and the proposition that it denotes, let the "spiny bracket" notation "[S]" be used for "the indicator function denoted by the sentence S".
 +
2. To mark the relation of denotation between a proposition P and the set that it indicates, let the "spiny brace" notation "{X}" be used for "the indicator function of the set X".
 +
Notice that the spiny bracket operator "[ ]" takes one "downstream", in accord with the usual direction of denotation, from a sign to its object, while the spiny brace operator "{ }" takes one "upstream", against the usual direction of denotation, and thus from an object to its sign.
 +
In order to make these notations useful in practice, it is necessary to note a couple of their finer points, points that might otherwise seem too fine to take much trouble over.  For this reason, I express their usage a bit more carefully as follows:
 +
1. Let "spiny brackets", like "[ ]", be placed around a name of a sentence S, as in the expression "[S]", or else around a token appearance of the sentence itself, to serve as a name for the proposition that S denotes.
 +
2. Let "spiny braces", like "{ }", be placed around a name of a set X, as in the expression "{X}", to serve as a name for the indicator function fX.
 +
Table 11 illustrates the use of this notation, listing in each column several different but equivalent ways of referring to the same entity.
 +
Table 11.  Illustrations of Notation
 +
Object          Sign            Higher Order Sign
(Set)           (Proposition)   (Sentence)
-------------   -------------   -----------------
X               P               S
|[S]|           [S]             S
|P|             P               "P"
|f_X|           f_X             "f_X"
X               {X}             "{X}"
 +
In particular, one can observe the following relations and formulas, all of a purely notational character:
 +
1. If the sentence S denotes the proposition P : U -> B, then [S] = P.

2. If the sentence S denotes the proposition P : U -> B such that |P| = P^(-1)(1) = X c U, then [S] = P = f_X = {X}.

3. X    =  {u in U : u in X}

        =  |{X}|  =  {X}^(-1)(1)

        =  |f_X|  =  f_X^(-1)(1).

4. {X}  =  { {u in U : u in X} }

        =  [u in X]

        =  f_X.
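
To make the two directions of travel concrete, here is a toy rendering in the same Python sketch, reusing indicator, Q, X, and fiber_of_truth from above; the dictionary standing in for the denotation of sentences is purely hypothetical.

    denotation = {"u in Q": indicator(Q)}    # a stand-in for [S] : sentence -> proposition

    def spiny_brace(S):
        """A stand-in for {X} : set -> indicator function."""
        return indicator(S)

    P = denotation["u in Q"]                 # [ "u in Q" ]  =  f_Q
    assert fiber_of_truth(P, X) == Q         # |[S]|  =  the set indicated
    assert spiny_brace(Q)(2) == 1            # {Q} evaluated at a member of Q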
 +
Now if a sentence S really denotes a proposition P, and if the notation "[S]" is merely meant to supply another name for the proposition that S already denotes, then why is there any need for the additional notation?  It is because the interpretive mind habitually races from the sentence S, through the proposition P that it denotes, and on to the set X = P^(-1)(1) that the proposition P indicates, often jumping to the conclusion that the set X is the only thing that the sentence S is intended to denote.  This HO sign situation and the mind's inclination when placed within its setting call for a linguistic mechanism or a notational device that is capable of analyzing the compound action and controlling its articulate performance, and this requires a way to interrupt the flow of assertion that typically takes place from S to P to X.
 +
1.3.10.8.  The Cactus Patch
 +
Thus, what looks to us like a sphere of scientific knowledge more accurately should be represented as the inside of a highly irregular and spiky object, like a pincushion or porcupine, with very sharp extensions in certain directions, and virtually no knowledge in immediately adjacent areas.  If our intellectual gaze could shift slightly, it would alter each quill's direction, and suddenly our entire reality would change.
 +
(Herbert Bernstein, NWOK, 38).
 +
In this and the four subsections that follow, I describe a calculus for representing propositions as sentences, in other words, as syntactically defined sequences of signs, and for manipulating these sentences chiefly in the light of their semantically defined contents, in other words, with respect to their logical values as propositions.  In their computational representation, the expressions of this calculus parse into a class of tree-like data structures called "painted cacti".  This is a family of graph-theoretic data structures that can be observed to have especially nice properties, turning out to be not only useful from a computational standpoint but also quite interesting from a theoretical point of view.  The rest of this subsection serves to motivate the development of this calculus and treats a number of general issues that surround the topic.
 +
In order to facilitate the use of propositions as indicator functions it helps to acquire a flexible notation for referring to propositions in that light, for interpreting sentences in a corresponding role, and for negotiating the requirements of mutual sense between these two domains.  If none of the formalisms that are readily available or in common use are able to meet the design requirements that come to mind, then it is necessary to contemplate the design of a new language that is especially tailored to the purpose.  In the present application, there is a pressing need to devise a general calculus for composing propositions, computing their values on particular arguments, and inverting their indications to arrive at the sets of things in the universe that are indicated by them.
 +
For computational purposes, it is convenient to have a middle ground or an intermediate language for negotiating between the koine of sentences regarded as strings of literal characters and the realm of propositions regarded as objects of logical value, even if this renders it necessary to introduce an artificial medium of exchange between these two domains.  If one envisions these computations to be carried out in any organized fashion, and ultimately or partially by means of the familiar sorts of machines, then the strings that represent these logical propositions are likely to find themselves parsed into tree-like data structures at some stage of the game.  With regard to their abstract structures as graphs, there are several species of graph-theoretic data structures that can be used to accomplish this job in a reasonably effective and efficient way.
 +
Over the course of this project, I plan to use two species of graphs:
 +
1.  "painted and rooted cacti" (PARCA).
 +
2.  "painted and rooted conifers" (PARCO).
 +
For now, it is enough to discuss the former class of data structures, leaving the consideration of the latter class to a part of the project where their distinctive features are key to developments at that stage.  Accordingly, within the context of the current patch of discussion, or until it becomes necessary to attach further notice to the conceivable varieties of parse graphs, the acronym "PARC" is sufficient to indicate the pertinent genus of abstract graph that is under consideration.
 +
By way of making these tasks feasible to carry out on a regular basis, a prospective language designer is required not only to supply a fluent medium for the expression of propositions, but further to accompany the assertions of their sentences with a canonical mechanism for teasing out the fibers of their indicator functions.  Accordingly, with regard to a body of conceivable propositions, one needs to furnish a standard array of techniques for following the threads of their indications from their objective universe to their values for the mind and back again, that is, for tracing the clues that sentences provide from the universe of their objects to the signs of their values, and, in turn, from signs to objects.  Ultimately, one seeks to render propositions so functional as indicators of sets and so essential for examining the equality of sets that they can constitute a veritable criterion for the practical conceivability of sets.  Tackling this task requires me to introduce a number of new definitions and a collection of additional notational devices, to which I now turn.
 +
Depending on whether a formal language is called by the type of sign that makes it up or whether it is named after the type of object that its signs are intended to denote, one may refer to this cactus language as a "sentential calculus" or as a "propositional calculus", respectively.
 +
When the syntactic definition of the language is well enough understood, then the language can begin to acquire a semantic function.  In natural circumstances, the syntax and the semantics are likely to be engaged in a process of co-evolution, whether in ontogeny or in phylogeny, that is, the two developments probably form parallel sides of a single bootstrap.  But this is not always the easiest way, at least, at first, to formally comprehend the nature of their action or the power of their interaction.
 +
According to the customary mode of formal reconstruction, the language is first presented in terms of its syntax, in other words, as a formal language of strings called "sentences", amounting to a particular subset of the possible strings that can be formed on a finite alphabet of signs.  A syntactic definition of the "cactus language", one that proceeds along purely formal lines, is carried out in the next subsection.  After that, the development of the language's more concrete aspects can be seen as a matter of defining two functions:  The first is a function that takes each sentence of the language into a computational data structure, to be exact, a tree-like parse graph called a "painted cactus".  The second is a function that takes each sentence of the language, or its interpolated parse graph, into a logical proposition, in effect, ending up with an indicator function as the object denoted by the sentence.
 +
The discussion of syntax brings up a number of associated issues that have to be clarified before going on.  These are questions of "style", that is, the sort of description, "grammar", or theory that one finds available or chooses as preferable for a given language.  These issues are discussed in the subsection after next (Subsection 10).
 +
There is an aspect of syntax that is so schematic in its basic character that it can be conveyed by computational data structures, so algorithmic in its uses that it can be automated by routine mechanisms, and so fixed in its nature that its practical exploitation can be served by the usual devices of computation.  Because it involves the transformation of signs, it can be recognized as an aspect of semiotics.  Since it can be carried out in abstraction from meaning, it is not up to the level of semantics, much less a complete pragmatics, though it does incline to the pragmatic aspects of computation that are auxiliary to and incidental to the human use of language.  Therefore, I refer to this aspect of formal language use as the "algorithmics" or the "mechanics" of language processing.  A mechanical conversion of the "cactus language" into its associated data structures is discussed in Subsection 11.
 +
In the usual way of proceeding on formal grounds, meaning is added by giving each "grammatical sentence", or each syntactically distinguished string, an interpretation as a logically meaningful sentence, in effect, providing each abstractly well-formed sentence with a proposition for it to denote.  A semantic interpretation of the "cactus language" is carried out in Subsection 12.
 +
</pre>
    
==References==
 