# Under the ''sign-theoretic'' alternative one takes the partiality as something affecting only the signs used in discussion. Accordingly, one approaches the task as a matter of handling partial information about ordinary objects, namely, the same domains of objects initially given at the outset of discussion.
The only time when a finite sign or expression can give the appearance of determining a perfectly precise content or a post-finite amount of information, for example, when the symbol <math>{}^{\backprime\backprime} e {}^{\prime\prime}\!</math> is used to denote the number also known as “the unique base of the natural logarithms” — this can only happen when interpreters are prepared, by dint of the information embodied in their prior design and preliminary training, to accept as meaningful and be terminally satisfied with what is still only a finite content, syntactically speaking.

Every remaining impression that a perfectly determinate object, an ''individual'' in the original sense of the word, has nevertheless been successfully specified — this can only be the aftermath of some prestidigitation, that is, the effect of some pre-arranged consensus, for example, of accepting a finite system of definitions and axioms that are supposed to define the space <math>\mathbb{R}\!</math> and the element <math>e\!</math> within it, and of remembering or imagining that an effective proof system has once been able or will yet be able to convince one of its demonstrations.
Ultimately, one must be prepared to work with probability distributions that are defined on entire spaces <math>O\!</math> of the relevant objects or outcomes. But probability distributions are just a special class of functions <math>f : O \to [0, 1] \subseteq \mathbb{R},\!</math> where <math>\mathbb{R}\!</math> is the real line, and this means that the corresponding theory of partializations involves the dual aspect of the domain <math>O,\!</math> dealing with the ''functionals'' defined on it, or the functions that map it into ''coefficient'' spaces. And since, in a computational framework, it is unavoidable that every type of coefficient information, real or otherwise, must one way or another be approached bit by bit, all information is defined in terms of the either-or decisions that must be made to really and practically determine it. So, to make a long story short, one might as well approach this dual aspect by starting with the functions <math>f : O \to \mathbb{B} = \{ 0, 1 \},\!</math> in effect, with the logic of propositions.
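The dual aspect described above can be sketched in a few lines of code. This is a minimal illustration, not from the original text: the domain <math>O,\!</math> its elements, and the particular distribution below are all hypothetical, chosen only to show a distribution as a function into <math>[0, 1],\!</math> a proposition as its Boolean special case into <math>\mathbb{B} = \{ 0, 1 \},\!</math> and a real coefficient approached bit by bit through either-or decisions.

```python
# Hypothetical finite object domain O (illustration only).
O = ["a", "b", "c", "d"]

# A probability distribution: a function f : O -> [0, 1] whose values sum to 1.
dist = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
assert abs(sum(dist.values()) - 1.0) < 1e-9

# A proposition: a function g : O -> B = {0, 1}, here the indicator
# function of the subset {a, b} of O.
def prop(x):
    return 1 if x in {"a", "b"} else 0

# Approaching a coefficient "bit by bit": each binary digit of a value
# in [0, 1) records one either-or decision about that value.
def bits(value, n):
    """Return the first n binary digits of a value in [0, 1)."""
    out = []
    for _ in range(n):
        value *= 2
        bit = int(value)
        out.append(bit)
        value -= bit
    return out

print([prop(x) for x in O])   # the proposition as a bit vector over O
print(bits(dist["b"], 4))     # 0.25 in binary, one decision per digit
```

The point of the sketch is only that the Boolean-valued functions are the simplest stratum of the coefficient spaces: once propositions over <math>O\!</math> are in hand, richer coefficients such as probabilities decompose into sequences of the same two-valued decisions.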