# Talk:Logical graph

## Notes & Queries

$$\cdots$$

## Place For Discussion

$$\cdots$$

## Logical Equivalence Problem

### Problem

• Required to show that $$\lnot (p \Leftrightarrow q)$$ is equivalent to $$(\lnot q) \Leftrightarrow p.$$

### Solution

In logical graphs, the required equivalence looks like this:

      q o   o p           q o
        |   |               |
      p o   o q             o   o p
         \ /                |   |
          o               p o   o--o q
          |                  \ /
          @         =         @


We have a theorem that says:

        y o                xy o
          |                   |
        x @        =        x @
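This theorem (the generation rule) can be spot-checked by brute force. Here is a quick sketch in Python, reading the graphs in the existential interpretation, where $$x(y)\!$$ means $$x \land \lnot y$$:

```python
from itertools import product

# Generation theorem: x(y) = x(xy),
# i.e. x and (not y) equals x and not (x and y).
for x, y in product([False, True], repeat=2):
    lhs = x and not y
    rhs = x and not (x and y)
    assert lhs == rhs
```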


Applying this twice to the left hand side of the required equation, we get:

      q o   o p          pq o   o pq
        |   |               |   |
      p o   o q           p o   o q
         \ /                 \ /
          o                   o
          |                   |
          @         =         @


By collection, the reverse of distribution, we get:

           p   q
           o   o
      pq    \ /
         o   o
          \ /
           @


But this is the same result that we get from one application of double negation to the right hand side of the required equation: deleting the double negation around $$q\!$$ reduces $$(p ( (q) ))\!$$ to $$(p q),\!$$ leaving $$(p q) ((p)(q)).\!$$

QED
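As a check by other means, here is a quick truth-table verification in Python (using `==` on booleans for $$\Leftrightarrow$$):

```python
from itertools import product

# Verify: not(p <=> q) is equivalent to (not q) <=> p.
for p, q in product([False, True], repeat=2):
    lhs = not (p == q)       # negated equivalence
    rhs = (not q) == p       # equivalence of (not q) with p
    assert lhs == rhs
```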

### Discussion

Back to the initial problem:

• Show that $$\lnot (p \Leftrightarrow q)$$ is equivalent to $$(\lnot q) \Leftrightarrow p.$$

We can translate this into logical graphs by supposing that we have to express everything in terms of negation and conjunction, using parentheses for negation and simple concatenation for conjunction. In this way of assigning logical meaning to graphical forms — for historical reasons called the "existential interpretation" of logical graphs — basic logical forms are given the following expressions:

The constant $$\operatorname{true}$$ is written as a null character or a space.

This corresponds to an unlabeled terminal node in a logical graph. When we are thinking of it by itself, we draw it as a rooted node:

          @


The constant $$\operatorname{false}$$ is written as an empty parenthesis $$(~).$$

This corresponds to an unlabeled terminal edge in a logical graph. When we are thinking of it by itself, we draw it as a rooted edge:

          o
          |
          @


The negation $$\lnot x$$ is written $$(x).\!$$

This corresponds to the logical graph:

          x
          o
          |
          @


The conjunction $$x \land y$$ is written $$x y.\!$$

This corresponds to the logical graph:

         x y
          @


The conjunction $$x \land y \land z$$ is written $$x y z.\!$$

This corresponds to the logical graph:

        x y z
          @


And so on.

The disjunction $$x \lor y$$ is written $$((x)(y)).\!$$

This corresponds to the logical graph:

        x   y
        o   o
         \ /
          o
          |
          @


The disjunction $$x \lor y \lor z$$ is written $$((x)(y)(z)).\!$$

This corresponds to the logical graph:

        x y z
        o o o
         \|/
          o
          |
          @


And so on.

The implication $$x \Rightarrow y$$ is written $$(x (y)).\!$$ Reading the latter as "not $$x\!$$ without $$y\!$$" helps to recall its implicational sense.

This corresponds to the logical graph:

        y o
          |
        x o
          |
          @


Thus, the equivalence $$x \Leftrightarrow y$$ has to be written somewhat inefficiently as a conjunction of two implications: $$(x (y)) (y (x)).\!$$

This corresponds to the logical graph:

      y o   o x
        |   |
      x o   o y
         \ /
          @


Putting all the pieces together, showing that $$\lnot (p \Leftrightarrow q)$$ is equivalent to $$(\lnot q) \Leftrightarrow p$$ amounts to proving the following equation, expressed in the forms of logical graphs and parse strings, respectively:

      q o   o p           q o
        |   |               |
      p o   o q             o   o p
         \ /                |   |
          o               p o   o--o q
          |                  \ /
          @         =         @

( (p (q)) (q (p)) ) = (p ( (q) )) ((p)(q))


That expresses the proposed equation in the language of logical graphs. To test whether the equation holds we need to use the rest of the formal system that comes with this formal language, namely, a set of axioms taken for granted and a set of inference rules that allow us to derive the consequences of these axioms.
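Before bringing in the axioms, the equation can at least be checked semantically. Here is a small evaluator for these parse strings, a sketch of my own in Python (the name `graph_eval` and the single-letter-variable convention are conveniences for illustration, not part of the formal system):

```python
def graph_eval(s, env):
    """Evaluate a logical-graph parse string under the existential
    interpretation: concatenation = conjunction, '( ... )' = negation.
    Variables are single letters looked up in env."""
    def seq(i):
        # Evaluate a run of juxtaposed terms starting at index i.
        val = True
        while i < len(s) and s[i] != ')':
            if s[i] == ' ':
                i += 1
            elif s[i] == '(':
                inner, i = seq(i + 1)
                i += 1                     # skip the closing ')'
                val = val and not inner    # enclosure negates its contents
            else:
                val = val and env[s[i]]    # a variable letter
                i += 1
        return val, i
    return seq(0)[0]

# Check the proposed equation on all four assignments of p, q.
lhs = "( (p (q)) (q (p)) )"
rhs = "(p ( (q) )) ((p)(q))"
for p in (False, True):
    for q in (False, True):
        env = {'p': p, 'q': q}
        assert graph_eval(lhs, env) == graph_eval(rhs, env)
```

Brute force over the four assignments confirms that the two strings express the same boolean function, though of course this is model checking, not a derivation in the formal system.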

The formal system that we use for logical graphs has just four axioms. Proceeding from these axioms is a handful of very simple theorems that we tend to use over and over in deriving more complex theorems.

In my experience with a number of different propositional calculi, the logical graph picture is almost always the best way to see why a theorem is true. In the example at hand, most of the work was already done by the time we wrote down the problem in logical graph form. All that remained was to see the application of the generation and double negation theorems to the left and right hand sides of the equation, respectively.

### Reflection

$$\cdots$$

## Inquiry Into Intuitionism

Notes on a discussion with "Gribskoff" (Manuel S. Lourenço) about his article on Intuitionistic Logic at PlanetMath.

$$\cdots$$

## Bridges And Barriers

Notes on a couple of discussions that I found in the Foundations Of Mathematics Archives (FOMA) about building bridges between classical-apagogical and constructive-intuitionistic mathematics.

### Background

AM = A Mani
HF = Harvey Friedman
NT = Neil Tennant
SS = Stephen G Simpson
TF = Torkel Franzen
VP = Vaughan Pratt

### Foreground

Harvey Friedman (15 Oct 2008), "Classical/Constructive Mathematics", FOMA.

There seems to be a resurgence of interest in comparisons between classical and constructive (foundations of) mathematics. This is a topic that has been discussed quite a lot previously on the FOM. I have been an active participant in prior discussions.

There was a lot of basic information presented earlier, and I think that it would be best to restate some of this, so that the discussion can go forward with its benefit.

In this message, I would like to focus on some important ways in which classical and constructive foundations are alike or closely related.

For many formal systems for fragments of classical mathematics, T, there is a corresponding system T' obtained by merely restricting the classical logical axioms to constructive logical axioms — where the resulting system is readily acceptable as a formal system for a "corresponding" fragment of constructive mathematics. Of course, there may be good ways of restating the axioms in the classical system, which do NOT lead to any reasonable fragment of constructive mathematics in this way.

The most well known example of this is PA = Peano Arithmetic. Suppose we formalize PA in the most common way, with the axioms for successor, the defining axioms for addition and multiplication, and the axiom scheme of induction, with the usual axioms and rules of classical logic. Then HA = Heyting Arithmetic, is simply PA with the axioms and rules of classical logic weakened to the axioms and rules of constructive logic.

Why do we consider HA as being a reasonable constructive system? A common answer is simply that a constructivist reads the axioms as "true" or "valid".

An apparently closely related fact about HA is purely formal. HA possesses a great number of properties that are commonly associated with "constructivism". The early pioneering work along these lines is, if I remember correctly, due to S.C. Kleene. Members of this list should be able to supply really good references for this work, better than I can. PA possesses NONE of these properties.

RESEARCH PROBLEM: Is there such a thing as a complete list of such formal properties? Is there a completeness theorem along these lines? I.e., can we state and prove that HA obeys all such (good from the constructive viewpoint) properties?

On the other hand, we can formalize PA, equivalently, using the *least number principle scheme* instead of the induction scheme. If a property holds of n, then that property holds of a least n. Then, when we convert to constructive logic, we get a system PA# that is equivalent to PA — thus possessing none of these properties!

For many of these T,T' pairs, some very interesting relationships obtain between the T and T'. Here are three important ones.

1. It can be proved that T is consistent if and only if T' is consistent.

2. Every A…A sentence, whose matrix has only bounded quantifiers, that is provable in T, is already provable in T'.

3. More strongly, every A…AE…E sentence, whose matrix has only bounded quantifiers, that is provable in T, is already provable in T'.

The issue arises as to just where these proofs are carried out — e.g., constructively or classically. This is particularly critical in the case of 1. The situation is about as "convincing" as possible:

Specifically, for each of these results, one can use weak quantifier free systems K of arithmetic, where constructive and classical amount to the same. E.g., for 1, there is a primitive operation in K which, provably in K, converts any inconsistency in T to a corresponding inconsistency in T'.

Results like 1 point in the direction of there being no difference between the "safety" of classical and constructive mathematics.

Results like 2,3 point in the direction of there being no difference between the "applicability" of classical and constructive mathematics, in many contexts.

CAUTION: For AEA sentences, PA and HA differ. There are some celebrated A…AE…EA…A theorems of PA which are not known to be provable in HA. Some examples were discussed previously on the FOM.

RESEARCH PROBLEM: Determine, in some readily intelligible terms (perhaps classical), necessary and sufficient conditions for a sentence of a given form to be provable in HA and in PA. Matters get delicate when there are several quantifiers and arrows (-->) present.

I will continue with this if sufficient responses are generated.

I, too, find myself returning to questions about classical v. constructive logic lately, partly in connection with Peirce's Law, the Propositions As Types (PAT) analogy, the question of a PAT analogy for classical propositional calculus, and the eternal project of integrating functional, relational, and logical styles of programming as much as possible.

I am still in the phase of chasing down links between the various questions and I don't have any news or conclusions to offer, but my web searches keep bringing me back to this old discussion on the FOM list:

I find one comment by Vaughan Pratt to be especially e-∧/∨-pro-vocative:

Vaughan Pratt (27 Feb 1998), "Intuitionistic Mathematics and Building Bridges", FOMA.

It has been my impression from having dealt with a lot of lawyers over the last twenty years that the logic of the legal profession is rarely Boolean, with a few isolated exceptions such as jury verdicts which permit only guilty or not guilty, no middle verdict allowed. Often legal logic is not even intuitionistic, with conjunction failing commutativity and sometimes even idempotence. But that aside, excluded middle and double negation are the exception rather than the rule.

Lawyers aren't alone in this. The permitted rules of reasoning that go along with whichever scientific method is currently in vogue seem to have the same non-Boolean character in general.

The very *thought* of a lawyer or scientist appealing to Peirce's law, ((P→Q)→P)→P, to prove a point boggles the mind. And imagine them trying to defend their use of that law by actually proving it: the audience would simply assume this was one of those bits of logical sleight-of-hand where the wool is pulled over one's eyes by some sophistry that goes against common sense.

Anyway, to make a long story elliptic, here is one of my current write-ups on Peirce's Law that led me back into this old briar patch:

More to say on this later, but I just wanted to get a good chunk of the background set out in one place.

## Logical Graph Sandbox : Very Rough Sand Reckoning

### More thoughts on Peirce's law

1-way version: $$((a \Rightarrow b) \Rightarrow a) \Rightarrow a$$

      a o--o b
        |
        o--o a
        |
        o--o a
        |
        @         =         @
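The 1-way version is a classical tautology, as a brute-force check confirms. A minimal sketch in Python:

```python
from itertools import product

def implies(x, y):
    return (not x) or y

# Peirce's law, 1-way: ((a => b) => a) => a holds on all assignments.
for a, b in product([False, True], repeat=2):
    assert implies(implies(implies(a, b), a), a)
```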


2-way version: $$((a \Rightarrow b) \Rightarrow a) \Leftrightarrow a$$

      a o--o b
        |
        o--o a
        |                   a
        @         =         @


Compare with:

      a o b
        |
        o--o a
        |                   a
        @         =         @


That is:

       ab   a
        o   o
         \ /
          o
          |                   a
          @         =         @


This is the so-called absorption law, commonly written in the following ways:

$$(a \land b) \lor a \iff a$$

$$ab \lor a = a$$

### Reports of my counter-intuitiveness are greatly exaggerated

We have the following theorem of classical propositional calculus:

$$(p \Rightarrow q) \lor (q \Rightarrow p)$$

The proposition may appear counter-intuitive under some ways of reading it, and it is usually excluded from the theorems of intuitionistic propositional calculi.

Read as a statement about the values of propositions — where the values $$p, q\!$$ are drawn from the boolean domain $$\mathbb{B} = \{0, 1 \}$$ — and written as an order law, its sense may become more sensible:

$$(p \le q) \lor (q \le p)$$

Here it is in logical graphs:

      q o   o p
        |   |
      p o   o q
        |   |
        o   o
         \ /
          o
          |
          @         =         @


Proof

      q o   o p
        |   |
      p o   o q
        |   |       q   p
        o   o       o   o       o   o         o           o
         \ /         \ /         \ /          |           |
          o        pq o        pq o        pq o           o
          |           |           |           |           |
          @     =     @     =     @     =     @     =     @     =     @


My guess as to what's going on here — why the classical and intuitionistic reasoners appear to be talking past each other on this score — is that they are really talking about two different domains of mathematical objects. That is, the variables $$p, q\!$$ range over $$\mathbb{B}$$ in the classical reading while they range over a space of propositions, say, $$p, q : X \to \mathbb{B}$$ in the intuitionistic reading of the formulas. Just my initial guess.

On the reading $$P, Q : X \to \mathbb{B},$$ another guess at what's gone awry here might be the difference between the following two statements:

$$(\forall x \in X)((Px \Rightarrow Qx) \lor (Qx \Rightarrow Px))$$

$$(\forall x \in X)(Px \Rightarrow Qx) \lor (\forall x \in X)(Qx \Rightarrow Px)$$

But the latter is not a theorem in anyone's philosophy, so there is really no disagreement here.
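The difference is easy to exhibit concretely. In the sketch below, the universe `X` and the predicates `P`, `Q` are hypothetical examples of my own choosing; the pointwise disjunction holds at every point, while the distributed version fails:

```python
X = [0, 1, 2, 3]
P = lambda x: x % 2 == 0   # hypothetical predicate: x is even
Q = lambda x: x < 2        # hypothetical predicate: x is small

def implies(a, b):
    return (not a) or b

# (forall x)((Px => Qx) or (Qx => Px)) -- true at every point,
# since it is a classical tautology pointwise.
assert all(implies(P(x), Q(x)) or implies(Q(x), P(x)) for x in X)

# (forall x)(Px => Qx) or (forall x)(Qx => Px) -- fails for this P, Q,
# because neither predicate lies below the other over all of X.
assert not (all(implies(P(x), Q(x)) for x in X) or
            all(implies(Q(x), P(x)) for x in X))
```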

## Functional Quantifiers

The umpire measure of type $$\Upsilon : (X \to \mathbb{B}) \to \mathbb{B}$$ links the constant proposition $$1 : X \to \mathbb{B}$$ to a value of 1 and every other proposition to a value of 0. Expressed in symbolic form:

 $$\Upsilon (u) = 1_\mathbb{B} \quad \Leftrightarrow \quad u = 1_{X \to \mathbb{B}}.$$

The umpire operator of type $$\Upsilon : (X \to \mathbb{B})^2 \to \mathbb{B}$$ links pairs of propositions in which the first implies the second to a value of 1 and every other pair to a value of 0. Expressed in symbolic form:

 $$\Upsilon (u, v) = 1 \quad \Leftrightarrow \quad u \Rightarrow v.$$
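Over a finite universe the umpire measure and operator are easy to realize directly. A sketch in Python, with a hypothetical four-point universe and illustrative function names of my own:

```python
X = range(4)   # hypothetical finite universe

def upsilon_measure(u):
    """Upsilon(u) = 1 iff u is the constant proposition 1 on X."""
    return int(all(u(x) for x in X))

def upsilon_operator(u, v):
    """Upsilon(u, v) = 1 iff u(x) implies v(x) at every point x of X."""
    return int(all((not u(x)) or v(x) for x in X))

ones  = lambda x: True          # the constant proposition 1
evens = lambda x: x % 2 == 0    # a non-constant proposition

assert upsilon_measure(ones) == 1
assert upsilon_measure(evens) == 0
assert upsilon_operator(evens, ones) == 1
assert upsilon_operator(ones, evens) == 0
```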

### Tables

Define two families of measures:

 $$\alpha_i, \beta_i : (\mathbb{B}^2 \to \mathbb{B}) \to \mathbb{B}, i = 0 \ldots 15,$$

by means of the following formulas:

 $$\alpha_i f = \Upsilon (f_i, f) = \Upsilon (f_i \Rightarrow f),$$

 $$\beta_i f = \Upsilon (f, f_i) = \Upsilon (f \Rightarrow f_i).$$

#### Table 1

The values of the sixteen $$\alpha_i\!$$ on each of the sixteen boolean functions $$f : \mathbb{B}^2 \to \mathbb{B}$$ are shown in Table 1. Expressed in terms of the implication ordering on the sixteen functions, $$\alpha_i f = 1\!$$ says that $$f\!$$ is above or identical to $$f_i\!$$ in the implication lattice, that is, $$\ge f_i\!$$ in the implication ordering.

| $$f\!$$ | $$u{:}1100,\ v{:}1010$$ | $$\text{form}$$ | $$\alpha_0$$ | $$\alpha_1$$ | $$\alpha_2$$ | $$\alpha_3$$ | $$\alpha_4$$ | $$\alpha_5$$ | $$\alpha_6$$ | $$\alpha_7$$ | $$\alpha_8$$ | $$\alpha_9$$ | $$\alpha_{10}$$ | $$\alpha_{11}$$ | $$\alpha_{12}$$ | $$\alpha_{13}$$ | $$\alpha_{14}$$ | $$\alpha_{15}$$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $$f_0$$ | 0000 | $$(~)$$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_1$$ | 0001 | $$(u)(v)\!$$ | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_2$$ | 0010 | $$(u) v\!$$ | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_3$$ | 0011 | $$(u)\!$$ | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_4$$ | 0100 | $$u (v)\!$$ | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_5$$ | 0101 | $$(v)\!$$ | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_6$$ | 0110 | $$(u, v)\!$$ | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_7$$ | 0111 | $$(u v)\!$$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_8$$ | 1000 | $$u v\!$$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_9$$ | 1001 | $$((u, v))\!$$ | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $$f_{10}$$ | 1010 | $$v\!$$ | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| $$f_{11}$$ | 1011 | $$(u (v))\!$$ | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| $$f_{12}$$ | 1100 | $$u\!$$ | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| $$f_{13}$$ | 1101 | $$((u) v)\!$$ | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 |
| $$f_{14}$$ | 1110 | $$((u)(v))\!$$ | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| $$f_{15}$$ | 1111 | $$((~))$$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

#### Table 2

The values of the sixteen $$\beta_i\!$$ on each of the sixteen boolean functions $$f : \mathbb{B}^2 \to \mathbb{B}$$ are shown in Table 2. Expressed in terms of the implication ordering on the sixteen functions, $$\beta_i f = 1\!$$ says that $$f\!$$ is below or identical to $$f_i\!$$ in the implication lattice, that is, $$\le f_i\!$$ in the implication ordering.

| $$f\!$$ | $$u{:}1100,\ v{:}1010$$ | $$\text{form}$$ | $$\beta_0$$ | $$\beta_1$$ | $$\beta_2$$ | $$\beta_3$$ | $$\beta_4$$ | $$\beta_5$$ | $$\beta_6$$ | $$\beta_7$$ | $$\beta_8$$ | $$\beta_9$$ | $$\beta_{10}$$ | $$\beta_{11}$$ | $$\beta_{12}$$ | $$\beta_{13}$$ | $$\beta_{14}$$ | $$\beta_{15}$$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $$f_0$$ | 0000 | $$(~)$$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| $$f_1$$ | 0001 | $$(u)(v)\!$$ | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| $$f_2$$ | 0010 | $$(u) v\!$$ | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| $$f_3$$ | 0011 | $$(u)\!$$ | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| $$f_4$$ | 0100 | $$u (v)\!$$ | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| $$f_5$$ | 0101 | $$(v)\!$$ | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| $$f_6$$ | 0110 | $$(u, v)\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| $$f_7$$ | 0111 | $$(u v)\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| $$f_8$$ | 1000 | $$u v\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| $$f_9$$ | 1001 | $$((u, v))\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| $$f_{10}$$ | 1010 | $$v\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| $$f_{11}$$ | 1011 | $$(u (v))\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| $$f_{12}$$ | 1100 | $$u\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| $$f_{13}$$ | 1101 | $$((u) v)\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| $$f_{14}$$ | 1110 | $$((u)(v))\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| $$f_{15}$$ | 1111 | $$((~))\!$$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
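The table entries can be recomputed directly from the definitions. A Python sketch, which reproduces the rows for $$f_{11} = (u (v))\!$$ in Tables 1 and 2:

```python
# Points of B^2 are listed so that u reads 1100 and v reads 1010,
# matching the column convention of the tables.
points = [(1, 1), (1, 0), (0, 1), (0, 0)]

def f(i):
    """f_i : B^2 -> B as its 4-bit value tuple over the points above."""
    return tuple(int(b) for b in format(i, '04b'))

def leq(g, h):
    """g implies h at every point (the implication ordering)."""
    return all(a <= b for a, b in zip(g, h))

alpha = lambda i, g: int(leq(f(i), g))   # alpha_i g = 1 iff g >= f_i
beta  = lambda i, g: int(leq(g, f(i)))   # beta_i g  = 1 iff g <= f_i

# Row f_11 = 1011 = (u (v)) in Table 1:
assert [alpha(i, f(11)) for i in range(16)] == \
       [1,1,1,1, 0,0,0,0, 1,1,1,1, 0,0,0,0]

# Row f_11 in Table 2:
assert [beta(i, f(11)) for i in range(16)] == \
       [0,0,0,0, 0,0,0,0, 0,0,0,1, 0,0,0,1]
```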

#### Table 3

| $$f\!$$ | $$u{:}1100,\ v{:}1010$$ | $$\text{form}$$ | $$(\ell_{11})$$ $$\text{No } u \text{ is } v$$ | $$(\ell_{10})$$ $$\text{No } u \text{ is } (v)$$ | $$(\ell_{01})$$ $$\text{No } (u) \text{ is } v$$ | $$(\ell_{00})$$ $$\text{No } (u) \text{ is } (v)$$ | $$\ell_{00}$$ $$\text{Some } (u) \text{ is } (v)$$ | $$\ell_{01}$$ $$\text{Some } (u) \text{ is } v$$ | $$\ell_{10}$$ $$\text{Some } u \text{ is } (v)$$ | $$\ell_{11}$$ $$\text{Some } u \text{ is } v$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $$f_0$$ | 0000 | $$(~)$$ | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| $$f_1$$ | 0001 | $$(u)(v)\!$$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 |
| $$f_2$$ | 0010 | $$(u) v\!$$ | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
| $$f_3$$ | 0011 | $$(u)\!$$ | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 |
| $$f_4$$ | 0100 | $$u (v)\!$$ | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 |
| $$f_5$$ | 0101 | $$(v)\!$$ | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| $$f_6$$ | 0110 | $$(u, v)\!$$ | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| $$f_7$$ | 0111 | $$(u v)\!$$ | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 |
| $$f_8$$ | 1000 | $$u v\!$$ | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 |
| $$f_9$$ | 1001 | $$((u, v))\!$$ | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 |
| $$f_{10}$$ | 1010 | $$v\!$$ | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| $$f_{11}$$ | 1011 | $$(u (v))\!$$ | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 |
| $$f_{12}$$ | 1100 | $$u\!$$ | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| $$f_{13}$$ | 1101 | $$((u) v)\!$$ | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 |
| $$f_{14}$$ | 1110 | $$((u)(v))\!$$ | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 |
| $$f_{15}$$ | 1111 | $$((~))$$ | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |

#### Table 4

| $$f\!$$ | $$u{:}1100,\ v{:}1010$$ | $$\text{form}$$ | $$(\ell_{11})$$ $$\text{No } u \text{ is } v$$ | $$(\ell_{10})$$ $$\text{No } u \text{ is } (v)$$ | $$(\ell_{01})$$ $$\text{No } (u) \text{ is } v$$ | $$(\ell_{00})$$ $$\text{No } (u) \text{ is } (v)$$ | $$\ell_{00}$$ $$\text{Some } (u) \text{ is } (v)$$ | $$\ell_{01}$$ $$\text{Some } (u) \text{ is } v$$ | $$\ell_{10}$$ $$\text{Some } u \text{ is } (v)$$ | $$\ell_{11}$$ $$\text{Some } u \text{ is } v$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $$f_0$$ | 0000 | $$(~)$$ | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| $$f_1$$ | 0001 | $$(u)(v)\!$$ | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 |
| $$f_2$$ | 0010 | $$(u) v\!$$ | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
| $$f_4$$ | 0100 | $$u (v)\!$$ | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 |
| $$f_8$$ | 1000 | $$u v\!$$ | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 |
| $$f_3$$ | 0011 | $$(u)\!$$ | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 |
| $$f_{12}$$ | 1100 | $$u\!$$ | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| $$f_6$$ | 0110 | $$(u, v)\!$$ | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| $$f_9$$ | 1001 | $$((u, v))\!$$ | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 |
| $$f_5$$ | 0101 | $$(v)\!$$ | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| $$f_{10}$$ | 1010 | $$v\!$$ | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| $$f_7$$ | 0111 | $$(u v)\!$$ | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 |
| $$f_{11}$$ | 1011 | $$(u (v))\!$$ | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 |
| $$f_{13}$$ | 1101 | $$((u) v)\!$$ | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 |
| $$f_{14}$$ | 1110 | $$((u)(v))\!$$ | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 |
| $$f_{15}$$ | 1111 | $$((~))$$ | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |

#### Table 5

| $$\text{Mnemonic}$$ | $$\text{Category}$$ | $$\text{Classical Form}$$ | $$\text{Alternate Form}$$ | $$\text{Symmetric Form}$$ | $$\text{Operator}$$ |
|---|---|---|---|---|---|
| $$\text{E (Exclusive)}$$ | $$\text{Universal Negative}$$ | $$\text{All}\ u\ \text{is}\ (v)$$ | | $$\text{No}\ u\ \text{is}\ v$$ | $$(\ell_{11})$$ |
| $$\text{A (Absolute)}$$ | $$\text{Universal Affirmative}$$ | $$\text{All}\ u\ \text{is}\ v$$ | | $$\text{No}\ u\ \text{is}\ (v)$$ | $$(\ell_{10})$$ |
| | | $$\text{All}\ v\ \text{is}\ u$$ | $$\text{No}\ v\ \text{is}\ (u)$$ | $$\text{No}\ (u)\ \text{is}\ v$$ | $$(\ell_{01})$$ |
| | | $$\text{All}\ (v)\ \text{is}\ u$$ | $$\text{No}\ (v)\ \text{is}\ (u)$$ | $$\text{No}\ (u)\ \text{is}\ (v)$$ | $$(\ell_{00})$$ |
| | | $$\text{Some}\ (u)\ \text{is}\ (v)$$ | | $$\text{Some}\ (u)\ \text{is}\ (v)$$ | $$\ell_{00}\!$$ |
| | | $$\text{Some}\ (u)\ \text{is}\ v$$ | | $$\text{Some}\ (u)\ \text{is}\ v$$ | $$\ell_{01}\!$$ |
| $$\text{O (Obtrusive)}$$ | $$\text{Particular Negative}$$ | $$\text{Some}\ u\ \text{is}\ (v)$$ | | $$\text{Some}\ u\ \text{is}\ (v)$$ | $$\ell_{10}\!$$ |
| $$\text{I (Indefinite)}$$ | $$\text{Particular Affirmative}$$ | $$\text{Some}\ u\ \text{is}\ v$$ | | $$\text{Some}\ u\ \text{is}\ v$$ | $$\ell_{11}\!$$ |

### Exercises

Express the following formulas in functional terms.

#### Exercise 1

$$(\forall x \in X)(u(x) \Rightarrow v(x))$$

$$\prod_{x \in X} (u_x (v_x)) = 1$$

This is just the form $$\operatorname{All}\ u\ \operatorname{are}\ v,$$ already covered here:

Application of Higher Order Propositions to Quantification Theory

Need to think a little more about the proposition $$u \Rightarrow v$$ as a boolean function of type $$\mathbb{B}^2 \to \mathbb{B}$$ and the corresponding higher order proposition of type $$(\mathbb{B}^2 \to \mathbb{B}) \to \mathbb{B}.$$
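For the finite case, the product formula is straightforward to compute. A sketch in Python, with hypothetical `u`, `v`, `X` of my own choosing:

```python
# "All u are v" as the product over X of (u_x (v_x)),
# where each factor (u_x (v_x)) = not(u(x) and not v(x)).
def all_u_are_v(u, v, X):
    prod = 1
    for x in X:
        prod *= 1 - u(x) * (1 - v(x))
    return prod

X = [0, 1, 2, 3]
u = lambda x: int(x < 2)   # hypothetical predicate
v = lambda x: int(x < 3)   # hypothetical predicate

assert all_u_are_v(u, v, X) == 1   # every u is a v
assert all_u_are_v(v, u, X) == 0   # but not conversely (x = 2 is v, not u)
```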

#### Exercise 2

$$(\forall x \in X)((Px \Rightarrow Qx) \lor (Qx \Rightarrow Px))$$

#### Exercise 3

$$(\forall x \in X)(Px \Rightarrow Qx) \lor (\forall x \in X)(Qx \Rightarrow Px)$$