# Difference between revisions of "Talk:Logical graph"


## Revision as of 19:34, 8 December 2008

## Notes & Queries

\(\cdots\)

## Place For Discussion

\(\cdots\)

## Logical Equivalence Problem

### Problem

Problem posted by Mike1234 on the Discrete Math List at the Math Forum.

- Required to show that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p.\)

### Solution

Solution posted by Jon Awbrey, using the calculus of logical graphs.

In logical graphs, the required equivalence looks like this:

    q o   o p          q o
      |   |              |
    p o   o q            o   o p
       \ /               |   |
        o              p o   o--o q
        |                 \ /
        @       =          @

We have a theorem that says:

    y o        xy o
      |           |
    x @    =    x @

See Logical Graph : \(C_2\) (Generation Theorem).
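As a quick sanity check (a Python sketch, not part of the original derivation), the generation theorem can be verified over the whole boolean domain:

```python
# Truth-table check of the generation theorem C2: x (y) = x (x y).
# Under the existential interpretation, x (y) says "x and not y",
# and x (x y) says "x and not (x and y)".
from itertools import product

def lhs(x, y):
    return x and not y            # x (y)

def rhs(x, y):
    return x and not (x and y)    # x (x y)

for x, y in product((False, True), repeat=2):
    assert lhs(x, y) == rhs(x, y)
print("C2 holds on all of B x B")
```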

Applying this twice to the left hand side of the required equation, we get:

    q o   o p        pq o   o pq
      |   |             |   |
    p o   o q         p o   o q
       \ /               \ /
        o                 o
        |                 |
        @        =        @

By collection, the reverse of distribution, we get:

    p q
    o o   pq
     \ /
      o   o
       \ /
        @

But this is the same result that we get from one application of double negation to the right hand side of the required equation.

QED
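For readers who want an independent check, here is a small Python truth-table verification of the equivalence just proved (an illustrative sketch, separate from the graphical proof):

```python
# Independent truth-table check that not(p <=> q) = ((not q) <=> p).
from itertools import product

def lhs(p, q):
    return not (p == q)       # ¬(p ⇔ q)

def rhs(p, q):
    return (not q) == p       # (¬q) ⇔ p

assert all(lhs(p, q) == rhs(p, q) for p, q in product((False, True), repeat=2))
print("equivalence verified on all four cases")
```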

### Discussion

Back to the initial problem:

- Show that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p.\)

We can translate this into logical graphs by supposing that we have to express everything in terms of negation and conjunction, using parentheses for negation and simple concatenation for conjunction. In this way of assigning logical meaning to graphical forms — for historical reasons called the "existential interpretation" of logical graphs — basic logical forms are given the following expressions:

The constant \(\operatorname{true}\) is written as a null character or a space.

This corresponds to an unlabeled terminal node in a logical graph. When we are thinking of it by itself, we draw it as a rooted node:

    @

The constant \(\operatorname{false}\) is written as an empty parenthesis: \((~).\)

This corresponds to an unlabeled terminal edge in a logical graph. When we are thinking of it by itself, we draw it as a rooted edge:

    o
    |
    @

The negation \(\lnot x\) is written \((x).\!\)

This corresponds to the logical graph:

    x o
      |
      @

The conjunction \(x \land y\) is written \(x y.\!\)

This corresponds to the logical graph:

    x y
     @

The conjunction \(x \land y \land z\) is written \(x y z.\!\)

This corresponds to the logical graph:

    x y z
      @

And so on.

The disjunction \(x \lor y\) is written \(((x)(y)).\!\)

This corresponds to the logical graph:

    x y
    o o
     \ /
      o
      |
      @

The disjunction \(x \lor y \lor z\) is written \(((x)(y)(z)).\!\)

This corresponds to the logical graph:

    x y z
    o o o
     \|/
      o
      |
      @

And so on.

The implication \(x \Rightarrow y\) is written \((x (y)).\!\) Reading the latter as "not \(x\!\) without \(y\!\)" helps to recall its implicational sense.

This corresponds to the logical graph:

    y o
      |
    x o
      |
      @

Thus, the equivalence \(x \Leftrightarrow y\) has to be written somewhat inefficiently as a conjunction of two implications: \((x (y)) (y (x)).\!\)

This corresponds to the logical graph:

    y o   o x
      |   |
    x o   o y
       \ /
        @

Putting all the pieces together, showing that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p\) amounts to proving the following equation, expressed in the forms of logical graphs and parse strings, respectively:

    q o   o p          q o
      |   |              |
    p o   o q            o   o p
       \ /               |   |
        o              p o   o--o q
        |                 \ /
        @       =          @

    ( (p (q)) (q (p)) )  =  (p ( (q) )) ((p)(q))

That expresses the proposed equation in the language of logical graphs. To test whether the equation holds we need to use the rest of the formal system that comes with this formal language, namely, a set of axioms taken for granted and a set of inference rules that allow us to derive the consequences of these axioms.
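One way to make the parse-string reading concrete is a tiny evaluator (an illustrative Python sketch; the function names are mine, not part of the original system) that treats concatenation as conjunction and "(...)" as negating the conjunction of its contents:

```python
# A tiny evaluator for logical-graph parse strings under the existential
# interpretation: concatenation is conjunction, "(...)" negates the
# conjunction of its contents, and variables are single letters.

def evaluate(s, env):
    """Evaluate parse string s against env, a dict of boolean values."""
    def parse(i):
        # Evaluate a conjunction starting at index i; stop at ')' or end.
        value = True
        while i < len(s) and s[i] != ')':
            if s[i] == '(':
                inner, i = parse(i + 1)
                i += 1                     # step over the closing ')'
                value = value and not inner
            elif s[i].isspace():
                i += 1
            else:
                value = value and env[s[i]]
                i += 1
        return value, i
    return parse(0)[0]

# Check the required equation at all four points of B x B.
lhs = "( (p (q)) (q (p)) )"
rhs = "(p ((q))) ((p)(q))"
for p in (False, True):
    for q in (False, True):
        env = {'p': p, 'q': q}
        assert evaluate(lhs, env) == evaluate(rhs, env)
print("the two parse strings agree on all of B x B")
```

Note that the empty parse string evaluates to \(\operatorname{true}\) and \((~)\) evaluates to \(\operatorname{false},\) matching the conventions listed above.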

The formal system that we use for logical graphs has just four axioms.

Proceeding from these axioms is a handful of very simple theorems that we tend to use over and over in deriving more complex theorems, the generation and double negation theorems among them.

In my experience with a number of different propositional calculi, the logical graph picture is almost always the best way to see *why* a theorem is true. In the example at hand, most of the work was already done by the time we wrote down the problem in logical graph form. All that remained was to see the application of the generation and double negation theorems to the left and right hand sides of the equation, respectively.

### Reflection

\(\cdots\)

## Logical Graph Sandbox

### More thoughts on Peirce's law

1-way version: \(((a \Rightarrow b) \Rightarrow a) \Rightarrow a\)

    a o--o b
         |
         o--o a
         |
         o--o a
         |
         @       =       @

2-way version: \(((a \Rightarrow b) \Rightarrow a) \Leftrightarrow a\)

    a o--o b
         |
         o--o a
         |           a
         @     =     @
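Both versions can be confirmed by brute force over the boolean domain (a Python sketch, offered only as a check on the graphs):

```python
# Brute-force check of Peirce's law over the boolean domain.
def imp(x, y):
    return (not x) or y

for a in (False, True):
    for b in (False, True):
        # 1-way version: ((a => b) => a) => a is always true.
        assert imp(imp(imp(a, b), a), a)
        # 2-way version: (a => b) => a has the same value as a.
        assert imp(imp(a, b), a) == a
print("both versions of Peirce's law check out")
```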

Compare with:

    a o b
      |
      o--o a
      |           a
      @     =     @

That is:

    ab   a
     o   o
      \ /
       o
       |          a
       @    =     @

This is the so-called *absorption law*, commonly written in the following ways:

\[(a \land b) \lor a \iff a\]

\[ab \lor a = a\]
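The law is easy to confirm by exhausting the four cases (a small Python check, not part of the original text):

```python
# Exhaustive check of the absorption law: (a and b) or a == a.
for a in (False, True):
    for b in (False, True):
        assert ((a and b) or a) == a
print("absorption law verified")
```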

### Reports of my counter-intuitiveness are greatly exaggerated

We have the following theorem of classical propositional calculus:

\[(p \Rightarrow q) \lor (q \Rightarrow p)\]

The proposition may appear counter-intuitive on some ways of reading it, and it is usually excluded from the theorems of intuitionistic propositional calculi.

Read as a statement about the values of propositions — where the values \(p, q\!\) are drawn from the boolean domain \(\mathbb{B} = \{0, 1 \}\) — and written as an order law, its sense may become more sensible:

\[(p \le q) \lor (q \le p)\]
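Read this way, the claim is just that any two elements of \(\mathbb{B}\) are comparable, which a two-line check confirms (an illustrative Python sketch):

```python
# With p, q in B = {0, 1}, the disjunction of implications reads as
# (p <= q) or (q <= p): any two elements of B are comparable.
for p in (0, 1):
    for q in (0, 1):
        assert (p <= q) or (q <= p)
print("B is totally ordered")
```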

Here it is in logical graphs:

    q o   o p
      |   |
    p o   o q
      |   |
      o   o
       \ /
        o
        |
        @       =       @

Proof

    q o   o p
      |   |
    p o   o q
      |   |        q   p
      o   o        o   o        o   o        o         o
       \ /          \ /          \ /         |         |
        o         pq o         pq o        pq o        o
        |            |            |           |        |
        @     =      @     =      @     =     @    =   @    =    @

My guess as to what's going on here — why the classical and intuitionistic reasoners appear to be talking past each other on this score — is that they are really talking about two different domains of mathematical objects. That is, the variables \(p, q\!\) range over \(\mathbb{B}\) in the classical reading while they range over a space of propositions, say, \(p, q : X \to \mathbb{B}\) in the intuitionistic reading of the formulas. Just my initial guess.

On the reading \(P, Q : X \to \mathbb{B},\) another guess at what's gone awry here might be the difference between the following two statements:

\((\forall x \in X)((Px \Rightarrow Qx) \lor (Qx \Rightarrow Px))\)

\((\forall x \in X)(Px \Rightarrow Qx) \lor (\forall x \in X)(Qx \Rightarrow Px)\)

But the latter is not a theorem in anyone's philosophy, so there is really no disagreement here.
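The difference between the two quantified statements can be exhibited concretely (a Python sketch; the domain X and the predicates P, Q are hypothetical choices made only to witness the difference):

```python
# The classical theorem quantifies the whole disjunction.  Distributing
# the quantifier over the disjuncts yields a strictly stronger statement
# that is easy to refute.  X, P, Q below are hypothetical choices made
# only to witness the difference.
def implies(a, b):
    return (not a) or b

X = [0, 1, 2]
P = lambda x: x <= 1    # true of 0 and 1
Q = lambda x: x >= 1    # true of 1 and 2

# (forall x)((Px => Qx) or (Qx => Px)): true, as the theorem predicts.
wide = all(implies(P(x), Q(x)) or implies(Q(x), P(x)) for x in X)

# (forall x)(Px => Qx) or (forall x)(Qx => Px): false for these P, Q.
narrow = (all(implies(P(x), Q(x)) for x in X)
          or all(implies(Q(x), P(x)) for x in X))

print(wide, narrow)    # True False
```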

### Functional quantifiers

**Exercises.** Express the following in functional terms:

#### Exercise 1

\((\forall x \in X)(Px \Rightarrow Qx)\)

This is just the form \(\operatorname{All}\ P\ \operatorname{are}\ Q,\) already covered under Application of Higher Order Propositions to Quantification Theory in the paper Functional Logic : Quantification Theory.

Need to think a little more about the proposition \(p \Rightarrow q\) as a boolean function of type \(\mathbb{B}^2 \to \mathbb{B}\) and the corresponding higher order proposition of type \((\mathbb{B}^2 \to \mathbb{B}) \to \mathbb{B}.\)
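One concrete way to picture the two types (an illustrative Python sketch; the names and the particular higher order proposition are my own choices, not notation from the source) is to represent a proposition on \(\mathbb{B}^2\) as a map from its four points to \(\mathbb{B}\):

```python
# A proposition of type B^2 -> B is a map from the four points of B^2
# to B; a higher order proposition of type (B^2 -> B) -> B takes such
# maps as its arguments.
from itertools import product

B = (False, True)
points = list(product(B, B))                  # the four points of B^2

implication = {(p, q): (not p) or q for p, q in points}

# A sample higher order proposition: "f holds at every point where
# implication holds", i.e. implication implies f pointwise.
def above_implication(f):
    return all(f[pt] for pt in points if implication[pt])

true_map = {pt: True for pt in points}        # constantly true
conj_map = {(p, q): p and q for p, q in points}

print(above_implication(implication),   # True: implication is above itself
      above_implication(true_map),      # True
      above_implication(conj_map))      # False: fails at (False, False)
```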

#### Exercise 2

\((\forall x \in X)((Px \Rightarrow Qx) \lor (Qx \Rightarrow Px))\)

#### Exercise 3

\((\forall x \in X)(Px \Rightarrow Qx) \lor (\forall x \in X)(Qx \Rightarrow Px)\)