Talk:Logical graph

Notes & Queries

Logical Graph Sandbox


\(\cdots\)

Place For Discussion


\(\cdots\)

Logical Equivalence Problem

Problem

Problem posted by Mike1234 on the Discrete Math List at the Math Forum.

  • Required to show that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p.\)

Solution

Solution posted by Jon Awbrey, working by way of logical graphs.

In logical graphs, the required equivalence looks like this:

      q o   o p           q o
        |   |               |
      p o   o q             o   o p
         \ /                |   |
          o               p o   o--o q
          |                  \ / 
          @         =         @

We have a theorem that says:

        y o                xy o
          |                   |
        x @        =        x @

See Logical Graph : C2. Generation Theorem.

Applying this twice to the left hand side of the required equation, we get:

      q o   o p          pq o   o pq
        |   |               |   |
      p o   o q           p o   o q
         \ /                 \ /
          o                   o
          |                   |
          @         =         @

By collection, the reverse of distribution, we get:

          p   q
          o   o
       pq  \ / 
        o   o
         \ /
          @

But this is the same result that we get from one application of double negation to the right hand side of the required equation.
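
In the parenthetical notation spelled out in the Discussion below, where \((x)\) stands for the negation of \(x\) and juxtaposition stands for conjunction, the collection step takes \(((p(pq))(q(pq)))\) to \((pq)((p)(q)),\) and one application of double negation, \(((q)) = q,\) reduces the right hand side \((p((q)))((p)(q))\) to the very same string \((pq)((p)(q)).\)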

QED
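
For readers who want an independent check of the result, a quick brute-force truth table can be run in Python. This is only a semantic cross-check, not part of the graphical derivation above.

    from itertools import product

    for p, q in product([False, True], repeat=2):
        lhs = not (p == q)        # the left side:  ¬(p ⇔ q)
        rhs = (not q) == p        # the right side: (¬q) ⇔ p
        assert lhs == rhs, (p, q)

    print("¬(p ⇔ q) and (¬q) ⇔ p agree on all four assignments")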

Discussion

Back to the initial problem:

  • Show that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p.\)

We can translate this into logical graphs by supposing that we have to express everything in terms of negation and conjunction, using parentheses for negation and simple concatenation for conjunction. In this way of assigning logical meaning to graphical forms — for historical reasons called the "existential interpretation" of logical graphs — basic logical operations are given the following expressions:

The negation \(\lnot x\) is written \((x).\!\)

This corresponds to the logical graph:

          x
          o
          |
          @

The conjunction \(x \land y\) is written \(x y.\!\)

This corresponds to the logical graph:

         x y
          @

The conjunction \(x \land y \land z\) is written \(x y z.\!\)

This corresponds to the logical graph:

        x y z
          @

And so on.

The disjunction \(x \lor y\) is written \(((x)(y)).\!\)

This corresponds to the logical graph:

        x   y
        o   o
         \ /
          o
          |
          @

The disjunction \(x \lor y \lor z\) is written \(((x)(y)(z)).\!\)

This corresponds to the logical graph:

        x y z
        o o o
         \|/
          o
          |
          @

And so on.

The implication \(x \Rightarrow y\) is written \((x (y)).\!\) Reading the latter as "not \(x\!\) without \(y\!\)" helps to recall its implicational sense.

This corresponds to the logical graph:

        y o
          |
        x o
          |
          @

Thus, the equivalence \(x \Leftrightarrow y\) has to be written somewhat inefficiently as a conjunction of two implications: \((x (y)) (y (x)).\!\)

This corresponds to the logical graph:

      y o   o x
        |   |
      x o   o y
         \ /
          @
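
The dictionary above can be summed up in a few string-building helpers. The following Python sketch is only an illustration of the translation scheme; the function names are chosen here for convenience and are not part of the original notation.

    def neg(x):
        return "(" + x + ")"                 # negation: enclose in parentheses

    def conj(*xs):
        return "".join(xs)                   # conjunction: simple juxtaposition

    def disj(*xs):
        return neg(conj(*(neg(x) for x in xs)))    # x or y = ((x)(y))

    def implies(x, y):
        return neg(conj(x, neg(y)))          # x implies y = (x (y)), "not x without y"

    def iff(x, y):
        return conj(implies(x, y), implies(y, x))  # x iff y = (x (y)) (y (x))

    print(neg(iff("p", "q")))                # prints ((p(q))(q(p)))
    print(iff(neg("q"), "p"))                # prints ((q)(p))(p((q)))

Up to the order of juxtaposed factors, the two printed strings are the two sides of the equation written out below.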

Putting all the pieces together, showing that \(\lnot (p \Leftrightarrow q)\) is equivalent to \((\lnot q) \Leftrightarrow p\) amounts to proving the following equation, expressed in the forms of logical graphs and parse strings, respectively:

      q o   o p           q o
        |   |               |
      p o   o q             o   o p
         \ /                |   |
          o               p o   o--o q
          |                  \ /
          @         =         @

( (p (q)) (q (p)) ) = (p ( (q) )) ((p)(q))

That expresses the proposed equation in the language of logical graphs. To test whether the equation holds we need to use the rest of the formal system that comes with this formal language, namely, a set of axioms taken for granted and a set of inference rules that allow us to derive the consequences of these axioms.
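
The equation can also be checked semantically, independently of the axiomatic machinery just mentioned, by evaluating the two parse strings over all assignments of \(p\) and \(q.\) The little recursive evaluator below is only a sketch of that idea; its parsing strategy and names are mine.

    from itertools import product

    def evaluate(text, env):
        """Evaluate a parenthesized form: juxtaposition is conjunction,
        and a parenthesized group negates the conjunction of its contents."""
        pos = 0

        def parse(stop):
            nonlocal pos
            value = True
            while pos < len(text) and text[pos] != stop:
                ch = text[pos]
                if ch.isspace():
                    pos += 1
                elif ch == "(":
                    pos += 1
                    inner = parse(")")
                    pos += 1                  # consume the closing ")"
                    value = value and not inner
                else:
                    value = value and env[ch]
                    pos += 1
            return value

        return parse(None)

    lhs = "( (p (q)) (q (p)) )"        # ¬(p ⇔ q)
    rhs = "(p ( (q) )) ((p)(q))"       # (¬q) ⇔ p

    for p, q in product([False, True], repeat=2):
        env = {"p": p, "q": q}
        assert evaluate(lhs, env) == evaluate(rhs, env)

    print("the two parse strings agree on all four assignments")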

The formal system that we use for logical graphs has just four axioms. These are presented in the main Logical graph article.

Proceeding from these axioms is a handful of very simple theorems that we tend to use over and over in deriving more complex theorems. A sample of these frequently used theorems is given here:

  • Logical Graph : C1. Double Negation Theorem

  • Logical Graph : C2. Generation Theorem

  • Logical Graph : C3. Dominant Form Theorem

In my experience with a number of different propositional calculi, the logical graph picture is almost always the best way to see why a theorem is true. In the example at hand, most of the work was already done by the time we wrote down the problem in logical graph form. All that remained was to see the application of the generation and double negation theorems to the left and right sides of the equation, respectively.