# Propositional Logic

**Propositional logic** is a branch of mathematical logic concerned with the study of propositions and their truth or falsehood. It is also known as sentential logic, propositional calculus, or sentential calculus.

## Proposition Letters

Proposition letters are the atomic units of propositional logic: they are atomic formulas containing no logical connectives (see below for details on connectives). Each letter refers to some proposition in a natural language (a human language). We list the respective meanings of the propositions in the *symbolization key*.

It is important to note that when a sentence in English is assigned a proposition letter, we lose information about the internal structure of the sentence. This is because, in a formal language, all we are interested in is the form.

By convention, we use capital letters to denote sentence letters (also called atomic sentences):

\[\begin{align} A: &\text{ Charges are quantized.} \\ B: &\text{ Calvin has 2 siblings.} \end{align}\]

## Truth Value

In a bivalent language, the truth value of a proposition is simply **True (1)** or **False (0)**.

Although the notion of a truth value is motivated by the philosophical idea of truth, we make no attempt to define True and False in propositional calculus; they are simply treated as primitive notions.

We say that \(v(\mathscr{P})\) evaluates the proposition \(\mathscr{P}\), i.e. returns its truth value.

Note that in translating to propositional logic, we lose the actual interpretation of the sentence. Hence, we are not concerned with whether the proposition is actually *true* ("actually" is again a primitive notion; it stands for "in reality," or in the "material world"). For example, it is possible to create a model where \(B\) is **True** with the proposition being \(B: \text{ Pink unicorns exist.}\)

Hence, we *assign* truth values to atomic sentences. In other words, for atomic sentences \(v(\mathscr{P}) = a(\mathscr{P})\), where \(a(\mathscr{P}) \) is the value we've assigned to it.

If the proposition is not an atomic sentence, we evaluate it according to the rules of the connectives involved.

Obviously, questions like "How many integers divide 23?" or commands like "Enter your answer as 0" cannot be assigned a truth value. Such things cannot be assigned sentence letters either and are of no interest to propositional logic because they are not propositions.

## Connectives

Connectives are elements of a formal language that combine one or more propositions in a grammatically valid way (following the rules of the formal grammar) to form a new proposition, called a compound proposition, whose sense depends only on the senses of the original propositions. (In contrast with "meaning," the "sense" or "idea" of a proposition is an intuitive notion, so we use the word "sense" here.) Connectives are logical constants: in a given language model, they are symbols whose semantic value is the same under every interpretation. Connectives correspond, to some extent, to conjunctions in natural languages and to logic gates in Boolean algebra and electronics, though whether these correspondences are true analogies can be a point of logical dispute.

**Negation:**

Negation is a unary logical connective. For any proposition \(\mathscr{P}\), the negation of \(\mathscr{P}\), denoted \(\neg \mathscr{P},\) is a proposition asserting that \(\mathscr{P}\) is false. \(\neg \mathscr{P}\) is read as "not \(\mathscr{P}\)."

\[\begin{align} A: &\text{ The moon is made of green cheese.} \\ \neg A: &\text{ The moon is not made of green cheese.} \end{align} \]

\[ v(\neg\mathscr{P}) = \left\{\begin{matrix} 1 &&& \text{if } v(\mathscr{P})= 0\\ 0 &&& \text{otherwise.} \ _\square \end{matrix}\right.\]

**Conjunction:**

Logical conjunction is an associative binary logical connective. A compound sentence formed by connecting two or more propositions with conjunction is true if and only if all of the original propositions are simultaneously true. (Note that "simultaneously" carries the same sense as conjunction in natural language, and that connecting more than two propositions is possible because conjunction is associative.) Conjunction, denoted \(\wedge\), is the equivalent of *and* in English:

\[ v(\mathscr{A} \wedge \mathscr{B}) = \left\{\begin{matrix} 1 && \text{if } v(\mathscr{B})= 1 \text{ and (this "and" is from natural language) } v(\mathscr{A})= 1 \\ 0 && \text{otherwise}. \end{matrix}\right.\]

*23 is odd and prime* can be paraphrased as *23 is odd* and *23 is prime*.

We can symbolize this as \[\begin{align} A: &\text{ 23 is odd.} \\ B: &\text{ 23 is prime.} \\ A \wedge B: &\text{ 23 is odd and 23 is prime.} \end{align} \]

*2 is even and prime* can be paraphrased as *It is false that 2 is odd* (or *2 is not odd*) and *2 is prime*.

This can be symbolized as follows: \[\begin{align} A: &\text{ 2 is odd.} \\ B: &\text{ 2 is prime.} \\ \neg A \wedge B: &\text{ 2 is not odd and 2 is prime.} \end{align} \]

**Disjunction:**

Logical disjunction is an associative binary logical connective. A compound sentence formed by connecting two or more propositions with disjunction is true if and only if at least one of the original propositions is true. ("At least one" means "one or more"; note the use of "or" from natural language implicit in this definition.) Disjunction, denoted \(\vee,\) is thus the equivalent of *or* in English.

In English, *or* can be *inclusive* or *exclusive*. However, since every connective in propositional calculus must be a logical constant (i.e., must have a semantic value invariant under interpretation), the *or* of propositional calculus is always inclusive: there is no requirement that the two propositions it connects cannot both be true at the same time.

\[ v(\mathscr{A} \vee \mathscr{B}) = \left\{\begin{matrix} 0 &&& \text{if } v(\mathscr{B})= 0,v(\mathscr{A})= 0 \\ 1 &&& \text{otherwise.} \ _\square \end{matrix}\right.\]

When you say "I will buy either the red car or the blue car," you mean "I will buy the red car or the blue car but not both."

We can symbolize this as follows:

\[\begin{align} A: &\text{ I will buy the red car.} \\ B: &\text{ I will buy the blue car.} \\ &(A \vee B) \wedge \neg (A \wedge B). \end{align} \]
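The exclusive reading corresponds to inequality of truth values. As a quick illustrative check in Python (booleans standing in for 1/0), \((A \vee B) \wedge \neg (A \wedge B)\) agrees with `!=` on every assignment:

```python
from itertools import product

# Check that exclusive or, built from the basic connectives,
# agrees with boolean inequality on all four assignments.
for a, b in product([True, False], repeat=2):
    xor = (a or b) and not (a and b)   # (A or B) and not (A and B)
    assert xor == (a != b)
print("exclusive-or check passed")
```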

**Material Conditional:**

The material conditional is a special connective. The conditional is the equivalent of *if ..., then ...* in English.

In the expression \(\mathscr{A} \rightarrow \mathscr{B}\), \(\mathscr{A}\) is called the *antecedent* and \(\mathscr{B}\) is the *consequent*.

The statement \(\mathscr{A} \rightarrow \mathscr{B}\) is read as "if \(\mathscr{A},\) then \(\mathscr{B}\)" or "\(\mathscr{A}\) only if \(\mathscr{B}.\)"

A true conditional guarantees that the consequent is not false when the antecedent is true. Because propositional logic is bivalent, we *define* the conditional to be False only when the antecedent is True and the consequent is False.

\[ v(\mathscr{A} \to \mathscr{B}) = \left\{\begin{matrix} 0 &&& \text{if } v(\mathscr{A})= 1, v(\mathscr{B})= 0 \\ 1 &&& \text{otherwise.} \ _\square \end{matrix}\right.\]

This is why the conditional is also called the *material conditional*. Unlike in natural languages, we are not concerned with whether ... *would* be, *if* .... Under the material conditional, a nonsensical statement like *if 3 were prime, India would border China*, or *if a man has fifty hands, he is intelligent*, can be true because the material conditional has nothing to do with causation. Also, according to the definition, *if 3 is composite, then Santa Claus exists* is true. This is a vacuous argument: if the antecedent is False, the conditional is vacuously True, while if the antecedent is True, the truth value of the conditional is the same as that of the consequent.

\[\begin{align} A :&\text{ You cut the red wire.} \\ B: &\text{ The bomb explodes.} \end{align} \]

In the sentence

*the bomb will explode only if you cut the red wire,*

which one is the antecedent and which one is the consequent? Although in reality the bomb explodes *because* I cut the red wire, causation has nothing to do with the conditional. It would not be right to translate the sentence as \(A \to B\), because \(A \to B\) would still permit \(B\) to be true even though \(A\) isn't, and that is exactly what the sentence rules out.

What if the bomb explodes, i.e. \(B\) is true? That would definitely mean that I've cut the red wire, i.e. \(A\) is true.

Hence, we'd better translate the proposition as \(B \to A.\)

**Biconditional:**

A biconditional is equivalent to an *if and only if* in a natural language.

The biconditional checks whether both propositions evaluate to the same truth value. It can also be thought of as \( (\mathscr{A} \to \mathscr{B})\wedge(\mathscr{B} \to \mathscr{A}).\)

\[ v(\mathscr{A} \leftrightarrow \mathscr{B}) = \left\{\begin{matrix} 1 &&& \text{if } v(\mathscr{A})= v(\mathscr{B}) \\ 0 &&& \text{otherwise.} \ _\square \end{matrix}\right.\]
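The five truth functions above can be sketched directly in Python, with `True`/`False` standing in for 1/0 (the function names here are illustrative, not standard):

```python
# Truth functions for the five connectives, mirroring the v(...) definitions above.
def v_not(p):            # negation: true exactly when p is false
    return not p

def v_and(p, q):         # conjunction: true iff both are true
    return p and q

def v_or(p, q):          # (inclusive) disjunction: false iff both are false
    return p or q

def v_cond(p, q):        # material conditional: false only when p is true and q is false
    return (not p) or q

def v_bicond(p, q):      # biconditional: true iff p and q agree
    return p == q

# The biconditional agrees with (A -> B) and (B -> A) on every assignment:
for p in (True, False):
    for q in (True, False):
        assert v_bicond(p, q) == v_and(v_cond(p, q), v_cond(q, p))
```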

Once again, this has nothing to do with causation: the two propositions may be merely correlated, or even causally independent. For example, *Santa Claus exists if and only if 3 is composite*. The biconditional is true when both propositions are true or both are false, and it is false when exactly one of them is true.

\[\begin{align} A: &\text{ The angle is right.} \\ B: &~a^2 + b^2 = c^2. \end{align} \]

The Pythagorean theorem states \(A \to B.\)

The converse says \(B \to A.\) Together, we could claim \(A \leftrightarrow B.\)

## Truth Tables

*See also:* Truth Tables.

A truth table is a listing of all possible assignments of truth values to the atomic propositions involved, together with the resulting truth value of a (usually) larger compound proposition.

For example, here is the truth table for \(A \wedge B:\)

| \(A\) | \(B\) | \(A \wedge B\) |
|---|---|---|
| 1 | 1 | 1 |
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 0 | 0 | 0 |

We'll eventually see that truth tables are pretty useful for defining logical equivalence, contradictions, tautologies and inconsistencies.
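As a sketch of how such a table can be generated mechanically, the snippet below (all names are illustrative) enumerates every assignment for two atomic propositions and tabulates \(A \wedge B\):

```python
from itertools import product

# Enumerate all truth assignments for two atomic propositions
# and build a truth table for A and B (conjunction).
rows = []
for a, b in product([True, False], repeat=2):
    rows.append((a, b, a and b))

# Print the table, one assignment per row.
print(f"{'A':>6} {'B':>6} {'A^B':>6}")
for a, b, c in rows:
    print(f"{a!s:>6} {b!s:>6} {c!s:>6}")
```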

## Propositions

**Expressions**

Any string consisting of, and only of, atomic sentences, connectives, and parentheses is an *expression*. \(_\square\)

**Well Formed Formulae**

Our definition of expressions includes kaboobly doo expressions like \(\to P Q ()\). We will define recursively what kinds of expressions are actually *well formed*.

- Any atomic proposition is a well formed formula.
- If \(\mathscr{A}\) is a well formed formula, then \(\neg \mathscr{A}\) is also a well formed formula.
- If \(\mathscr{A}\) and \(\mathscr{B}\) are well formed formulae, then \((\mathscr{A} \wedge \mathscr{B})\) is also a well formed formula.
- If \(\mathscr{A}\) and \(\mathscr{B}\) are well formed formulae, then \((\mathscr{A} \vee \mathscr{B})\) is also a well formed formula.
- If \(\mathscr{A}\) and \(\mathscr{B}\) are well formed formulae, then \((\mathscr{A} \to \mathscr{B})\) is also a well formed formula.
- If \(\mathscr{A}\) and \(\mathscr{B}\) are well formed formulae, then \((\mathscr{A} \leftrightarrow \mathscr{B})\) is also a well formed formula. \(_\square\)
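The recursive definition above translates directly into a recursive check. Here is a minimal, illustrative sketch in which atomic propositions are strings and compound formulae are nested tuples whose first element names the connective (this encoding is an assumption made for the example, not standard notation):

```python
# A formula is either a string (an atomic proposition) or a tuple:
#   ("not", A), ("and", A, B), ("or", A, B), ("->", A, B), ("<->", A, B)
BINARY = {"and", "or", "->", "<->"}

def well_formed(f):
    """Check well-formedness by recursing on the structure,
    mirroring the clauses of the definition above."""
    if isinstance(f, str):                  # an atomic proposition
        return True
    if isinstance(f, tuple):
        if len(f) == 2 and f[0] == "not":   # negation clause
            return well_formed(f[1])
        if len(f) == 3 and f[0] in BINARY:  # binary-connective clauses
            return well_formed(f[1]) and well_formed(f[2])
    return False
```

For instance, `well_formed(("->", "P", ("not", "Q")))` holds, while a truncated expression like `("->", "P")` is rejected.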

**Propositions**

In propositional logic, it suffices to define a proposition as a well formed formula.

**Notational Conventions**

Notational conventions are just syntactic sugar that helps us make the expression more readable.

The following are some popular notational conventions adopted:

- The parentheses around the whole proposition may be dropped.
- Parentheses joining expressions involving three or more associative connectives may be dropped.
- Use of square brackets instead of parentheses makes the expression more readable.

## Tautologies and Contradictions

**Tautology**

A tautology is something that is true just as a matter of logic. An example would be *it is either raining or not raining*.

Formally, we say that a proposition is a *tautology* if it is true for all possible truth assignments of the atomic propositions involved. \(_\square\)

A non-trivial example of a tautology is

\[ ((A \land B) \to C) \leftrightarrow (A \to (B \to C)).\]

We can construct a truth table to verify this:

| \(A\) | \(B\) | \(C\) | \(A \land B\) | \((A \land B) \to C\) | \(B \to C\) | \(A \to (B \to C)\) | \(\leftrightarrow\) |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 |
| 1 | 0 | 1 | 0 | 1 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 |
| 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 |
| 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
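Rather than checking the eight rows by hand, the equivalence can also be verified by brute force. This is a minimal sketch; the helper `implies` is an illustrative name:

```python
from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

# ((A and B) -> C)  <->  (A -> (B -> C)) should hold under all 8 assignments.
tautology = all(
    implies(a and b, c) == implies(a, implies(b, c))
    for a, b, c in product([True, False], repeat=3)
)
print(tautology)  # True: the biconditional holds in every row
```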

**Contradiction**

A contradiction is a statement that is false just as a matter of logic. An example would be *it is raining and not raining*.

Formally, we say that a proposition is a *contradiction* if it is false for all possible truth assignments of the atomic propositions involved. \(_\square\)

The negation of a tautology is always a contradiction, and vice versa.

If you scroll a little, you will convince yourself that the following is a contradiction:

\[\neg (((A \land B) \to C) \leftrightarrow (A \to (B \to C))). \]

**Contingency**

A proposition is *contingent* if and only if it is neither a contradiction nor a tautology. \(_\square\)

## Logical Equivalence

Two propositions \(\mathscr{A}\) and \(\mathscr{B}\) are *equivalent* if and only if \(\mathscr{A} \leftrightarrow \mathscr{B}\) is a tautology. \(_\square\)

While the biconditional tests whether the two propositions have equal truth values under a particular assignment of truth values, equivalence tests this for *all* possible truth value assignments.

Here is a list of several famous logical identities:

- Double negation: \(\neg\neg A \leftrightarrow A\)
- De Morgan's laws: \(\neg(A \wedge B) \leftrightarrow (\neg A \vee \neg B)\) and \(\neg(A \vee B) \leftrightarrow (\neg A \wedge \neg B)\)
- Contraposition: \((A \to B) \leftrightarrow (\neg B \to \neg A)\)
- Material implication: \((A \to B) \leftrightarrow (\neg A \vee B)\)

Can you verify that each of these is a logical equivalence?

Although we denote an equivalence with \(\leftrightarrow\) here, it is also quite common to denote equivalence with \(\equiv.\) In proofs, the symbol \(\leftrightarrow\) is used as a *biconditional*, also known as "if and only if."

## Consistency

A set of propositions is *inconsistent* if its members cannot all be true simultaneously. Otherwise, it is *consistent*.

A set of propositions \(\phi = \left \{\mathscr{A}_1, \mathscr{A}_2, \ldots, \mathscr{A}_n \right \}\) is *inconsistent* if and only if \(\mathscr{A}_1 \wedge \mathscr{A}_2 \wedge \cdots \wedge \mathscr{A}_n\) is a contradiction. Otherwise, it is *consistent*. \(_\square\)

An inconsistent set would be

\[ \phi = \left \{A, A \to B, \neg B, C \right \}. \]

Note that the proposition \(C\) has nothing to do with the inconsistency itself. However, it is a part of an inconsistent set anyway.
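We can confirm the inconsistency by checking every truth assignment. This is an illustrative brute-force sketch (the helper `implies` is just a name chosen for the example):

```python
from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

# The set {A, A -> B, not B, C}: look for any assignment making all four true.
satisfiable = any(
    a and implies(a, b) and (not b) and c
    for a, b, c in product([True, False], repeat=3)
)
print(satisfiable)  # False: no assignment works, so the set is inconsistent
```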

## Arguments

An argument is a tuple consisting of some *premises* and a *conclusion*.

Usually, it is written in the following form:

\[\mathscr{P}_1,\mathscr{P}_2, \cdots, \mathscr{P}_n \, \therefore \mathscr{C} .\]

Here, \( \left \{ \mathscr{P}_1,\mathscr{P}_2, \ldots, \mathscr{P}_n \right \} \) is the set of premises, and \(\mathscr{C}\) is the conclusion.

Here is one of the simplest arguments:

*It is raining. If it rains, I will not go to school. Therefore, I will not go to school.* We symbolize this argument as follows:

\[\begin{align} A: &\text{ It is raining.} \\ B: &\text{ I will not go to school.} \\ \\ &\frac{A \to B,\; A}{\therefore B}. \end{align}\]

This argument form is known as *modus ponens*, often shortened to *MP*. So, with the statements \(A \to B\) and \(A\), you can prove \(B\) with *modus ponens*.

## Validity

So far, we have done nothing to distinguish between a good argument and a terrible one.

There are two ways in which an argument can go wrong:

- One or more of the premises are false.
- The logical structure of the argument is wrong.

Since, in propositional logic, we cannot do anything to determine whether the premises are actually true, we define the validity of an argument in terms of the second idea.

\( \left \{ \mathscr{P}_1, \mathscr{P}_2, \ldots, \mathscr{P}_n \right \} \) is said to *entail* \(\mathscr{C}\) if and only if there is no truth assignment for which \( \mathscr{P}_1, \mathscr{P}_2, \ldots, \mathscr{P}_n \) are all true and \(\mathscr{C}\) is false. \(_\square\)

We denote entailment as follows:

\[ \left \{ \mathscr{P}_1, \mathscr{P}_2, \cdots, \mathscr{P}_n \right \} \models \mathscr{C}. \]

Here is another way to define entailment:

\(\left \{ \mathscr{P}_1, \mathscr{P}_2, \ldots, \mathscr{P}_n \right \} \models \mathscr{C}\) if and only if \( \left ( \mathscr{P}_1 \wedge \mathscr{P}_2 \wedge \cdots \wedge \mathscr{P}_n \right ) \to \mathscr{C} \) is a tautology. \(_\square\)

By now, it should be obvious to the reader that an argument is *valid* if and only if the premise entails the conclusion.
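The second definition of entailment gives a direct brute-force validity test. The sketch below (function and variable names are illustrative) checks that \(\{A \to B,\, A\} \models B\), i.e. that modus ponens is valid:

```python
from itertools import product

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

def entails(premises, conclusion, n_atoms):
    """Premises entail the conclusion iff no truth assignment makes
    every premise true while the conclusion is false."""
    for assignment in product([True, False], repeat=n_atoms):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False
    return True

# Modus ponens: {A -> B, A} |= B  (atoms: A, B)
premises = [lambda a, b: implies(a, b), lambda a, b: a]
conclusion = lambda a, b: b
print(entails(premises, conclusion, 2))  # True: the argument is valid
```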

## Proofs


This section is in development.

In proofs, you use given hypotheses to prove another given statement. You first introduce the hypotheses into the proof, assuming they are true, and then take further steps to derive the desired statement. Some proofs are short (such as three-step ones), while others are very complex (some may even reach 30 steps!). Proofs in propositional and predicate logic are similar to proofs in geometry.

A proof involving modus ponens can go like this:

1. \(A \to B\) (premise)
2. \(A\) (premise)
3. \(B\) (modus ponens, from 1 and 2) \(_\square\)

## Limitations of Propositional Logic

Propositional logic is only one of the many formal languages. It is possible that the structure of an argument is lost in converting it from English to propositional logic.

One of the most important other logics is quantified logic.

Consider the following famous argument:

- All men are mortal.
- Aristotle is a man.
- Therefore, Aristotle is mortal.

Let us try to symbolize this in propositional logic:

\[\begin{align} A: &\text{ All men are mortal.} \\ B: &\text{ Aristotle is a man.} \\ C: &\text{ Aristotle is mortal.} \end{align}\]

The argument becomes

\[\frac{A, \; B}{\therefore C}. \]

This argument is completely invalid in propositional logic, since \(A\), \(B,\) and \(C\) bear no relation to one another.

Note that the problem isn't with the symbolization of the argument. In fact, this is the best symbolization propositional logic can offer for these statements.

## Applications

You are shown a set of four cards placed on a table, each of which has a number on one side and a colored patch on the other side.

The visible faces of the cards show 3, 8, red and brown.

Which card(s) must you turn over in order to test the truth of the proposition that if a card shows an even number on one face, then its opposite face is red?

**Cite as:** Propositional Logic. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/propositional-logic/