# Mathematical Logic and Computability

First of all, I want to apologize for my English and for the names I am going to use in this wiki, because many of them probably have different names in books (Guillermo Templado). The purpose of this wiki is to study ordinary or natural logic using mathematical methods, that is, a "mathematical logic." This wiki has three chapters plus a Chapter 0:

- Chapter 0: Preliminaries (9 sections)
- Chapter 1: Logic or Calculus of Propositional Logic (6 sections)
- Chapter 2: First-order Logic (11 sections)
- Chapter 3: Computability (10 sections)

This wiki is a summary of the book by J. Sancho San Román, *Lógica matemática y computabilidad*. In Chapter 3, we will discuss Turing machines, Church's thesis, and Kurt Gödel's incompleteness theorem (using lambda calculus), and of course, we'll prove them. It is important to note that we will assume the axiom of choice, which is equivalent to Zorn's lemma (Chapter 0.5); nevertheless, logic can also be developed in set theory without assuming this axiom. Knowledge of a first course in abstract algebra or group theory is not necessary, but it would be very useful.

**Note.-** This wiki will be continued on another page when the first chapter is finished.

#### Contents

- Preface
- Introduction
- Chapter 0: Preliminaries
- \(\ \ \)1. Sets
- \(\ \ \)2. Propositional Symbols and Quantifiers
- \(\ \ \)3. Subsets: Intersection and Union
- \(\ \ \)4. Product Set
- \(\ \ \)5. Relations and Order
- \(\ \ \)6. Applications, Functions, and Operations
- \(\ \ \)7. Induction Principles in \(\mathbb{N}\)
- \(\ \ \)8. Countable Sets
- \(\ \ \)9. Parentheses Matching
- Chapter 1: Logic or Calculus of Propositional Logic
- \(\ \ \)1. Languages of the Propositional Calculus
- \(\ \ \)2. Induction and Recursion
- \(\ \ \)3. Truth Valuations
- \(\ \ \)4. Compactness
- References

## Preface

Mathematical logic is a fundamental instrument in the construction of computers and the design of programming languages. This wiki is written with people in mind who want to know the precise concepts and fundamental rules of logic, and the reasons behind these rules. In particular, these concepts and rules will be of interest to everyone working in logic programming, such as with PROLOG or LISP, or on expert systems in artificial intelligence.

To **understand this wiki**, you will need a lot of time: review the chapters over and over, memorize constantly, and use a lot of ordinary logic.

## Introduction

Mathematical logic or symbolic logic is a science whose primary task is the study of ordinary logic or natural logic with mathematical methods.
Ordinary or natural logic is basically composed of an **ordinary language** and sequences of sentences of that language called **deductions**. Language and deductions constitute the fundamental instrument of the human mind, with which it has built all the sciences known today.

The study of anything from real life is done by creating a model of that thing. For this, some elements of the thing are taken, and some properties are chosen. Each element is represented by an ideal entity that resembles it, and the model is defined by these ideal entities that are the elements of the model, and by the chosen properties.

Can ordinary logic be studied with mathematical methods? To answer this question, I will transcribe a few paragraphs of Łukasiewicz:

"Philosophical logic includes epistemological problems: What is the truth? Is there any criterion of truth? However, these issues do not belong to logic... If we remove from philosophical logic everything that belongs to epistemology, psychology and philosophy in general, what we have is the so-called **formal logic**, whose content is properly logical... There are no two logics, mathematical and philosophical, there is a single logic, founded by Aristotle, completed by the old school of the Stoics and continued, often with great subtlety, by medieval logicians, and this is the logic that develops mathematical logic."

## Chapter 0: Preliminaries

There are 9 sections in this chapter, and some theorems and examples will appear without proof. This doesn't mean you shouldn't prove them: you should prove everything, for a total knowledge and understanding of this wiki. This chapter contains many "known" concepts from a first course in abstract algebra, plus the notions of cardinality and countable sets.

## \(\ \ \)1. Sets

Any explanation of the meaning of the word "set" would use more complicated concepts than the one we are trying to define; however, we have a more or less clear notion, or sufficient coincidence, about the meaning that each person gives to this word in question. For us, a set is a well-defined collection of different objects (elements).

*Relevant link: Set*

- \(a \in A\) means \(a\) is an element of \(A\), i.e. \(a\) belongs to \(A\).
- \(a \notin A\) means \(a\) isn't an element of \(A\), i.e. \(a\) doesn't belong to \(A\).
- \(a = b\) means that \(a\) and \(b\) are the same thing.

For us,

- \(\mathbb{N} = \{0, 1, 2, 3, \ldots \}\)
- \(\mathbb{N}^{+} = \mathbb{Z}^{+} = \{1, 2, 3, \ldots \}\)
- \(\mathbb{Z} =\) the set of integer numbers
- \(\mathbb{Q} =\) the set of rational numbers
- \(\mathbb{R} =\) the set of real numbers.

## \(\ \ \)2. Propositional Symbols and Quantifiers

A "proposition" is the expression that an event occurs. If the event really happens, the proposition is said to be true; if it does not, it is said to be false. If whenever a proposition \(J\) is true, another \(K\) is also true, then it is said that \(J\) implies \(K.\) In this case, \(J\) is a sufficient condition for \(K\), and \(K\) is a necessary condition for \(J:\)

\[J \implies K. \quad (\text{logical implication})\]

When not only \(J\) implies \(K\), but also \(K\) implies \(J\), it is said that propositions \(J\) and \(K\) are logically equivalent. Then the event of \(J\) occurs if and only if the event of \(K\) occurs:

\[ J \iff K. \quad (\text{logical equivalence})\]

The propositions "J = Today is Tuesday" and "K = Tomorrow is Wednesday" are logically equivalent.

There exist more propositional symbols. We'll see them later. There exist 2 symbols, \(\exists\) and \(\forall\), called **quantifiers** that are used with the following meanings:

"\(\exists a \in A\, /\, a \text{ satisfies } P\)" means there exists an element \(a\) belonging to the set \(A\) such that \(a\) satisfies the property \(P\).

"\(\forall a \in A,\, a \text{ satisfies }P\)" means every (each) element \(a\) belonging to the set \(A\) satisfies the property \(P\).

## \(\ \ \)3. Subsets: Intersection and Union

Set \(A\) is a subset of set \(E\) if each element of \(A\) belongs to \(E\). It's written \(A \subseteq E\).

According to this, \(E \subseteq E\), but in this case the inclusion is not strict. When \(A \subseteq E\) but \(E\) contains an element \(a\) such that \(a \notin A\), i.e. \(A \neq E\), the inclusion is strict and \(A\) is a proper subset of \(E\). It's written \(A \subset E,\space A \neq E\). Note that

\[(A \subseteq E \text{ and } E \subseteq A) \iff A = E.\]

For the following, you need to read Subsets, Power Set, Unions, Intersections and Complements.

A **partition** of a set \(E \neq \emptyset\) is a collection \(F\) of subsets of \(E\), none empty, such that each element of \(E\) belongs to one and only one of the members of \(F\). Each element of \(F\) is a **class of the partition** of \(E\); i.e., \(E \neq \emptyset\) is the disjoint union of non-empty subsets of \(E\), where each of these non-empty subsets is an element of \(F\).

Equivalence relation \(\iff\) partition: we'll see this in Section 5 of this chapter.
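To make the definition concrete, here is a minimal Python sketch (the function name `is_partition` is mine, not from the book) that checks whether a collection of subsets is a partition of a finite set \(E\):

```python
def is_partition(E, F):
    """Check that F is a partition of the non-empty set E: every
    member of F is a non-empty subset of E, and each element of E
    belongs to one and only one member of F."""
    E = set(E)
    blocks = [set(b) for b in F]
    if not E or any(not b or not b <= E for b in blocks):
        return False
    # count, for each element of E, how many classes contain it
    return all(sum(x in b for b in blocks) == 1 for x in E)

print(is_partition({1, 2, 3, 4}, [{1, 2}, {3}, {4}]))   # True
print(is_partition({1, 2, 3, 4}, [{1, 2}, {2, 3, 4}]))  # False: 2 lies in two classes
```

The single membership count captures both requirements at once: coverage (at least one class) and disjointness (at most one class).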

## \(\ \ \)4. Product Set

Let \(E\) and \(G\) be two sets. Then the **product set** of these two sets is

\[E \times G = \{(a,b) \text{ / } a \in E \text{ and } b \in G\}.\]

Note that, in general, \((a,b) \neq (b,a)\), i.e. the order in these ordered pairs is important.

In the same way, we define the product set of \(n\) sets \((n \in \mathbb{Z}^{+}, n > 2)\) as

\[ E_1 \times E_2 \times \ldots \times E_n = \{(a_1, a_2, \ldots, a_n)\text{ / } a_1 \in E_1, a_2 \in E_2, \ldots, a_n \in E_n\}.\]

**Notation:** If \(E_i = E, \space \forall i =1, ..., n\), the product set is \(\underbrace{E \times E \times \cdots \times E}_\text{ n } = E^{n}.\)
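In Python, the product set is mirrored by `itertools.product`; a quick sketch of both \(E \times G\) and \(E^n\):

```python
from itertools import product

E = {0, 1}
G = {'a', 'b'}

# E x G: all ordered pairs (x, y) with x in E and y in G
print(sorted(product(E, G)))            # [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]

# E^3: the product set with every factor equal to E
print(len(list(product(E, repeat=3))))  # 8, i.e. |E|^3
```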

## \(\ \ \)5. Relations and Order

Let \(E\) be a set. Then an **\(n\)-ary relation** in \(E\) is a subset of \(E^n.\)

The case \(n =2\), a **binary relation**, is especially important.

Let \(R \subseteq E^2\) be a binary relation in \(E\), then if \((a,b) \in R\), we'll write \(aRb\).

Let \(E = \mathbb{N}\). Then \(B =\{(a,b) \in \mathbb{N}^2 \text{ / } a < b\}\) is a binary relation in \(\mathbb{N}\). With this relation, \(1B4\) holds; nevertheless, \(5B4\) is not true, i.e. \((5,4) \notin B.\)

- A binary relation \(R\) in \(E\) is said to be a **reflexive relation** if each element \(a \in E\) satisfies \(aRa\).
- It is said that \(R\) is a **symmetric relation** when \(aRb \implies bRa\).
- It is said that \(R\) is a **transitive relation** when \(aRb \text{ and } bRc \implies aRc\).
- It is said that \(R\) is an **anti-symmetric relation** when \(aRb \text{ and } bRa \implies a = b\).

A binary relation \(R\) in \(E \neq \emptyset\) is said to be an **equivalence relation** if it is reflexive, symmetric, and transitive. Given an equivalence relation \(R\) in \(E \neq \emptyset\) and an element \(a\) in \(E\), \([a] = \{b \in E \text{ / } bRa\}\) is called the **equivalence class** of \(a.\)

Note that \(a \in [a]\). The set of equivalence classes is denoted by \(\frac{E}{R}\) and is called the **quotient set** made by the equivalence classes of \(R.\) If there is no doubt about the equivalence relation of which one is speaking, the equivalence class \([a]\) of \(a\) is sometimes denoted simply by \(\bar{a}\).

Let \(\mathbb{Z}\) be the set of integer numbers and \(n \in \mathbb{Z}\) a fixed integer. Then \(R = \{(a, b) \in \mathbb{Z}^2 \text{ / } \exists m \in \mathbb{Z} \text{ fulfilling } n\cdot m = a - b\}\) is an equivalence relation in \(\mathbb{Z}\). In this case,

\[ \displaystyle \frac{E}{R}= \frac{\mathbb{Z}}{R} = \frac{\mathbb{Z}}{n\mathbb{Z}} = \mathbb{Z}_n = \big\{ [0], [1], ..., [n - 1]\big\}. \]

See Quotient Groups: \[ \mathbb{Z}_0 = \mathbb{Z}, \quad \mathbb{Z}_1 = \big\{[0]\big\}.\]
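The quotient set \(\mathbb{Z}_n\) can be computed directly. The following Python sketch (the function name is mine) groups elements into equivalence classes, assuming the given relation really is an equivalence relation:

```python
def equivalence_classes(E, related):
    """Partition the finite iterable E into equivalence classes,
    where related(a, b) decides whether a R b holds."""
    classes = []
    for a in E:
        for cls in classes:
            if related(a, next(iter(cls))):  # compare with a representative
                cls.add(a)
                break
        else:
            classes.append({a})
    return classes

# a R b  iff  n*m = a - b for some integer m, i.e. n divides a - b
n = 3
classes = equivalence_classes(range(-6, 7), lambda a, b: (a - b) % n == 0)
print(len(classes))  # 3, matching Z_3 = {[0], [1], [2]}
```

Comparing each element with a single representative of a class suffices precisely because the relation is symmetric and transitive.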

Every equivalence relation in a set \(E \neq \emptyset\) gives rise to a partition of the set \(E\), and conversely, every partition of a set \(E \neq \emptyset\) gives rise to an equivalence relation in the set \(E\).

A partition of a set \(E \neq \emptyset\) gives rise to an equivalence relation in \(E\), where each class of the partition of \(E\) is an equivalence class. Conversely, an equivalence relation in a set \(E \neq \emptyset\) gives rise to a partition of \(E\): each element \(a \in E\) satisfies \(a \in [a]\), and given two equivalence classes \([a], [b]\) in \(E\), we have two possibilities:

\([a] \cap [b] = \emptyset,\) or

\([a] \cap [b] \neq \emptyset \implies \exists c \in [a]\) and \(c \in [b]\).

\(\big(c \in [a] \implies cRa \underbrace{\implies}_\text{symmetry} aRc\big)\) and \(\big(c \in [b] \implies cRb\big) \implies\) \(\big(aRc \text{ and } cRb\big) \underbrace{\implies}_\text{transitivity} aRb \implies a \in [b] \implies [a] \subseteq [b]\).

In the same way, \(b \in [a] \implies [b] \subseteq [a]\). Therefore, \([a] = [b]\). \(_\square\)

A binary relation \(R\) in \(E \neq \emptyset\) is said to be a **partial order relation** if it is reflexive, anti-symmetric, and transitive. In this case, \(E\) is said to be **partially ordered** by \(R\).

A subset \(C\) of a partially ordered set \(E\) is a **chain** if every two elements of \(C\) satisfy \(aRb\) or \(bRa\), that is, any two elements of \(C\) are **comparable**; in this case, \(C\) is **totally ordered** by \(R\).

An element \(a \in E\) (a partially ordered set) is said to be **maximal** if for each element \(x \in E\) it satisfies this: \(aRx \implies x = a.\)

See Partially Ordered Sets and Fundamentals in Combinatorial Sets.

The power set \(\mathbb{P}(E)\) of a set \(E \neq \emptyset\) is a partially ordered set with the binary relation \(\subseteq\), i.e. \(I = \{(A, B) \text { / } A \subseteq E, B \subseteq E \text{ and } A \subseteq B\}\) is a partial order relation in \(\mathbb{P}(E).\)

**Zorn's Lemma.** Let \( X\) be a partially ordered set with the property that every chain in \(X\) has an upper bound in \( X.\) Then \( X\) contains a maximal element.

See the axiom of choice. This lemma can't be proved from the other axioms of set theory; accepting it is a choice made by us. You can try to prove the equivalence between the axiom of choice and Zorn's lemma, which needs some "advanced machinery" or, in other words, "deep logical thought."

A totally ordered set is said to be **well-ordered** if each and every non-empty subset has a smallest or least element.

The set \(\mathbb{N}\) of natural numbers is well-ordered with the binary relation of order \(\leq.\)

## \(\ \ \)6. Applications, Functions, and Operations

For the definition of **application** = **function**, you only need to review this wiki.

Given two sets \( E \) and \( B \), an **\(n\)-ary function** is a function from \(E^n\) to \(B\), i.e. a function \(f : E^n \to B\). An **internal \(n\)-ary operation** in \(E\) is an \(n\)-ary function from \(E^n\) to \(E\), that is, a function \(f : E^n \to E\).

Sum and multiplication are internal binary operations in \(\mathbb{N}\). It's also said that \(\mathbb{N}\) is **closed** or **stable** for these internal binary operations. Subtraction is not an internal binary operation in \(\mathbb{N}\).

The **restriction of a function \(f\)** is a new function obtained by choosing a smaller domain \(A\) for the original function \(f.\) The notation \(\displaystyle f_{| \small \text{A}}\) is also used.

An **algebraic structure** is a set \(E\) provided with some relations and (internal or external) operations.

## \(\ \ \)7. Induction Principles in \(\mathbb{N}\)

See the beginnings of the following wiki pages: Induction and Strong Induction. This section is **very important**, so you should really read them.

## \(\ \ \)8. Countable Sets

A set \(F\) is said to be **finite** if there exists a bijective application (function) from \(F\) to the set \(\{1, 2, \ldots , n\}\) with \(n \in \mathbb{Z}^{+}\), or \(F = \emptyset\).

A set \(A\) is a **countable set** if \(A\) is finite, or there exists a bijective function from \(A\) to \(\mathbb{N}.\)

Every subset of a countable set is a countable set.

A countable union of countable sets is a countable set.

A finite product of countable sets is a countable set.

- \(\mathbb{Z}\) is a countable set.
- \(\mathbb{Z}^{+}\) is a countable set.
- \(\mathbb{Q}\) is a countable set, since it can be identified with a subset of \(\mathbb{Z} \times \mathbb{Z}^{+}\).
- \(\mathbb{Q}^{5}\) is a countable set.
- \(\mathbb{R}\) isn't a countable set, i.e. \(\mathbb{R}\) is an **uncountable** set.
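As a concrete witness that \(\mathbb{Z}\) is countable, here is an explicit bijection \(\mathbb{N} \to \mathbb{Z}\) in Python (a standard zig-zag enumeration; the function name is mine):

```python
def nat_to_int(k):
    """Bijection from N = {0, 1, 2, ...} onto Z:
    0 -> 0, 1 -> 1, 2 -> -1, 3 -> 2, 4 -> -2, ..."""
    return (k + 1) // 2 if k % 2 == 1 else -(k // 2)

print([nat_to_int(k) for k in range(7)])        # [0, 1, -1, 2, -2, 3, -3]
# the first 7 values hit every integer in [-3, 3] exactly once:
print(sorted(nat_to_int(k) for k in range(7)))  # [-3, -2, -1, 0, 1, 2, 3]
```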

## \(\ \ \)9. Parentheses Matching

Two symbols frequently used in any written language are the left parenthesis (bracket) "(" and the right parenthesis (bracket) ")". In an expression that contains parentheses, they are paired, one left with one right. But there is something else: each left parenthesis uniquely determines its right partner, and vice versa.

Let \(p_1, p_2, \ldots , p_r\) be a sequence of \(r\) parentheses. A **proper pairing** or **parentheses matching** of this sequence is a bijective application \(f\) from the set of left parentheses to that of right parentheses which satisfies the following:

- If \(f(p_i) = p_j,\) then \(i < j\).
- Given two distinct pairs \( \big\langle p_i , f(p_i) = p_j \big\rangle \) and \(\big\langle p_h, f(p_h) = p_k \big\rangle\), one of two things occurs: either one pair is inside the other \((\)for example, \(i < h < k < j)\), or they are exterior to each other \((\)for example, \(i < j < h < k)\); i.e. in no case is there an overlapping.

If a sequence of \((2n + 2)\) parentheses supports a proper-pairing, it is unique.

**Proof.-** By induction on \(n.\)

If \(n = 0\), the theorem is obvious because there are only two parentheses.

Assume that the theorem is true for \(n \in \mathbb{N}\), and let us see that it is true for \(n + 1\). Let \(p_1, \ldots , p_{2n + 4}\) be a sequence of parentheses that supports a proper pairing \(f\), and let \(p_i\) be the leftmost right parenthesis. Since there is a proper pairing (parentheses matching), \(i > 1\), and in addition \(p_i\) must have \(p_{i -1}\) as its pair, or else there would be two overlapping pairs. Suppressing the parentheses \(p_{i-1}\) and \(p_i\), there remains a sequence of \(2n + 2\) parentheses that supports a proper pairing, which by the induction hypothesis is unique. Therefore, the proper pairing (parentheses matching) \(f\) of the \(2n + 4\) parentheses is also unique. \(_\square\)
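The proper pairing, when it exists, can be computed with a stack. This Python sketch (names are mine) returns the unique pairing guaranteed by the theorem, or `None` when no proper pairing exists:

```python
def proper_pairing(seq):
    """Return {i: j}: the left parenthesis at position i pairs with
    the right parenthesis at position j (positions are 1-based).
    Return None if the sequence admits no proper pairing."""
    stack, pairing = [], {}
    for pos, p in enumerate(seq, start=1):
        if p == '(':
            stack.append(pos)
        else:  # p == ')'
            if not stack:
                return None      # right parenthesis with no partner
            pairing[stack.pop()] = pos
    return pairing if not stack else None  # leftover left parentheses?

print(proper_pairing('(()())'))  # {2: 3, 4: 5, 1: 6}
print(proper_pairing('(()'))     # None
```

Pairing each right parenthesis with the most recent unmatched left parenthesis automatically rules out overlapping pairs, which is exactly the non-overlap condition in the definition above.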

## Chapter 1: Logic or Calculus of Propositional Logic

This chapter has 6 sections. We'll build a language and truth tables, and use induction and Chapter 0. Efforts will be made to provide exercises and solutions for each section of this chapter once its theory is finished.

Let \(A\) represent the phrase "My brother plays guitar," and \(B\) the phrase "Christian works a lot."

- My brother doesn't play guitar: \(\neg A\).
- My brother plays guitar and Christian works a lot: \(A \wedge B\).
- My brother plays guitar or Christian works a lot: \(A \vee B\).
- If my brother plays guitar, then Christian works a lot: \(A \implies B\).
- My brother plays guitar if and only if Christian works a lot: \(A \iff B\).
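These five connectives can be modeled as truth functions. A small Python sketch (the helper names `implies` and `iff` are mine) showing their values on all combinations of \(A\) and \(B\):

```python
def implies(a, b):
    """A => B is false only when A is true and B is false."""
    return (not a) or b

def iff(a, b):
    """A <=> B is true exactly when A and B have the same truth value."""
    return a == b

# one row per combination of truth values of A and B
for A in (True, False):
    for B in (True, False):
        print(A, B, '|', not A, A and B, A or B, implies(A, B), iff(A, B))
```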

## \(\ \ \)1. Languages of the Propositional Calculus

A (formal) **language** is a set of symbols (alphabet) plus a set of finite sequences of these symbols, called **words or phrases** and also **well-formed formulas**. The definition and properties of these "words" are the object of the **syntax** (rules for writing words) of the language.

- Binary language
- Alphabet: \(\{0, 1\}\)
- Words: sequences of the type \(01101010111\), or sequences of these

The most usual language of propositional logic has an alphabet composed of:

- Parentheses: \((\),\()\)
- Connectives: \(\neg, \wedge, \vee, \Rightarrow, \iff\)
- Assertion symbols: a subset of \(\{A, B, ..., A_1, A_2, ...\}\)

**Definition 1.1.1.** A **language expression** (or simply **expression**) is a finite sequence of \(n\) alphabet symbols, with \(n\) not null (zero). It's written \(s_1 ... s_n\) in a row.

Two expressions cannot be equal unless they are identical, that is,

\[s_1 ... s_n = t_1 ... t_m \implies n = m, s_1 = t_1, s_2 = t_2, ... , s_n = t_n.\]

To say that the most usual language of propositional logic is the previous one is not an absolute statement, only indicative. Moreover, with the notation of Łukasiewicz, the parentheses are not necessary.

In the set of language expressions we define 5 internal operations (1 unary and 4 binary) as follows, where \(\alpha\) and \(\beta\) denote expressions:

\[\begin{align} E_{\neg} : \alpha &\to \neg \alpha\\ E_{\wedge} : (\alpha, \beta) &\to (\alpha \wedge \beta)\\ E_{\vee} : (\alpha, \beta) &\to (\alpha \vee \beta)\\ E_{\Rightarrow} : (\alpha, \beta) &\to (\alpha \Rightarrow \beta)\\ E_{\iff} : (\alpha, \beta) &\to (\alpha \iff \beta). \end{align}\]

**Definition 1.1.2.** A set of expressions \(I\) is **inductive** if it satisfies:

A) each assertion symbol is in \(I;\)

B) if the expressions \(\alpha\) and \(\beta\) are in \(I\), then \(\neg \alpha, (\alpha \wedge \beta), (\alpha \vee \beta), (\alpha \Rightarrow \beta), (\alpha \iff \beta)\) are also in \(I\).

A **well-formed formula** (wff) is an expression that belongs to every inductive set, that is,

\[W = \{\text{wff}\} = \cap \{I \space \text{/ } I \text{ is inductive}\} = \text{minimum inductive set}.\]

**Definition 1.1.3.** We will call a **construction succession** a finite sequence of expressions \(\langle \alpha_1, ... , \alpha_n \rangle\) in which each term \(\alpha_i\) fulfills one of these three conditions:

1) \(\alpha_i \) is an assertion symbol;

2) \(\alpha_i = \neg \alpha_j\) with \(j < i;\)

3) \(\alpha_i = (\alpha_j \text{ & } \alpha_k),\) where \(\text{&}\) is one of the connectives \(\wedge, \vee, \Rightarrow, \iff\), and \( j < i, k < i.\)

Now, we call a **well-formed formula** each expression that is the last term of a construction succession.
Definitions 1.1.2 and 1.1.3 are equivalent, that is, they give rise to the same set \(W\) of well-formed formulas. This will be proved in Section 2 of this chapter.

Example of a construction succession:

\[\left\langle \space A, \neg A, B, (A \vee B), \big((A \vee B) \Rightarrow \neg A\big), \Big((A \vee B) \iff \big((A \vee B) \Rightarrow \neg A\big)\Big)\space \right\rangle\]

By Definition 1.1.3, it is said that the set \(W\) is generated by the set of assertion symbols through the operations \(E_{\neg}, E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}.\)
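Definition 1.1.3 is directly executable. The following Python sketch (representing expressions as strings; all names are mine) implements the five constructive operations and checks the example construction succession above:

```python
# Expressions are plain strings; these are the five constructive operations.
E_not = lambda a: '¬' + a

def make_binary(conn):
    return lambda a, b: '(' + a + ' ' + conn + ' ' + b + ')'

E_and, E_or, E_imp, E_iff = map(make_binary, ['∧', '∨', '⇒', '⟺'])

def is_construction_succession(terms, assertion_symbols):
    """Definition 1.1.3: every term is an assertion symbol, the negation
    of an earlier term, or built from two earlier terms by a binary
    constructive operation."""
    for i, t in enumerate(terms):
        earlier = terms[:i]
        ok = (t in assertion_symbols
              or any(t == E_not(a) for a in earlier)
              or any(t == op(a, b)
                     for op in (E_and, E_or, E_imp, E_iff)
                     for a in earlier for b in earlier))
        if not ok:
            return False
    return True

# The example succession from the text:
succession = ['A']
succession.append(E_not('A'))                           # ¬A
succession.append('B')
succession.append(E_or('A', 'B'))                       # (A ∨ B)
succession.append(E_imp(succession[3], succession[1]))  # ((A ∨ B) ⇒ ¬A)
succession.append(E_iff(succession[3], succession[4]))  # the final wff
print(is_construction_succession(succession, {'A', 'B'}))  # True
```

A compound expression that appears with no earlier terms to build it from fails the check, e.g. `['(A ∨ B)']` alone is not a construction succession.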

**1.1.4. Induction principle for well-formed formulas (wffs):** Let \(C\) be an inductive set of well-formed formulas. Then \(C\) is the set \(W\) of all well-formed formulas.

**Proof:** By hypothesis \(C \subseteq W\), and \(C\) inductive implies that \( \cap \{I \space \text{/ } I \text{ is inductive}\} \subseteq C\). By 1.1.2, \( \cap \{I \space \text{/ } I \text{ is inductive}\} = W.\) Therefore \(W \subseteq C\), and this implies that \(W = C\). \(_\square\)

**Theorem 1.1.5.** A well-formed formula (wff) either lacks parentheses, or the sequence of its parentheses admits a proper pairing (parentheses matching), which is unique as we saw in Chapter 0.9.

**Proof** \((\)by the induction principle for wffs\()\): We have to prove that the set \(T\) of well-formed formulas that fulfill the thesis is inductive. To do this, we will prove the following steps: that each assertion symbol belongs to \(T\), and that \(T\) is closed for the operations \(E_{\neg}, E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}\).

Step A) Each assertion symbol \(A\) lacks parentheses, so it belongs to \(T\).

Step \(\neg\)) If the wff \(\alpha\) fulfills the thesis, \(\neg \alpha\) also fulfills the thesis, because the sequence of parentheses of \(\neg \alpha\) is the same as that of \(\alpha\).

Step \(\vee\)) If \(\alpha\) and \(\beta\) fulfill the thesis, then there is a proper pairing (parentheses matching) for \((\alpha \vee \beta)\): the one formed by the proper pairing of \(\alpha\), the one of \(\beta\), and the pair of extreme parentheses of \((\alpha \vee \beta)\).

Steps \(\wedge, \Rightarrow, \iff\)) Proved in the same way as the previous step. \(_\square\)

**Corollary 1.1.6.** Let us indicate with \(\text{&}\) any binary connective. Then we have:

\(\boxed{1}\) Given a well-formed formula \(\psi =\neg \alpha\), the proper pairing of \(\psi\) is the same as that of \(\alpha\). And given a well-formed formula \(\psi = (\alpha \text{ & } \beta)\), the proper pairing of \(\psi\) is composed of that of \(\alpha\), that of \(\beta\), and the pair of extreme parentheses of \(\psi\).

\(\boxed{2}\) Let \(\psi = \ldots p_i \ldots p_j \ldots\) be a wff (well-formed formula), and \(p_i p_j\) a couple of parentheses of the proper pairing of \(\psi\). Then the part \(p_i \ldots p_j \) of \(\psi\) is a wff.

**Proof.-** \(\boxed{1}\) was demonstrated in the preceding theorem.

\(\boxed{2}\) Exercise: by induction on wffs.

**Solution.-** Let \(I\) be the set of well-formed formulas that verify the property.

Step \(A\) = assertion symbols) Obvious: an assertion symbol contains no parentheses.

Step \(\neg\)) If the wff \(\alpha\) verifies the property, then \(\neg \alpha\) verifies the property as well, since both have the same proper pairing.

Step \((\alpha \text{ & } \beta)\)) By \(\boxed{1}\), a couple \(p_i p_j\) of the proper pairing of \((\alpha \text{ & } \beta)\) is either the pair of extreme parentheses, in which case \(p_i \ldots p_j\) is the whole wff, or a couple of the proper pairing of \(\alpha\) or of \(\beta\), in which case the induction hypothesis applies. \(\square\)

Due to Definition 1.1.3, a well-formed formula (wff) is an assertion symbol or of one of the following five types: \(\neg \alpha, (\alpha \wedge \beta), (\alpha \vee \beta), (\alpha \Rightarrow \beta), (\alpha \iff \beta)\), with \(\alpha\) and \(\beta\) wffs.

We will call an **initial part** of a row \(s_1 ... s_n\) any row \(s_1 ... s_k\) where \( k < n\).

**Lemma 1.1.7.** A well-formed formula (wff) cannot be an initial part of another well-formed formula (wff).

**Proof.-** By induction (Chapter 0.7). Let \(L\) be the length of \(\psi\) (the number of symbols in \(\psi\)), where \(\psi\) is a well-formed formula. If \(L = 1\), \(\psi = A\) is an assertion symbol, and then \(\psi\) cannot be the initial part of another well-formed formula, because \(A \ldots\) isn't a wff.

Let's suppose that \(L > 1\) and that the statement holds for all well-formed formulas of length less than \(L\).

If \(\psi = \neg \alpha\) were the initial part of another well-formed formula, the latter would also begin with \(\neg\), say \(\psi' = \neg \beta\); then \(\alpha\) would be an initial part of \(\beta\), against the induction hypothesis, since the length of \(\alpha\) is less than \(L\).

If \(\psi = (\alpha \text{ & } \beta)\) were the initial part of another well-formed formula \(\psi' = (\gamma \text{ & } \delta)\), we would have 3 possibilities:

1.- \(\alpha\) is an initial part of \(\gamma\) (against the cited hypothesis).

2.- \(\gamma\) is an initial part of \(\alpha\) (against the cited hypothesis).

3.- \(\alpha = \gamma\). In this case, \(\beta\) is an initial part of \(\delta\) (against the cited hypothesis). \(\square\)

**Theorem 1.1.8.** We know that a well-formed formula (wff) is of one of these six types: an assertion symbol, or \(\neg \alpha, (\alpha \wedge \beta), (\alpha \vee \beta), (\alpha \Rightarrow \beta), (\alpha \iff \beta)\), with \(\alpha\) and \(\beta\) wffs. Then:

\(\boxed{1}\) A well-formed formula \(\psi\) cannot be of two types at a time.

\(\boxed{2}\) If \(\psi = \neg \alpha\), then \(\psi\) determines \(\alpha\). If \(\psi = ( \alpha \text{ & } \beta)\), then \(\psi\) determines the pair \(\langle \alpha, \beta \rangle\).

\(\boxed{3}\) If \(\psi = \ldots (\ldots) \ldots\) is a wff and \((\ldots)\) is a wff, then the two parentheses shown are a couple in the proper pairing of \(\psi\).

**Proof.-** \(\boxed{1}\) \(A \neq \neg \alpha\) and \(A \neq (\alpha \text{ & } \beta)\), obviously. It must also be \(\neg \alpha \neq (\gamma \text{ & } \delta)\), since \(\neg \neq (\). Nor can it be \( (\alpha \vee \beta) = (\gamma \Rightarrow \delta)\): if it were with \(\alpha \neq \gamma\), then due to 1.1.7 \(\alpha\) would be an initial part of \(\gamma\) or vice versa (contradiction), and if it were with \(\alpha = \gamma\), this would imply \(\vee = \Rightarrow\) (false).

\(\boxed{2}\) If \(\neg \alpha = \neg \beta\), it is clear that \( \alpha = \beta\). If \( (\alpha \vee \beta) = (\gamma \vee \delta)\), it can't be \(\alpha \neq \gamma\) because of 1.1.7; therefore \(\alpha = \gamma\), and from there \(\beta = \delta\). The same happens with the other binary connectives.

\(\boxed{3}\) Let \(\psi = \ldots \text{ p } \ldots \text{ p' } \ldots\) and \(\alpha = \text{ p } \ldots \text{ p' }\), with \(\alpha\) and \(\psi\) wffs. If \(\text{ p'' }\) is the right parenthesis corresponding to \(\text{ p }\) in the proper pairing of \(\psi\), then \(\text{ p } \ldots \text{ p'' }\) is a wff \(\beta\) \((\)Corollary 1.1.6, \(\boxed{2})\). Then it has to be \(\text{ p' } = \text{ p'' }\), because if not, \(\alpha\) would be an initial part of \(\beta\), or vice versa, against Lemma 1.1.7. \(\square\)

**Corollary 1.1.9.** The five constructive operations \(E_{\neg}, E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff},\) restricted to the set of well-formed formulas, fulfill:

\(\boxed{1}\) Their images (or ranges) are pairwise disjoint, and disjoint from the set of assertion symbols.

\(\boxed{2}\) They are one-to-one (injective functions, or injective operations).

**Proof.-** \(\boxed{1}\) and \(\boxed{2}\) are equivalent to \(\boxed{1}\) and \(\boxed{2}\) of the previous theorem, expressed in another way. \(\square\)

**Definition 1.1.10.** A **subformula** of a well-formed formula \(s_1 ... s_n\) is a part \(s_h s_{h + 1} ... s_{h + r}\) that is also a well-formed formula.

**Theorem 1.1.11.** Given a connective \(C\) of a well-formed formula \(\psi\), \(C\) appears in one and only one subformula of \(\psi\), of the form \(C \alpha\) or \((\alpha \space C \space \beta)\), according to whether \(C = \neg\) or \(C\) is binary.

**Proof.-** By Definition 1.1.3, \(C\) appears in at least one subformula of the indicated form in the well-formed formula \(\psi\). Let's prove that it appears in one and only one.

Case \(C = \neg\)) If there were two subformulas \(C \alpha\) and \(C \beta\), it would follow that \(\alpha\) is an initial part of \(\beta\), or vice versa, against Lemma 1.1.7.

Case binary \(C\)) Let's suppose that there are two subformulas \((\alpha \space C \space \beta)\) and \((\gamma \space C \space \delta)\). Let's call \(p, p'\) the extreme parentheses of the first subformula and \(q, q'\) those of the second. Because of the repeated Lemma 1.1.7 (both \(\beta\) and \(\delta\) begin right after \(C\)), \(\beta = \delta\) and therefore \(p' = q'\). But then, due to Theorem 1.1.8 \(\boxed{3}\), it has to be \(p = q\). \(\square\)

**Definition 1.1.12.** Let \(C\) be a connective of a well-formed formula \(\psi\). If \(C = \neg\), the wff \(\alpha\) of the subformula \(\neg \alpha\) of \(\psi\) is called **the reach** (or scope) of \(C\) in \(\psi\). If \(C\) is binary, the wffs \(\alpha\) and \(\beta\) of the subformula \((\alpha \space C \space \beta)\) of \(\psi\) are called **the (anterior and posterior) reaches** (or scopes) of \(C\) in \(\psi\).

Let \(\psi = ((A \iff (B \wedge C)) \vee (\neg B \Rightarrow C))\).

Subformulas of \(\psi\) : \( A, B , C, \neg B, (B \wedge C), (\neg B \Rightarrow C), (A \iff (B \wedge C))\) and \(\psi\).

Reach (or scope) of \(\neg\) in \(\psi\): \(B\). \(\space \space \space \) Posterior reach (or scope) of \(\iff\) in \(\psi\): \((B \wedge C)\). \(\space \space \space \) Anterior reach of \(\iff\) in \(\psi\): \(A\).

**Conventions to suppress parentheses in a wff:**

Parentheses form an essential part of our definition of wff; for example, \(\neg (A \vee B) \neq (\neg A \vee B)\). In the desire to simplify the language, one may ask whether the parentheses are superfluous. The answer is that, in most cases, they are not. Nonetheless, the writing of a well-formed formula \(\psi\) can be greatly simplified by establishing certain conventions, set out below:

1) \(\alpha \text{ & } \beta\) represents \((\alpha \text{ & } \beta)\). (\( \alpha, \beta \) are wffs).

2) \(\alpha \wedge \beta \Rightarrow \gamma\) represents \((\alpha \wedge \beta) \Rightarrow \gamma\).

\(\alpha \vee \beta \Rightarrow \gamma\) represents \((\alpha \vee \beta) \Rightarrow \gamma\).

\(\alpha \Rightarrow \beta \wedge \gamma\) represents \(\alpha \Rightarrow (\beta \wedge \gamma)\).

\(\alpha \Rightarrow \beta \vee \gamma\) represents \(\alpha \Rightarrow (\beta \vee \gamma)\).

3) The same as 2), replacing \(\Rightarrow\) by \(\iff\).

4) The expression \(\alpha \Rightarrow \beta \Rightarrow \gamma\) represents \(\alpha \Rightarrow (\beta \Rightarrow \gamma)\). Analogously, replacing \(\Rightarrow\) by \(\wedge, \vee \), or \(\iff\).

\(A \wedge \neg B \Rightarrow \neg A \vee B\) represents \(((A \wedge \neg B) \Rightarrow (\neg A \vee B))\).

\(\neg(A \iff B) \iff (A \wedge \neg B) \vee B\) represents \(\big(\neg(A \iff B) \iff ((A \wedge \neg B) \vee B)\big)\).

## \(\ \ \)2. Induction and Recursion

Let \(E\) be a set with some internal operations and let \(B \subseteq E\). For convenience, we will assume that there are only two operations: a unary one, \(g: E \to E\), and a binary one, \(f : E \times E \to E\).

**Definition 1.2.1.-** It's said that a subset \(S\) of \(E\) is **inductive** if it contains \(B\) and is closed for the operations \(g\) and \(f\). (See example, Chapter 0.6.)

We'll call \(C^{\small 0} = \) intersection of all inductive subsets of \(E\) = the minimum (smallest) inductive subset.

**Definition 1.2.2.** We will call a **construction succession in \(E\)** a finite sequence of elements of \(E\), \(\langle \alpha_1, ... , \alpha_n \rangle\), in which each term \(\alpha_i\) fulfills one of these three conditions:

1) \(\alpha_i \in B;\)

2) \(\alpha_i = g(\alpha_j)\) with \(j < i;\)

3) \(\alpha_i = f(\alpha_j , \alpha_k)\) with \(j < i\) and \(k< i\).

We will call \(C_{\small 0} = \{a \text{ / } a \text{ is the last term of a construction sequence in } E\} \).

Theorem 1.2.3.- \(\space \space \space C^{\small 0} = C_{\small 0} \)

Proof.- First, let's see \(C^{\small 0} \subseteq C_{\small 0} \). It is enough to see that \(C_{\small 0} \) is inductive. Obviously, \(B \subseteq C_{\small 0} \). Furthermore, \(C_{\small 0} \) is closed under \(g\): if \(a \in C_{\small 0} \), there is a construction sequence in \(E\) of the form \(\langle \ldots, a \rangle\), so \(\langle \ldots, a, g(a) \rangle\) is also a construction sequence in \(E\), and hence \(g(a) \in C_{\small 0} \). \(C_{\small 0} \) is also closed under \(f\): if \(a \in C_{\small 0} \) and \(b \in C_{\small 0} \), there are construction sequences in \(E\) of the form \(\langle \ldots, a \rangle\) and \(\langle \ldots, b \rangle\), so \(\langle \ldots, a, \ldots, b, f(a,b) \rangle\) is a construction sequence in \(E\), and hence \(f(a,b) \in C_{\small 0} \).

Now let's see \(C^{\small 0} \supseteq C_{\small 0} \). Let \(a \in C_{\small 0}\); then there exists a construction sequence in \(E\), \(\langle a_1, \ldots , a_n = a \rangle\). By standard induction on \(n \ge 1\), we are going to prove that \(a_n \in C^{\small 0}\). Indeed, \(a_1 \in C^{\small 0}\) since \(a_1 \in B \). Let's suppose that \(a_1, \ldots , a_{n - 1} \in C^{\small 0}\); then \(a_n \in C^{\small 0}\), because either \(a_n \in B\), or \(a_n = g(a_i)\) with \(i < n\), or \(a_n = f(a_i, a_j)\) with \(i < n, \space j < n.\) Therefore, \[\ C^{\small 0} = C_{\small 0} \] \(\square\)

Definition 1.2.4.- \(C = C^{\small 0} = C_{\small 0} \) is said to be **generated** by \(B\) by the operations \(g\) and \(f\).

1.2.5. Induction principle for \(C\): Let \(S\) be a subset of \(C\) which contains \(B\) and is closed under the operations \(f\) and \(g\). Then \(S = C\).

Proof: By hypothesis \(S \subseteq C\), and \(S\) is inductive by definition, so \(C = C^{\small 0} \subseteq S\). Hence, \(S = C\). \(_\square\)

\(\boxed{\text{Induction principle for } \mathbb{N}}\): Let \(S\) be a subset of \(\mathbb{N}\) that contains \(0\) and such that if \(n \in S\), then \(n + 1 \in S\). Then \(S = \mathbb{N}\).

**Example.-** \(E = \mathbb{R}\) (Chapter 0.1), \(B = \{0\}\), and the only operation is \(g\), given by \(g(x) = x + 1\). Then \(C = \{0, 1, 2, \ldots\} = \mathbb{N}\).

**Example.-** \(E = G\), a group, \(B = \{a, b\}\), and the operations are \(g\), given by \(g(x) = x^{- 1}\), and \(f\), given by \(f(x, y) = x \cdot y\). Hence, \(C\) = the subgroup of \(G\) generated by \(a, b\) (by the operations \(g\) and \(f\)) = the minimum subgroup of \(G\) containing \(a, b\).
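The generated-subgroup example can be made concrete with a minimal sketch (mine, not from the book): in \(\mathbb{Z}/12\mathbb{Z}\) under addition, take \(B = \{4, 6\}\), \(g\) = additive inverse, \(f\) = sum, and compute the closure by iterating until nothing new appears.

```python
# Closure computation: the smallest subset of E containing B
# and closed under a unary g and a binary f (Definition 1.2.1).
def generated(B, g, f):
    C = set(B)
    while True:
        new = {g(x) for x in C} | {f(x, y) for x in C for y in C}
        if new <= C:          # nothing new: C is inductive, hence C = C0
            return C
        C |= new

n = 12
# Subgroup of Z/12 generated by {4, 6}: g(x) = -x mod 12, f(x, y) = x + y mod 12.
subgroup = generated({4, 6}, lambda x: (-x) % n, lambda x, y: (x + y) % n)
print(sorted(subgroup))
```

Since \(\gcd(4, 6) = 2\), the loop stabilizes at the multiples of \(2\) modulo \(12\), i.e. the minimum subgroup containing \(4\) and \(6\).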

**Example.-** \(E\) = the set of expressions of the language of the propositional calculus, \(B\) = the set of assertion symbols, and the operations are \(E_{\neg}, E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}\). Hence, \(C = W\) = the set of wffs.

**Notation.-** \(C_n = \{ a \in C \text{ / } a \text{ is the last term of a construction sequence in } E \text{ of } n \text{ elements}\}\). Then \(B = C_1 \subseteq C_2 \subseteq C_3 \subseteq \ldots\) and \[\displaystyle C = \bigcup_{i \ge 1} C_i\]
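The levels \(C_1 \subseteq C_2 \subseteq \ldots\) can be computed step by step. A minimal sketch (mine), for the example \(E = \mathbb{R}\), \(B = \{0\}\), \(g(x) = x + 1\) (the binary operation \(f\) is omitted here for brevity):

```python
# C_1 = B; each later level adds the results of one more construction step.
# For B = {0} and g(x) = x + 1, level C_n is {0, 1, ..., n-1}, and the
# union of all levels is C = N.
def levels(B, g, n_max):
    Cs = [set(B)]                                # C_1 = B
    for _ in range(n_max - 1):
        prev = Cs[-1]
        Cs.append(prev | {g(x) for x in prev})   # C_{n+1} = C_n ∪ g(C_n)
    return Cs

Cs = levels({0}, lambda x: x + 1, 5)
print(Cs)
```

Running it shows the chain \(\{0\} \subseteq \{0,1\} \subseteq \ldots \subseteq \{0,1,2,3,4\}\), the first five levels of \(\mathbb{N}\).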

\(\underline{\text{Recursion:}}\)

So far, we have \(E, B , f, g, C\). Let \(R\) be a set, called **set of values**.

We will try to define "recursively" an application (function) \(\overline{v}: C \to R\) when the following are known:

1) \(\overline{v} (b) = v(b)\), for \(b \in B\), where \(v : B \to R\) is a known function.

2) The value \(\overline{v} (g(x))\) in terms of \(\overline{v} (x)\).

3) The value \(\overline{v} (f(x, y))\) in terms of \(\overline{v} (x)\) and \(\overline{v} (y)\).

Definition 1.2.6.- \(C = C^{\small 0} = C_{\small 0} \) (Definition 1.2.4) is said to be **freely generated** by \(B\) by the operations \(g\) and \(f\) if the restrictions of \(g\) and \(f\) to \(C\) satisfy:

\(\boxed{1}\) Their images (or ranges) are disjoint from each other, and also from the set \(B\).

\(\boxed{2}\) They are one-to-one (injective functions).

\(C\) is then a \((1, 2)\)-**free algebra** generated by \(B\) (in algebraic language).

The set of well-formed formulas is freely generated by the assertion symbols by the connective operations.

Recursion theorem 1.2.7.- Let \(C \subseteq E\) be freely generated by \(B \subseteq E\) by the operations \(g\) and \(f\). Let \(R\) be a set of values. Given the functions \(v : B \to R\), \(G : R \to R\), \(F: R \times R \to R\), there exists one and only one application \(\overline{v}: C \to R\) such that:

a) \(\forall x \in B, \space \overline{v}(x) = v(x)\).

b) \(\forall x \in C, \space \overline{v}(g(x)) = G(\overline{v}(x))\).

c) \(\forall (x,y) \in C^2, \space \overline{v}(f(x, y)) = F(\overline{v}(x), \overline{v}(y))\).

Proof.- \(\boxed{\text{Existence}}\) Let \(D_1 = C_1 = B\) and \(D_n = C_n - C_{n - 1}\) for \(n \ge 2\); that is, \(D_n\) is the set of elements \(z \in C\) that are the last term of a construction sequence of \(n\) terms, but not of \(n - 1\) terms (\(n\) = minimum number of terms of a construction sequence of \(z\)). Given \(z \in C\), there is only one \(n\) such that \(z \in D_n\); furthermore, \(z\) is in one and only one of the sets \(B \), \(\text{ Im} (g) \) or \(\text{ Im} (f) \) (since \(C\) is a \((1, 2)\)-free algebra, i.e., \(C\) is freely generated).

Let's define \(\overline{v}: C \to R\) by defining \(\overline{v}(z)\) by induction on \(n\), where \(z \in D_n\).

If \(n = 1\), then \(\overline{v} (z) = v(z)\) \(\space (z \in B)\).

If \(n > 1\) (so \(z \notin B\)), then \( z \in \text{ Im} (g) \) or \( z \in \text{ Im} (f) \). In the first case, \(z = g(x)\), where \(x \in C\), \(x \in D_m\) with \( m < n\), and \(x\) is determined by \(z\) since \(g\) is an injective function. \(\overline{v}(x)\) is known by induction hypothesis, and we define \(\overline{v}(z) = \overline{v}(g(x)) = G(\overline{v}(x))\).

In the second case, \(z = f(x,y)\), where \((x, y) \in C^2\) is uniquely determined by \(z\) since \(f\) is an injective function. We define \(\overline{v}(z) = \overline{v}(f(x, y)) = F(\overline{v}(x), \overline{v}(y))\), where \(\overline{v}(x)\) and \(\overline{v}(y)\) are known by induction, since \(x \in D_i\) and \(y \in D_j\) with \(i < n \) and \(j < n\).

\(\boxed{\text{Uniqueness}}\) Let \(\overline{v}\), \(\overline{v_1}\) be two functions that satisfy a), b) and c), and let \(S = \{z \in C \text{ / } \overline{v} (z) = \overline{v_1} (z)\}\). Then \(B \subseteq S\) and \(S\) is closed under \(g\) and \(f\) (that is, \(S\) is inductive), so \(C \subseteq S\), and hence \(C = S\). \(\square\)
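Theorem 1.2.7 is exactly the principle of definition by structural recursion. A minimal sketch (the tuple representation is mine, not the book's): take \(C\) freely generated from \(B = \{A, B\}\) by \(g(x) = \langle \neg, x \rangle\) and \(f(x, y) = \langle \wedge, x, y \rangle\); with \(v(b) = 1\), \(G(r) = r + 1\) and \(F(r, s) = r + s + 1\), the unique extension \(\overline{v}\) counts the symbols of a term.

```python
# v_bar defined by the three clauses of Theorem 1.2.7:
# a) on B, b) through G on images of g, c) through F on images of f.
def v_bar(z):
    if isinstance(z, str):        # z in B: clause a), v(z) = 1
        return 1
    if z[0] == 'neg':             # z = g(x): clause b), G(r) = r + 1
        return 1 + v_bar(z[1])
    _, x, y = z                   # z = f(x, y): clause c), F(r, s) = r + s + 1
    return 1 + v_bar(x) + v_bar(y)

term = ('and', ('neg', 'A'), 'B')   # f(g(A), B), i.e. (¬A ∧ B)
print(v_bar(term))                  # 4 symbols: ∧, ¬, A, B
```

Free generation is what makes the recursion well defined: each term matches exactly one clause, in exactly one way.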

\(\underline{\textrm{Universal property of free algebras:}}\)

Let \(C\) be a free algebra generated by \(B\) with operations \(g\) and \(f\), and \(R\) a (1,2)-algebra with operations \(G\) and \(F\). Given an application \(v: B \to R\), there exists one and only one application \(\overline{v}: C \to R\) which extends \(v\) and is a homomorphism of (1,2)-algebras.

Recursion theorem 1.2.8 (bis).- Theorem 1.2.7 retains its validity if the function \(G\) is replaced by \(G_1\) and \(F\) by \(F_1\), where \(G_1: R \times C \to R\) and \(F_1 : R \times R \times C \times C \to R\), and the conditions b), c) are replaced by:

b) \(\forall x \in C, \space \overline{v}(g(x)) = G_1 (\overline{v}(x), x)\).

c) \(\forall (x,y) \in C^2, \space \overline{v}(f(x, y)) = F_1 (\overline{v}(x), \overline{v}(y), x, y)\).

Proof.- A repetition of the proof of the previous theorem. \(\square\)

## \(\ \ \)3. True Ratings

In the study of ordinary language, semantics is concerned with the meaning of words, their variations, and problems related to their meaning.

In this section, we will try to give a "value" to each well-formed formula (wff). The definition and study of these "assessments" (which may come from "interpretations" of language) constitutes what is called the **semantics** of propositional language.

Let's consider such a language, and a set \(\{T, F\}\) of 2 elements that we'll call **truth values** (\(T\) = true and \(F\) = false).

Definition 1.3.1.- A **truth (or semantic) assessment** of the language is an application \(v : S \to \{T, F\}\), where \(S \) = the set of assertion symbols.

We want to extend \(v : S \to \{T, F\}\) to an application \(\overline{v} : W \to \{T, F\}\) that verifies: (\(W\) = set of wffs):

1) \(\space \space \overline{v} (\neg \alpha) = \text{T} \space \iff \space \overline{v} (\alpha) = \text{F}\) (here and below, \(\alpha, \beta\) denote well-formed formulas (wffs)).

2) \(\space \space \overline{v} (( \alpha \wedge \beta)) = \text{T} \space \iff \space ( \overline{v} (\alpha) = \text{T} \text{ and } \space \overline{v} (\beta) = \text{T}) \).

3) \(\space \space \overline{v} (( \alpha \vee \beta)) = \text{F} \space \iff \space ( \overline{v} (\alpha) = \text{F} \text{ and } \space \overline{v} (\beta) = \text{F}) \).

4) \(\space \space \overline{v} ( (\alpha \Rightarrow \beta)) = \text{F} \space \iff \space ( \overline{v} (\alpha) = \text{T} \text{ and } \space \overline{v} (\beta) = \text{F}) \).

5) \(\space \space \overline{v} (( \alpha \iff \beta)) = \text{T} \space \iff \space ( \overline{v} (\alpha) = \overline{v} (\beta) ) \).

Theorem 1.3.2.- Given a truth assessment \(v : S \to \{T, F\}\), there is one and only one application \(\overline{v} : W \to \{T, F\}\), extension of \(v\), satisfying the previous conditions 1) - 5).

Proof.-In the set \(\{T, F\}\) we define a unary operation \(G_{\neg}\) and four binary operations \(G_{\wedge}, G_{\vee}, G_{\Rightarrow}, G_{\iff}\), in this way:

\(G_{\neg} (X) = T \iff \text{X = F}\).

\(G_{\wedge} (X, Y) = T \iff (\text{X = T} \text{ and } \text{Y = T})\).

\(G_{\vee} (X, Y) = F \iff (\text{X = F} \text{ and } \text{Y = F})\).

\(G_{\Rightarrow} (X, Y) = F \iff (\text{X = T} \text{ and } \text{Y = F})\).

\(G_{\iff} (X, Y) = T \iff (\text{X = Y})\).

Due to the Recursion Theorem (1.2.7), because \(W\) is freely generated by \(S\) by the unary operation \(E_{\neg}\) and the four binary operations \(E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}\) (Corollary 1.1.9), and the set of values \(\{T, F\}\) is equipped with the unary operation \(G_{\neg}\) and the four binary operations \(G_{\wedge}, G_{\vee}, G_{\Rightarrow}, G_{\iff}\), there exists one and only one application (function) \(\overline{v}: W \to \{T, F\}\), extension of \(v\), satisfying:

1) \(\space \space \forall \alpha \in W\), \(\overline{v} ( E_{\neg} (\alpha)) = \) \(\overline{v} ( \neg \alpha) = \) \(G_{\neg} (\overline{v} ( \alpha))\).

2) \(\space \space \forall (\alpha, \beta) \in W^{2}\), \(\overline{v} ( E_{\wedge} (\alpha, \beta)) = \) \(\overline{v} (( \alpha \wedge \beta)) = \) \(G_{\wedge} (\overline{v} ( \alpha), \overline{v} ( \beta))\).

3) \(\space \space \forall (\alpha, \beta) \in W^{2}\), \(\overline{v} ( E_{\vee} (\alpha, \beta)) = \) \(\overline{v} (( \alpha \vee \beta)) = \) \(G_{\vee} (\overline{v} ( \alpha), \overline{v} ( \beta))\).

4) \(\space \space \forall (\alpha, \beta) \in W^{2}\), \(\overline{v} ( E_{\Rightarrow} (\alpha, \beta)) = \) \(\overline{v} (( \alpha \Rightarrow \beta)) = \) \(G_{\Rightarrow} (\overline{v} ( \alpha), \overline{v} ( \beta))\).

5) \(\space \space \forall (\alpha, \beta) \in W^{2}\), \(\overline{v} ( E_{\iff} (\alpha, \beta)) = \) \(\overline{v} (( \alpha \iff \beta)) = \) \(G_{\iff} (\overline{v} ( \alpha), \overline{v} ( \beta))\). \(\square\)

**Example.-** Consider the construction sequence \(\langle A, B, \neg A , (\neg A \Rightarrow B), ((\neg A \Rightarrow B) \vee B) \rangle\) for the well-formed formula (wff) \(\alpha = ((\neg A \Rightarrow B) \vee B)\). Let's suppose \(v(A) = F\) and \(v(B) = F\); then \(\overline{v} (\neg A) = T\) and \(\overline{v}((\neg A \Rightarrow B)) = F\), so \[\overline{v} (\alpha) = F \]
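The worked example can be checked with a minimal sketch of \(\overline{v}\) (the tuple representation of wffs is an assumption of mine, not the book's notation):

```python
# v_bar implements conditions 1)-5) of Theorem 1.3.2 by structural recursion,
# with True standing for T and False for F.
def v_bar(wff, v):
    if isinstance(wff, str):                  # assertion symbol: v_bar = v
        return v[wff]
    op = wff[0]
    if op == 'not':                           # condition 1)
        return not v_bar(wff[1], v)
    a, b = v_bar(wff[1], v), v_bar(wff[2], v)
    return {'and': a and b,                   # condition 2)
            'or': a or b,                     # condition 3)
            'implies': (not a) or b,          # condition 4)
            'iff': a == b}[op]                # condition 5)

alpha = ('or', ('implies', ('not', 'A'), 'B'), 'B')   # ((¬A ⇒ B) ∨ B)
print(v_bar(alpha, {'A': False, 'B': False}))          # False, as computed above
```

The recursion mirrors the construction sequence: the values of \(\neg A\) and \((\neg A \Rightarrow B)\) are computed before the value of \(\alpha\).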

**Problem.-** You find yourself on the island of knights and knaves, where every inhabitant is of one of two types: a knight, who always tells the truth, or a knave, who always lies.

You know that Artemis is a knight.

Which of the island's inhabitants can say "If I am a knave, then Artemis is a knight"?

Note: You are not an inhabitant of the island.

\(\underline{\text{Truth tables}}\)

Given a well-formed formula (wff) \(\alpha\), each truth assessment \(v\) assigns \(\alpha\) a value \(\overline{v}(\alpha)\). It is clear that if the set of assertion symbols \(S\) is infinite, the list of all truth assessments would be infinite. But since the value \(\overline{v} (\alpha)\) depends only on the values \(v(A_i)\) of the assertion symbols \(A_i\) contained in \(\alpha\), it suffices to know these values for each \(v\), which reduces the list of assessments to the list of possible n-tuples \(\langle v(A_1), \ldots, v(A_n) \rangle\), whose total number is \(2^{n}\).

Due to the conventions to suppress parentheses in a wff (at the end of Section 1 of this chapter), let's consider \(\alpha = A \vee B \Rightarrow C\); then we get:

| \(A\) | \(B\) | \(C\) | \(A \vee B\) | \(\alpha\) |
|---|---|---|---|---|
| T | T | T | T | T |
| T | T | F | T | F |
| T | F | T | T | T |
| T | F | F | T | F |
| F | T | T | T | T |
| F | T | F | T | F |
| F | F | T | F | T |
| F | F | F | F | T |

This is a **truth table** for \(\alpha\). That is, with only 3 assertion symbols we can create \(2^3 = 8\) possible truth assessments.
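A truth table can be generated mechanically by enumerating the \(2^n\) assessments; a minimal sketch (mine) for \(\alpha = A \vee B \Rightarrow C\):

```python
# Enumerate the 2^3 = 8 truth assessments of {A, B, C} and print a row
# A B C alpha for each one, with True rendered as T and False as F.
from itertools import product

def evaluate(A, B, C):
    return (not (A or B)) or C        # (A ∨ B) ⇒ C, i.e. ¬(A ∨ B) ∨ C

for A, B, C in product([True, False], repeat=3):
    row = ['T' if x else 'F' for x in (A, B, C, evaluate(A, B, C))]
    print(' '.join(row))
```

The rows reproduce the table above; note that \(\alpha\) is false exactly on the assessments with \(v(A \vee B) = T\) and \(v(C) = F\).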

Definition 1.3.3.- A truth assessment \(v\) is said **to satisfy** a wff \(\alpha\) (\(v\) **satisfies** \(\alpha\)) when \(\overline{v} (\alpha) = T\).

Let now \(\Sigma\) be a set of wffs and \(\alpha\) a wff.

Definition 1.3.4.- We'll say that \(\Sigma\) **tautologically implies** \(\alpha\) if every truth assessment that satisfies all wffs of \(\Sigma\) also satisfies \(\alpha\). It's written \(\Sigma \vDash \alpha\); otherwise, \(\Sigma \nvDash \alpha\).

\(\{\alpha, \alpha \Rightarrow \beta\} \vDash \beta\).

\(\{\alpha \vee \beta, \neg \beta\} \vDash \alpha\).

\(\{ A \vee B\} \nvDash B\).
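Since only finitely many assertion symbols occur in \(\Sigma \cup \{\alpha\}\) (when \(\Sigma\) is finite), Definition 1.3.4 can be checked by brute force. A sketch (mine; tuple representation of wffs assumed) that verifies the first and third examples above:

```python
# Sigma tautologically implies alpha iff no truth assessment satisfies
# every wff of Sigma while falsifying alpha.
from itertools import product

def v_bar(wff, v):
    if isinstance(wff, str):
        return v[wff]
    op, *args = wff
    vals = [v_bar(a, v) for a in args]
    if op == 'not':     return not vals[0]
    if op == 'and':     return vals[0] and vals[1]
    if op == 'or':      return vals[0] or vals[1]
    if op == 'implies': return (not vals[0]) or vals[1]
    return vals[0] == vals[1]                      # 'iff'

def symbols(wff):
    return {wff} if isinstance(wff, str) else set().union(*map(symbols, wff[1:]))

def implies_tautologically(sigma, alpha):
    syms = sorted(set().union(symbols(alpha), *map(symbols, sigma)))
    for vals in product([True, False], repeat=len(syms)):
        v = dict(zip(syms, vals))
        if all(v_bar(s, v) for s in sigma) and not v_bar(alpha, v):
            return False                           # counterexample assessment
    return True

# {A, A ⇒ B} ⊨ B (modus ponens), but {A ∨ B} ⊭ B:
print(implies_tautologically(['A', ('implies', 'A', 'B')], 'B'))   # True
print(implies_tautologically([('or', 'A', 'B')], 'B'))             # False
```

For \(\{A \vee B\} \nvDash B\), the counterexample found is the assessment \(v(A) = T\), \(v(B) = F\).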

**Particular cases:**

1) \(\Sigma = \emptyset\). In this case, \(\emptyset \vDash \alpha\) means that \(\alpha\) is true for every truth assessment. \(\alpha\) is then a **tautological** well-formed formula, or a **tautology**. It's also written \(\vDash \alpha\).

2) There is no truth assessment that satisfies all well-formed formulas of \(\Sigma\). In this case, \(\Sigma \vDash \psi\) for every wff \(\psi\) (trivially).

Examples:

1) \(\vDash \alpha \Rightarrow \alpha\) and \(\vDash \alpha \Rightarrow (\beta \Rightarrow \alpha)\).

2) \(\{A, \neg A\} \vDash \psi\), for every wff \(\psi\).

Definition 1.3.5.- Two well-formed formulas \(\alpha\), \(\beta\) are said to be **tautologically equivalent** when \(\alpha \vDash \beta\) and \(\beta \vDash \alpha\). In this case, it's written \(\alpha \iff \beta\).

Corollary 1.3.6.- \(\alpha \vDash \beta\) if and only if \(\vDash \alpha \Rightarrow \beta\). The well-formed formulas (wffs) \(\alpha\), \(\beta\) are tautologically equivalent if and only if \(\vDash \alpha \iff \beta\).

Two tautologically equivalent well-formed formulas express the same fact, since one is true if and only if the other is true. In ordinary logic, they are usually said to be equivalent, or one is said to state the same as the other, although it would be better to say that they are semantically equivalent.

They are equivalent:

1) If Kristina cries, I get sad. \(\space A \Rightarrow B\)

2) It's impossible that Kristina cries and I don't get sad. \(\space \neg (A \wedge \neg B)\).

\(\underline{\text{Complete Systems of connectives}}\)

The set \(W\) of well-formed formulas (wffs) is generated by \(S\) (the set of assertion symbols) by the connective operations. Let's now consider the set \(W'\) of the wffs generated by \(S\) by two of the above operations, for example, \(E_{\neg}, E_{\Rightarrow}\). It is clear that \(W' \subseteq W\). It's said that the system \(\{\neg, \Rightarrow \}\) is a **complete system of connectives** if for each wff \(\alpha\) of \(W\) there exists a wff \(\beta\) of \(W'\) which is tautologically equivalent to \(\alpha\).

**Exercise 1.3.1.** \(\{\neg, \Rightarrow \}\), \(\{\neg, \wedge \}\), \(\{\neg, \vee \}\) are complete systems of connectives.
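For the \(\{\neg, \Rightarrow \}\) case of Exercise 1.3.1, the key step is rewriting the remaining connectives; a sketch (the particular rewritings below are mine, one standard choice among several) that checks them against all truth assessments:

```python
# Truth-table check of rewritings into {¬, ⇒}:
#   α ∧ β  ≡  ¬(α ⇒ ¬β),   α ∨ β  ≡  ¬α ⇒ β,   α ⟺ β  ≡  ¬((α ⇒ β) ⇒ ¬(β ⇒ α))
from itertools import product

imp = lambda a, b: (not a) or b    # truth function of ⇒

for a, b in product([True, False], repeat=2):
    assert (a and b) == (not imp(a, not b))
    assert (a or b) == imp(not a, b)
    assert (a == b) == (not imp(imp(a, b), not imp(b, a)))
print("equivalences hold for every truth assessment")
```

Applying these rewritings recursively to any wff yields a tautologically equivalent wff of \(W'\), which is the substance of the exercise.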

**Exercise 1.3.2.** \(\{\wedge, \Rightarrow \}\) is not a complete system of connectives. (Hint: There doesn't exist a wff with only these connectives and tautologically equivalent to \(\neg (A \wedge B)\) ).

\(\underline{\text{Methods of verification of tautologies:}}\)

Given a well-formed formula \(\alpha\), it may be interesting to know whether it is a tautology or not. Let's indicate two ways of doing this:

**Long method:** Build a truth table for \(\alpha\).

**Short method:** Suppose the wff \(\alpha\) is false and conclude a contradiction or not.

For instance, the wff \(A \vee \neg B \Rightarrow (B \Rightarrow A)\) is a tautology. By the short method: if it were false, then \(\overline{v}(A \vee \neg B) = T\) and \(\overline{v}(B \Rightarrow A) = F\); the latter forces \(\overline{v}(B) = T\) and \(\overline{v}(A) = F\), but then \(\overline{v}(A \vee \neg B) = F\), a contradiction. On the other hand, the wff \(A \vee B \Rightarrow C\) of the truth table above is not a tautology, since it is false whenever \(v(A) = T\) and \(v(C) = F\).

**Problem.-** Is the following mathematical statement a tautology?

\[ (A \iff B)\iff \neg (A \Rightarrow B)\Rightarrow \neg(B\Rightarrow A) \]

Two possible ways to decide are: 1. build a truth table; 2. suppose the statement is false and conclude a contradiction or not.
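After attempting the problem by hand, the long method can be mechanized; a brute-force sketch (mine) over the \(2^2\) truth assessments of the wff above:

```python
# Check whether the wff above holds under every truth assessment of {A, B}.
from itertools import product

imp = lambda a, b: (not a) or b    # truth function of ⇒

def formula(A, B):
    # (A ⟺ B) ⟺ (¬(A ⇒ B) ⇒ ¬(B ⇒ A)), parenthesized per the conventions
    # of Section 1 (⟺ binding loosest).
    return (A == B) == imp(not imp(A, B), not imp(B, A))

is_tautology = all(formula(A, B) for A, B in product([True, False], repeat=2))
print(is_tautology)
```

Running it settles the question; the reader is encouraged to confirm the verdict with the short method as well.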

## 4. Compactness

This section is of interest not only for the study of mathematical logic, but also for ordinary logic. Before starting, it's interesting to see Bertrand Russell's paradox.

**Russell's paradox.-**

Let \(A = \{X \text { / } X \notin X\}\), then \(A \in A \iff A \notin A\).

## References

\(\small \textrm{J. Sancho San Román: Lógica Matemática y Computabilidad.}\)

**Cite as:** Mathematical Logic and Computability. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/mathematical-logic-and-computability/