Our aim is to bring logic back to philosophy in a broad sense, including the philosophy of language and the philosophy of mathematics. We intend to identify and analyse a selection of central issues that the development of logic and the diverse logical calculi pose for philosophy. We connect them with general debates in philosophy, especially in semantics, metaphysics, epistemology and the theory of rationality. These philosophical debates run through the different branches of the philosophy of science, the philosophy of mathematics included. Philosophy is, for the most part, a second-order activity that takes up the conceptual, theoretical and practical issues raised by human activities, scientific ones included. These issues usually concern the analysis of the concepts that sciences and rational activities put to work, the identification of the hypotheses and principles on which they rest, and the epistemology surrounding debates about concepts and basic truths. We are committed to re-opening the paths between philosophy and 20th-century logic. Logical systems and formal results must be studied in the context of the scientific, philosophical and social problems they were proposed to resolve, and they must be evaluated there by their success in affording a deeper understanding of human practices.

The general objective is pursued by focusing on three specific debates, whose analysis constitutes the three specific objectives:

This issue directly affects the three main themes at the basis of the core discussions in the philosophy of logic: (i) what a logical constant is, (ii) what logical form is, and (iii) how to define extra-systemic validity. Formalised valid arguments, i.e. arguments represented in logically correct artificial languages, display the grounds on which their validity rests by pinpointing the logical constants present in them. The distribution of the logical constants in an argument gives its logical form and, according to the received view in the philosophy of logic, it is to their logical form that valid arguments owe their validity. This is the connection between the three themes. Under this objective, we analyse the reasons that logicians of different orientations (formalist, semanticist, pragmatist) have offered in favour of their theories of logicity, their definitions of logical constants, their understandings of logical form and their analyses of validity. The aim is to bring to the surface the different assumptions about the nature of logic and to determine whether the discrepancies can be explained as alternative answers to a single problem, or whether they involve homophonous debates that conceal distinct approaches and different conceptions of the relationship between logic and inference.
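By way of illustration, a standard textbook example (not one drawn from the project itself) shows how logical form displays the grounds of validity:

\[
  \forall x\,\bigl(H(x) \rightarrow M(x)\bigr),\; H(s) \;\vdash\; M(s)
\]

Reading $H$ as "is human", $M$ as "is mortal" and $s$ as "Socrates", the argument "All humans are mortal; Socrates is human; therefore Socrates is mortal" is valid solely in virtue of the distribution of the constants $\forall$ and $\rightarrow$: any argument sharing this logical form is valid, whatever $H$, $M$ and $s$ are taken to denote.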

We revisit some “limitation theorems”, i.e. Gödel’s incompleteness theorems, the negative solution to the Entscheidungsproblem given by Turing and Church, the Löwenheim-Skolem theorem (including the Skolem paradox) and the independence of the Continuum Hypothesis from ZF, but we analyse them in the light of more recent results in the philosophy of the second half of the past century, specifically those results in pragmatics showing that propositions, i.e. Fregean judgeable contents, cannot be captured by purely syntactic means. We will draw on relevance theory (Sperber and Wilson) and truth-conditional pragmatics (Carston, Recanati) and pursue the hypothesis that the bearers of truth and of logical properties are essentially richer than what can be represented in formal languages. This is an outstanding result of the philosophy of language that should have a deep impact on the way we look at the formal results at the core of contemporary logic.
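For reference, two of these limitation results can be stated schematically; the formulations below are the standard ones and are included only to fix ideas, not as the project's own renderings:

\begin{itemize}
  \item[(G1)] If $T$ is a consistent, recursively axiomatizable theory extending a weak arithmetic such as Robinson's $Q$, then there is a sentence $G_T$ such that $T \nvdash G_T$ and $T \nvdash \neg G_T$.
  \item[(LS)] Every first-order theory in a countable language that has an infinite model has a countable model; applied to ZF(C), this yields the Skolem paradox: a consistent set theory that proves the existence of uncountable sets nonetheless has a countable model.
\end{itemize}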

Should we call algorithms “intelligent agents”? Do algorithms deal with information in the sense that interests logicians, i.e. as a bearer of logical properties? The emergence of new models for simulating intelligent human behaviour, Deep Learning and Machine Learning, forces us to rethink the role of logic in the foundations and development of contemporary AI. But it also opens new possibilities, by forcing us to study from a formal point of view the way in which these kinds of algorithms interact with their users, transferring our biases and naive knowledge to the algorithms themselves. Sexism, racism, xenophobia and simple common mistakes reappear all too often in the responses of these new agents (ChatGPT, for example). What does Logic have to say about this? Can we offer answers that help us to avoid such behaviours? Can we describe with our tools the channels through which these kinds of agents feed their knowledge bases with our experience? Is the interaction with these kinds of entities a new objective for formal Logic in the 21st century?