T-norms driven loss functions for machine learning

Injecting prior knowledge into the learning process of a neural architecture is one of the main challenges currently faced by the artificial intelligence community, which also motivated the emergence of neural-symbolic models. One of the main advantages of these approaches is their capacity to learn competitive solutions with a significant reduction of the amount of supervised data. In this regard, a commonly adopted solution consists of representing the prior knowledge via first-order logic formulas, then relaxing the formulas into a set of differentiable constraints by using a t-norm fuzzy logic. This paper shows that this relaxation, together with the choice of the penalty terms enforcing the constraint satisfaction, can be unambiguously determined by the selection of a t-norm generator, providing numerical simplification properties and a tighter integration between the logic knowledge and the learning objective. When restricted to supervised learning, the presented theoretical framework provides a direct derivation of the popular cross-entropy loss, which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. However, the proposed learning formulation extends the advantages of the cross-entropy loss to the general knowledge that can be represented by neural-symbolic methods. In addition, the presented methodology allows the development of novel classes of loss functions, which are shown in the experimental results to lead to faster convergence rates than the approaches previously proposed in the literature.


Introduction
Deep Neural Networks [1] have been a breakthrough for several classification problems involving sequential or high-dimensional data. However, deep neural architectures strongly rely on a large amount of labeled data to develop powerful feature representations. Unfortunately, it is difficult and labor-intensive to annotate such large collections of data. In this regard, prior knowledge expressed by First-Order Logic (FOL) rules represents a natural solution to make learning efficient when the training data is scarce and some domain expert knowledge is available. The integration of logic inference with learning could also overcome another limitation of deep architectures, namely that they mainly act as black-boxes from a human perspective, making their usage difficult in safety-critical applications, like in health or car industry applications [2]. For these reasons, Neural-Symbolic (NeSy) approaches [3,4] integrating logic and learning have become one of the fundamental research lines for the machine learning and artificial intelligence communities. One of the most common approaches to exploit logic knowledge to train a deep neural learner relies on mapping the FOL knowledge into differentiable constraints using t-norms. Then, the constraints can be enforced using gradient-based optimization techniques, as done in [5,6]. Most work in this area approached the problem of translating logic rules into a differentiable form by defining a collection of heuristics that often lack semantic consistency and have no clear motivation from a theoretical point of view. For instance, there is no agreement on the relation between the selected t-norm and the aggregation function corresponding to the logic quantifiers, nor even on the chosen loss to enforce the constraints.
This paper first traces back the properties of t-norm fuzzy logic operators to the selection of a generator function. Then, we show that the loss function of a learning problem accounting for both supervised data and logic constraints can also be determined by the single choice of the t-norm generator. The generator determines the fuzzy relaxation of connectives and quantifiers occurring in the logic rules. As a result, a simplified and semantically consistent optimization problem can be formulated. In this framework, the classical fitting of supervised training data can be enforced by atomic logic constraints. Since the careful choice of loss functions has been crucial to the success of deep learning, this paper also investigates the relation between supervised training losses and generator choices. As a special case, we get a novel justification for the popular cross-entropy loss [7], which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures.

Contributions. This paper introduces a theoretical framework centered around the notion of t-norm generator, unifying the choice of the logic semantics and of the loss function in neural-symbolic learners. In particular, we extend the preliminary formalization sketched in [8] and provide a more comprehensive experimental validation. This unification results in a simplified learning objective that is shown to be numerically more stable, while retaining the flexibility to customize the learning process for the considered applications.
The paper is organized as follows: Section 2 presents some prior work on the integration of learning and logic inference, Section 3 presents the basic concepts about t-norms, generators and aggregator functions, and Section 4 introduces a general neural-symbolic framework used to extend supervised learning with logic rules. Section 5 presents the main results of the paper, showing the link between t-norm generators and loss functions and how these can be exploited in neural-symbolic approaches. Section 6 presents the experimental results, and a discussion on the presented methodology is provided in Section 6.2. Finally, Section 8 draws some conclusions.

Related Works
Neural-symbolic approaches [9,10] aim at combining symbolic reasoning with (deep) neural networks, e.g. by exploiting additional logic knowledge when available. This knowledge can be either injected into the learner's internal structure (e.g. by constraining the network architecture) or enforced on the learner's outputs (e.g. by adding new loss terms). In this context, First-Order Logic is commonly chosen as the declarative framework to represent the knowledge because of its flexibility and expressive power. NeSy methodologies are rooted in previous work on Statistical Relational Learning (SRL) [3,11], which developed frameworks for performing logic inference in the presence of uncertainty. For instance, Markov Logic Networks (MLN) [12] and Probabilistic Soft Logic (PSL) [13] integrate FOL and probabilistic graphical models by using the logic rules as potential functions defining a probability distribution. MLNs have received a lot of attention from the SRL community [14][15][16] and have been widely used in different tasks like information extraction, entity resolution and text mining [17,18]. More recently, MLNs have also been extended to work with neural potential functions in [19], showing impressive results e.g. in generating molecular data. PSL can be considered a fuzzy extension of MLNs, as it exploits a fuzzy relaxation of the logic potentials by using Lukasiewicz logic.
The framework proposed in this paper builds upon t-norm fuzzy logics; however, it is not limited to any specific t-norm. Hence, it could also be adopted to define alternative logic potential functions for PSL.
A common solution to integrate logic reasoning and deep learning relies on using deep neural networks to approximate the truth values (i.e. fuzzy semantics) or the probabilities (i.e. probabilistic semantics) of certain target predicates, and then applying logic or probabilistic inference on the network outputs [20]. In the former case, the logic rules can be relaxed according to a differentiable fuzzy logic and then the overall architecture can be optimized end-to-end. This approach is followed, with minor variants, by Semantic-Based Regularization (SBR) [5], Lyrics [21] and Logic Tensor Networks (LTN) [6], especially for classification problems. On the other hand, some examples of NeSy approaches based on probabilistic logic are given by Semantic Loss [22], Differentiable Reasoning [23], Deep Logic Models [24], Relational Neural Machines [25] and DeepProbLog [26]. Similarly, Lifted Relational Neural Networks [27] and Neural Theorem Provers [28,29] realize a soft forward or backward chaining via an end-to-end gradient-based scheme. This paper investigates the link between the logic semantics selected to represent the knowledge and the loss function used in the learning task. This is a common problem for all NeSy approaches that encode the logic knowledge into differentiable constraints used by a deep learner.

Learning with Fuzzy Logic Constraints
In general, if some FOL knowledge is available for a learning problem, it is expressed in Boolean form. To define a differentiable learning objective, it is then fundamental to establish a mapping to relax the logic formulas into differentiable functional constraints by means of an appropriate fuzzy logic. For instance, Serafini et al. [30] introduce a learning framework where the formulas are converted according to the t-norm and t-conorm of Lukasiewicz logic. Giannini et al. [31] also propose to convert the formulas according to Lukasiewicz logic; however, they exploit the weak conjunction in place of the t-norm, thus guaranteeing convex functional constraints. A more empirical approach has been considered in SBR, where all the fundamental t-norms have been evaluated on different learning settings to select the best t-norm for the single tasks [5]. More recent studies on the learning properties of different fuzzy logic operators have also been proposed by Van Krieken et al. [32,33]. By combining different logic semantics for the connectives, the authors achieved the most significant performance improvement, but the resulting combination of connectives no longer obeys any specific logic theory.
The relaxation of logic quantifiers has also been the subject of a wide range of studies. On the performance side, different quantifier conversions have been taken into account and validated. For instance, in Diligenti et al. [5] the arithmetic mean and the maximum operator have been used to convert the universal and existential quantifiers, respectively. Different possibilities have been considered for the universal quantifier in Donadello et al. [34], while the existential quantifier depends on this choice via the application of the strong negation using the De Morgan law. However, the arithmetic mean operator has been shown to achieve better performances in the conversion of the universal quantifier [34], with the existential quantifier implemented by Skolemization. In spite of improving the performances, the universal and existential quantifiers should be thought of as a generalized AND and OR, respectively. Therefore, converting these quantifiers using a mean operator has no direct justification inside a logic theory, and spoils the original semantics.
There have been a few attempts in the literature to address the problem of choosing semantically driven loss functions to enforce the satisfaction of the logic constraints. However, these works are generally not fully semantically coherent or are too specific. A unified principle to select a suitable loss function that can be logically interpreted according to the adopted fuzzy logic semantics is still missing. For instance, both SBR [5] and LTN [30] rely on minimizing the strong negation of each logic constraint, whereas Lyrics [21] also allows the usage of the negative logarithm. A different perspective is considered in Semantic Loss [22], where the authors propose a new loss function that is very close to the negative logarithm one and that is able to achieve (near) state-of-the-art performances on semi-supervised learning tasks, by combining neural networks and logic constraints. In this paper, we show that these loss functions (and infinitely many more) are special cases of t-norm generators that can be uniquely determined by the choice of a fuzzy logic relaxation.

Background on T-Norm Fuzzy Logic
Many-valued logics have been introduced in order to extend the admissible set of truth values from true (1) and false (0) to a scale of truth degrees having absolutely true and absolutely false as boundary cases. A fuzzy logic is a many-valued logic whose set of truth values coincides with the real unit interval [0, 1]. This section introduces the basic notions of fuzzy logic together with some illustrative examples.
T-norms [35] are a special kind of binary operations on the real unit interval [0, 1], representing an extension of the Boolean conjunction.
A fuzzy logic can be uniquely defined according to the choice of a certain t-norm T [36]. A wide variety of operations corresponding to different fuzzy logic connectives are defined starting from T and the strong negation ¬, and their notation is introduced in Definition 2. Table 1 reports the algebraic semantics of these connectives for Gödel, Lukasiewicz and Product logics, which are referred to as the fundamental fuzzy logics, because all the continuous t-norms can be obtained from them by ordinal sums [37].
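As an illustrative sketch (not part of the original formulation), the three fundamental t-norms of Table 1 and the strong negation can be written in a few lines of Python; the function names are hypothetical:

```python
# Minimal sketch of the three fundamental t-norms (Table 1),
# together with the strong negation n(x) = 1 - x.

def godel(x, y):        # idempotent: T(x, x) = x for all x
    return min(x, y)

def lukasiewicz(x, y):  # nilpotent: can reach 0 even for x, y > 0
    return max(0.0, x + y - 1.0)

def product(x, y):      # strict: positive whenever x, y > 0
    return x * y

def neg(x):             # strong negation
    return 1.0 - x

# All three coincide with the Boolean conjunction on {0, 1}:
for t in (godel, lukasiewicz, product):
    assert t(1, 1) == 1 and t(1, 0) == 0 and t(0, 0) == 0
```

On intermediate truth degrees the three operators diverge, e.g. for x = 0.4, y = 0.7 they return 0.4, 0.1 and 0.28, respectively, illustrating why the choice of t-norm matters for the relaxation.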

Archimedean T-Norms
Continuous Archimedean t-norms [35] are special t-norms that can be constructed by means of unary monotone functions, called generators.
A continuous Archimedean t-norm T is said to be strict if for all x ∈ (0, 1) we have 0 < T(x, x) < x; otherwise it is said to be nilpotent.
For example, the Lukasiewicz (T_L) and Product (T_P) t-norms are nilpotent and strict, respectively, while the Gödel (T_G) t-norm is idempotent (i.e. ∀x : T_G(x, x) = x) and hence not even Archimedean. In addition, all the nilpotent and strict t-norms can be related to the Lukasiewicz and Product t-norms as follows.
Theorem 1 ([35]). Any nilpotent t-norm is isomorphic to T_L and any strict t-norm is isomorphic to T_P.
The next theorem shows how to construct t-norms by additive generators [35], i.e. strictly decreasing continuous functions g : [0, 1] → [0, +∞] with g(1) = 0. Given such a generator, the corresponding t-norm is defined as:

T(x, y) = g⁻¹(min(g(0), g(x) + g(y)))    (1)

where g⁻¹ denotes the (pseudo-)inverse of g; T is strict when g(0) = +∞ and nilpotent when g(0) < +∞. According to Equation (1), the other fuzzy logic connectives deriving from the t-norm can be expressed with respect to the generator. For instance, the residuum and the strong disjunction become:

x ⇒ y = g⁻¹(max(0, g(y) − g(x))),    x ⊕ y = 1 − g⁻¹(min(g(0), g(1 − x) + g(1 − y)))    (2)
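The additive-generator construction of Equation (1) can be sketched directly in code. The helper name `t_norm_from_generator` is hypothetical, and the pseudo-inverse and g(0) are passed explicitly for simplicity; the checks confirm that g(x) = −log(x) recovers the Product t-norm and g(x) = 1 − x the Lukasiewicz one:

```python
import math

def t_norm_from_generator(g, g_inv, g0):
    """Builds T(x, y) = g^{-1}(min(g(0), g(x) + g(y))) from an additive generator."""
    def T(x, y):
        return g_inv(min(g0, g(x) + g(y)))
    return T

# g(x) = -log(x) is a strict generator (g(0) = +inf) ...
T_P = t_norm_from_generator(lambda x: -math.log(x), lambda s: math.exp(-s), math.inf)
# ... while g(x) = 1 - x is nilpotent (g(0) = 1).
T_L = t_norm_from_generator(lambda x: 1.0 - x, lambda s: 1.0 - s, 1.0)

assert abs(T_P(0.5, 0.4) - 0.2) < 1e-9   # recovers x * y
assert abs(T_L(0.5, 0.4) - 0.0) < 1e-9   # recovers max(0, x + y - 1)
assert abs(T_L(0.8, 0.7) - 0.5) < 1e-9
```

The `min(g0, ...)` truncation is what distinguishes nilpotent generators (finite g(0), so the sum can saturate) from strict ones (g(0) = +∞, so it never does).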

Parameterized Classes of T-Norms
T-norm generators can also depend on a parameter, consequently defining a parameterized class of t-norms. For instance, given a generator g of a t-norm T and λ > 0, T_λ denotes the class of t-norms corresponding to the generator functions g_λ(x) = (g(x))^λ, which is increasing with respect to λ. In addition, let T_D and T_G denote the Drastic and Gödel t-norms, respectively, where T_D(x, y) = min(x, y) if max(x, y) = 1 and T_D(x, y) = 0 otherwise; we get:

lim_{λ→0} T_λ = T_D,    lim_{λ→+∞} T_λ = T_G.

Over the years, several parameterized families of t-norms have been introduced and studied in the literature [35,38]. In the following, we recall some prominent examples that we will exploit in the experimental evaluation.
Definition 4 (The Schweizer-Sklar family). For λ ∈ [−∞, +∞], consider the generators:

g^SS_λ(x) = (1 − x^λ)/λ for λ ≠ 0,    g^SS_0(x) = −log(x).

The t-norms corresponding to this generator are called Schweizer-Sklar t-norms, and they are defined according to:

T^SS_λ(x, y) = T_G(x, y) if λ = −∞;  (x^λ + y^λ − 1)^{1/λ} if −∞ < λ < 0;  T_P(x, y) if λ = 0;  (max(0, x^λ + y^λ − 1))^{1/λ} if 0 < λ < +∞;  T_D(x, y) if λ = +∞.

The Schweizer-Sklar t-norm T^SS_λ is Archimedean if and only if λ > −∞, continuous if and only if λ < +∞, strict if and only if −∞ < λ ≤ 0 and nilpotent if and only if 0 < λ < +∞. This t-norm family is strictly decreasing and continuous with respect to λ ∈ [−∞, +∞].

Definition 5 (The Frank family). For λ ∈ [0, +∞], consider the generators:

g^F_λ(x) = −log((λ^x − 1)/(λ − 1)) for λ ∉ {0, 1, +∞},    with g^F_1(x) = −log(x).

The t-norms corresponding to this generator are called Frank t-norms and they are strict for λ ∈ (0, +∞). The overall class of Frank t-norms is decreasing and continuous with respect to λ.
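A minimal numerical sketch of the Schweizer-Sklar family, treating λ = 0 as the Product limit; the checks below illustrate how the family interpolates between the fundamental t-norms (λ = 1 coincides with Lukasiewicz, small |λ| approaches Product, large negative λ approaches Gödel):

```python
def schweizer_sklar(x, y, lam):
    """Schweizer-Sklar t-norm T^SS_lam on (0, 1]; lam = 0 handled as the Product limit."""
    if lam == 0.0:
        return x * y
    base = x ** lam + y ** lam - 1.0
    if lam > 0:
        base = max(0.0, base)   # truncation: the t-norm is nilpotent for lam > 0
    return base ** (1.0 / lam)

# lam = 1 is Lukasiewicz, lam -> 0 approaches Product, lam -> -inf approaches Godel.
assert abs(schweizer_sklar(0.6, 0.7, 1.0) - max(0.0, 0.6 + 0.7 - 1.0)) < 1e-12
assert abs(schweizer_sklar(0.6, 0.7, 1e-6) - 0.42) < 1e-4
assert abs(schweizer_sklar(0.6, 0.7, -40.0) - 0.6) < 1e-3
```

Exposing λ as a tunable knob is what makes such families attractive in the experimental evaluation: a single scalar moves the induced loss continuously between qualitatively different behaviors.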

Background on the Integration of Learning and Logic Reasoning
According to the learning from logical constraints paradigm [20], the available prior knowledge is represented by a set of logic rules, which are relaxed into continuous and differentiable constraints over the task functions (implementing the FOL predicates). Positive and negative supervised samples can also be seen as atomic constraints, and the learning process corresponds to finding the task functions that best satisfy the constraints.

Example 3. Let us assume that the prior knowledge for an image classification task is expressed by the following sentences: "lions live in savanna or in zoos" and "there are no walls in the savanna" (see Figure 1). This domain knowledge can be represented in FOL as ∀x Lion(x) → LiveIn(x, savanna) ∨ LiveIn(x, zoo) and ∀x Wall(x) → ¬LiveIn(x, savanna), being Lion, Wall two unary predicates, LiveIn a binary predicate and savanna, zoo two constants. If a neural classifier is able to correctly detect the presence of a lion and a wall in Figure 1, it is also able to establish that the lion is living in a zoo by exploiting the symbolic knowledge.
In the following, we introduce more formally the framework where our work takes place. Let us consider a multi-task learning problem where B_P = (P_1, . . ., P_J) denotes the vector of real-valued functions (task functions) to be determined. Given the set X ⊆ R^n of available data, a supervised learning problem can be generally formulated as min_{B_P} L(X, B_P), where L is a positive-valued functional denoting a certain loss. In our framework, we assume that the task functions are FOL predicates and all the available knowledge about these predicates, including supervisions, is collected into a knowledge base KB = {ψ_1, . . ., ψ_H} of FOL formulas. The learning task then amounts to finding the task functions that best satisfy all the formulas in KB. The link between FOL knowledge and learning was also presented e.g. in [21] and it can be summarized as follows.
• Each individual is an element of a specific domain, which can be used to ground the predicates defined on such a domain. Any replacement of variables with individuals for a certain predicate is called a grounding.
• Predicates express the truth degree of some property for an individual (unary predicate) or a group of individuals (n-ary predicate). In particular, this paper will focus on learnable predicate functions implemented by (deep) neural networks, but other models can also be used. FOL functions can be included and learned in a similar fashion [39]. However, in this presentation, function-free FOL is used to keep the notation simpler.
Fig. 2 The expression tree corresponding to ∀x Wall(x) → ¬LiveIn(x, savanna) for the domain of constants X = {x_1, x_2}.
• Knowledge Base (KB) is a collection of FOL formulas expressing the learning task. The integration of learning and logical reasoning is achieved by compiling the logical rules into continuous real-valued constraints, correlating all the defined elements and enforcing some expected behavior on them.
Given any rule in KB, individuals, predicates, logical connectives and quantifiers can all be seen as nodes of an expression tree [40]. Then, the translation into a functional constraint corresponds to a post-fix visit of the expression tree, consisting of the following steps:
• visiting a variable substitutes the variable with the corresponding feature representation of the individual to which the variable is currently assigned;
• visiting a predicate computes the output of the predicate with the current input groundings;
• visiting a connective combines the grounded predicate values by means of the real-valued operation associated to the connective;
• visiting a quantifier aggregates the outputs of the expressions obtained for the single individuals (variable groundings).
Thus, the compilation of the expression tree allows us to convert a formula into a real-valued function, represented by a computational graph. The different functions corresponding to predicates are composed (i.e. aggregated) by means of the truth-functions corresponding to connectives and quantifiers. Given a formula ϕ, we denote by f_ϕ its corresponding real-valued functional representation; f_ϕ tightly depends on the chosen t-norm driving the fuzzy relaxation. The expression tree corresponding to the FOL formula ∀x Wall(x) → ¬LiveIn(x, savanna) is reported in Figure 2 as an example.
A special note concerns quantifiers. They aggregate the truth-values of predicates over their corresponding domains. For instance, according to [41], that first proposed a fuzzy generalization of FOL, the universal and existential quantifiers may be converted as the infimum and supremum over a domain variable (coinciding with minimum and maximum when dealing with finite domains). In particular, given a formula ϕ(x) depending on a certain variable x ∈ X, where X denotes the finite set of available samples for one of the involved predicates in ϕ, the fuzzy semantics of the quantifiers is given by:

∀x ϕ(x) → min_{x∈X} f_ϕ(x),    ∃x ϕ(x) → max_{x∈X} f_ϕ(x).

As shown in the next section, this quantifier relaxation is not convenient for all the t-norms, and we propose a more principled approach for the translation.
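As a toy illustration of the post-fix compilation and of the min-based semantics of the universal quantifier, the rule of Figure 2 can be grounded by hand. The truth degrees below are placeholders for neural network outputs, and the Lukasiewicz residuum is chosen arbitrarily for the sketch:

```python
# Toy compilation of "forall x: Wall(x) -> not LiveIn(x, savanna)" following
# the post-fix visit of the expression tree described above.

def luk_implies(a, b):          # Lukasiewicz residuum for the connective ->
    return min(1.0, 1.0 - a + b)

def strong_neg(a):              # strong negation for "not"
    return 1.0 - a

# Hypothetical groundings for the two individuals x1, x2 of Figure 2.
wall    = {"x1": 0.9, "x2": 0.1}   # degree to which each image shows a wall
live_in = {"x1": 0.2, "x2": 0.8}   # degree of LiveIn(x, savanna)

domain = ["x1", "x2"]
# connective level: evaluate the grounded body for each individual ...
grounded = [luk_implies(wall[x], strong_neg(live_in[x])) for x in domain]
# ... quantifier level: the universal quantifier aggregates as the minimum
f_phi = min(grounded)
```

Here `grounded` is [0.9, 1.0], so the formula's overall truth degree `f_phi` is 0.9, driven by the worst-satisfied individual, which is exactly the min semantics of ∀.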
Once all the formulas in KB are converted into real-valued functions, their distance from satisfaction (i.e. distance from 1-evaluation) can be computed according to a certain decreasing mapping L expressing the penalty for the violation of any constraint. In order to satisfy all the constraints, the learning problem can be formulated as the joint minimization over the single rules using the following loss function factorization:

L(X, B_P) = Σ_{ψ∈KB} β_ψ L(f_ψ(X, B_P))    (3)

Here, any β_ψ denotes the weight for the logical constraint ψ in the KB, which can be selected via cross-validation or jointly learned [24,42], f_ψ is the functional representation of the formula ψ according to a certain t-norm fuzzy logic and L is a decreasing function denoting the penalty associated to the distance from satisfaction of formulas, so that L(1) = 0. As described in Section 2, in this neural-symbolic scenario all the steps involved in the translation of FOL formulas into a loss function are treated separately, involving very heterogeneous choices. In the next section, we show instead that these steps are intrinsically connected and can be uniformly derived from a unique global choice: the selection of a t-norm generator.

Loss Functions by T-Norms Generators
This section presents a generalization of the approach introduced in [8], which was limited to supervised learning. In this paper, we present a unified principle to translate the fuzzy relaxation of FOL formulas into the loss function of general machine learning tasks. In particular, we study the mapping of FOL formulas into functional constraints by means of continuous Archimedean t-norm fuzzy logics. We adopt the t-norm generator to penalize the violation of the constraints, i.e. we take L = g. Moreover, since the quantifiers can be seen as generalized AND and OR over the grounded expressions (see Remark 1), we show that by adopting the same fuzzy conversion for connectives and quantifiers, the overall loss function expressed in Equation 3 only depends on the chosen t-norm generator g.

Remark 1. Given a formula ϕ(x) defined on the available set of samples X = {x_1, . . ., x_N}, the roles of the quantifiers have to be interpreted as follows:

∀x ϕ(x) ≡ ϕ(x_1) AND . . . AND ϕ(x_N),    ∃x ϕ(x) ≡ ϕ(x_1) OR . . . OR ϕ(x_N)

General Formulas
Given a certain formula ϕ(x) depending on a variable x that ranges in the set X and its corresponding functional representation f_ϕ(x, B_P), the conversion of any universal quantifier may be carried out by means of an Archimedean t-norm T, while the existential quantifier by a t-conorm. For instance, given the formula ψ = ∀x ϕ(x), we have:

f_ψ(X, B_P) = g⁻¹(min(g(0), Σ_{x∈X} g(f_ϕ(x, B_P))))    (4)

where g is a generator of the t-norm T. Since any generator function g is decreasing and g(1) = 0, a generator is a suitable choice to map the fuzzy conversion of a formula into a constraint loss to be minimized. By exploiting the same generator of T as loss function (i.e. taking L = g) for ψ = ∀x ϕ(x) expressed by Equation 4, we get the following term L(f_ψ(X, B_P)) to be minimized:

L(f_ψ(X, B_P)) = min(g(0), Σ_{x∈X} g(f_ϕ(x, B_P)))    (5)

As a consequence, the following result can be provided with respect to the convexity of the loss L(f_ψ(X, B_P)).
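A sketch of this loss for ψ = ∀x ϕ(x) with a strict generator, where g(g⁻¹(s)) = s makes the penalty collapse to a plain sum of generator values over the groundings (the truth degrees below are hypothetical stand-ins for f_ϕ(x, B_P)):

```python
import math

# For a strict generator, the penalty of "forall x phi(x)" with L = g reduces
# to the sum of g over the grounded formula values.

def forall_loss(g, truth_degrees):
    return sum(g(t) for t in truth_degrees)

truths = [0.9, 0.8, 0.95]   # hypothetical values of f_phi(x, B_P)

loss_product = forall_loss(lambda t: -math.log(t), truths)  # Product generator
loss_luka    = forall_loss(lambda t: 1.0 - t, truths)       # Lukasiewicz generator

assert abs(loss_luka - (0.1 + 0.2 + 0.05)) < 1e-9
# With g = -log, the loss equals -log of the product of the truth degrees:
assert abs(loss_product + math.log(0.9 * 0.8 * 0.95)) < 1e-9
```

Note that for a nilpotent generator the sum would additionally be truncated at g(0), as in Equation (5).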
Proposition 3. If g is a linear function and f_ψ is concave, then L(f_ψ(X, B_P)) is convex. If g is a convex function and f_ψ is linear, then L(f_ψ(X, B_P)) is convex.
Proof. Both arguments follow since, if f_ψ is concave (we recall that a linear function is both concave and convex) and g is a convex non-increasing function defined over a univariate domain, then g ∘ f_ψ is convex.
Proposition 3 establishes a general criterion to define convex constraints according to a certain generator, depending on the fuzzy conversion f_ψ and, in turn, on the logical expression ψ. In the remainder of this section, we show some application cases of this proposition. So far, we did not make any hypothesis on the formula ϕ. In the following, different cases of interest for the main connective of ϕ are reported. Given an additive generator g for a t-norm T, additional connectives may be expressed with respect to g, as reported by Equation 2. If P_1, P_2 are two unary predicate functions sharing the same input domain X, the following formulas yield the corresponding penalty terms, where we supposed T strict for simplicity:

∀x P_1(x) ⊗ P_2(x):    L = Σ_{x∈X} (g(P_1(x)) + g(P_2(x)))
∀x P_1(x) ⇒ P_2(x):    L = Σ_{x∈X} max(0, g(P_2(x)) − g(P_1(x)))

Examples of Derived Losses
According to the selection of the generator, the same FOL formula can be mapped to different loss functions.This enables us to design customized losses that are more suitable for a specific learning problem, or to provide a theoretical justification to the losses that are already commonly utilized by the machine learning community.Examples 5-8 show some application cases.In particular, also the cross-entropy loss (see Example 6) can be justified under the same logical perspective.
Example 5. If g(x) = 1 − x, we get the Lukasiewicz t-norm, which is nilpotent. Hence, from Equation 5 we get:

L(f_ψ(X, B_P)) = min(1, Σ_{x∈X} (1 − f_ϕ(x, B_P)))

In case f_ψ is concave (e.g. if ψ belongs to the concave fragment of Lukasiewicz logic [31]), this function is convex.

Example 6. If g(x) = −log(x), we get the Product t-norm, which is strict. From Equation 5 we get a generalization of the cross-entropy loss:

L(f_ψ(X, B_P)) = −Σ_{x∈X} log(f_ϕ(x, B_P))

In case f_ϕ(x) is linear (e.g. a literal), this function is convex.
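The supervised special case of Example 6 can be checked numerically: when the constraints are the atomic supervisions "P_{y_i}(x_i) is true" and the generator is g(x) = −log(x), the induced penalty coincides with the standard cross-entropy. The softmax outputs below are hypothetical placeholders:

```python
import math

# With g(x) = -log(x), one atomic supervision constraint per labeled example
# yields L = -sum_i log p_i[y_i], i.e. exactly the cross-entropy loss.

def supervised_loss_from_generator(probs, labels, g):
    return sum(g(p[y]) for p, y in zip(probs, labels))

probs  = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]   # hypothetical softmax outputs
labels = [0, 1]                               # supervised class indices

loss_g  = supervised_loss_from_generator(probs, labels, lambda t: -math.log(t))
loss_ce = -(math.log(0.7) + math.log(0.8))    # cross-entropy, computed directly

assert abs(loss_g - loss_ce) < 1e-12
```

Swapping the lambda for `lambda t: 1.0 - t` gives the Lukasiewicz counterpart of Example 5 on the same supervisions, which is how the framework lets the generator choice change the supervised loss without touching the knowledge representation.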
Example 7. If g(x) = 1/x − 1, with corresponding strict t-norm T(x, y) = xy/(x + y − xy), the penalty term that is obtained applying g to the formula ψ = ∀x ϕ(x) is:

L(f_ψ(X, B_P)) = Σ_{x∈X} (1/f_ϕ(x, B_P) − 1)

Simplification Property
An interesting property of the presented formulation consists in the fact that, in case of compound formulas, several occurrences of the generator may be simplified. For instance, the conversion f_ψ(X, B_P) of the formula ψ = ∀x (P_1(x) ⊗ P_2(x)) ⇒ P_3(x) with respect to the selection of a strict t-norm generator g becomes:

f_ψ(X, B_P) = g⁻¹(Σ_{x∈X} g(g⁻¹(max(0, g(P_3(x)) − g(P_1(x)) − g(P_2(x))))))
            = g⁻¹(Σ_{x∈X} max(0, g(P_3(x)) − g(P_1(x)) − g(P_2(x))))

The simplification expressed on the lower side is general and can be applied to a wide range of logical operators, reducing the required number of applications of g⁻¹ to just the one in front of the expression. In these cases, by applying L = g, the overall penalty of the formula can be determined by just evaluating g on the predicate functions and without applying g⁻¹. Since g and g⁻¹ can in general be affected by numerical issues (e.g. g = −log), this property may allow the implementation of more numerically stable loss functions, totally preserving the initial semantics of the formula. However, this property does not hold for all the connectives that are definable upon a certain generated t-norm (see Definition 2). For instance, ∀x P_1(x) ⊕ P_2(x) becomes:

f_ψ(X, B_P) = g⁻¹(Σ_{x∈X} g(1 − g⁻¹(g(1 − P_1(x)) + g(1 − P_2(x)))))

where the inner occurrence of g⁻¹ cannot be simplified. This suggests to identify the connectives that allow, on one hand, the simplification of any occurrence of g⁻¹ in L(f_ψ(X, B_P)) and, on the other hand, the evaluation of g only on grounded predicates. For short, in the following we say that the formulas built upon such connectives have the simplification property.
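The numerical motivation can be made concrete with g(x) = −log(x): materializing g⁻¹ on a large aggregated sum underflows to 0.0, so re-applying g afterwards is undefined, while the simplified form never leaves log-space. The truth degrees are hypothetical:

```python
import math

g     = lambda x: -math.log(x)   # strict generator of the Product t-norm
g_inv = lambda s: math.exp(-s)   # its inverse

truths = [1e-4] * 200            # 200 groundings with small truth degrees

s = sum(g(t) for t in truths)    # about 1842: perfectly representable in log-space
assert g_inv(s) == 0.0           # but exp(-1842) underflows to exactly 0.0,
                                 # so g(g_inv(s)) = -log(0) would be undefined
assert abs(s - 200 * g(1e-4)) < 1e-8   # the simplified loss is computed stably
```

This is the same reason standard deep learning losses are computed from logits in log-space rather than by exponentiating and re-taking logarithms.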
Lemma 1. Any FOL formula ϕ, whose connectives belong to {∧, ∨, ⊗, ⇒, ∼, ⇔}, has the simplification property.

Proof The proof is by induction with respect to the number l ≥ 0 of connectives occurring in ϕ.
• If l = 0, i.e. ϕ = P_j(x_i) for a certain j ≤ J and x_i ∈ X, then g(f_ϕ) = g(P_j(x_i)). Hence ϕ has the simplification property.
• If l = k + 1, then ϕ is obtained by combining two sub-formulas α, β through a connective in {∧, ∨, ⊗, ⇒, ∼, ⇔} and we have the following cases.
- If ϕ = α ∧ β, then g(f_ϕ) = g(min(f_α, f_β)) = max(g(f_α), g(f_β)). The claim follows by an inductive hypothesis on α, β, whose number of involved connectives is less than or equal to k.
The argument still holds replacing ∧ with ∨ and min with max.
As in the previous case, the claim follows by inductive hypothesis on α, β.
- The remaining cases can be treated in the same way, noting that ∼α = α ⇒ 0.
The simplification property provides several advantages from an implementation point of view. First, it allows the evaluation of the generator function only on grounded predicate expressions and avoids an explicit computation of the pseudo-inverse g⁻¹. Second, this property provides a general method to implement n-ary t-norms, of which universal quantifiers can be seen as a special case, since we only deal with finite domains (see Section 6.2). Moreover, it is worth noticing that this property does not rely on specific assumptions on the neural models adopted to implement the predicate functions, nor on the chosen fuzzy logic exploited for the relaxation. As a result, Lemma 1 can be applied in a wide range of cases.
Finally, the simplification property yields an interesting analogy between truth-functions and loss functions. In logic, the truth degree of a formula is obtained by combining the truth degrees of its sub-formulas by means of connectives and quantifiers. In the same way, the loss corresponding to a formula that satisfies the simplification property is obtained by combining the losses corresponding to its sub-formulas, with connectives and quantifiers combining losses rather than truth degrees.

Manifold Regularization: an example
Let us consider a simple multi-task classification problem where two objects A, B must be detected in a set of input images I, represented as a set of features.The learning task consists in determining the predicates P A (i), P B (i), which return true if and only if the input image i is predicted to contain the object A, B, respectively.The positive supervised examples are provided as two sets (or equivalently their membership functions) P A ⊂ I, P B ⊂ I with the images known to contain the object A, B, respectively.The negative supervised examples for A, B are instead provided as two sets N A ⊂ I, N B ⊂ I. Furthermore, the location where the images have been taken is assumed to be known, and a predicate SameLoc(i 1 , i 2 ) is used to express whether two images i 1 , i 2 have been taken in the same location.Finally, we assume that two images taken in the same location are likely to contain the same object.This knowledge about the environment can be enforced via Manifold Regularization, which regularizes the classifier outputs over the manifold built by the image co-location defined via the SameLoc predicate.
The overall knowledge on this learning task can be expressed using FOL via the statement declarations shown in Table 2, where it was assumed that images i23, i60 have been taken in the same location and it holds that P_A = {i10, i101}, P_B = {i103}, N_A = {i11} and N_B = ∅. The statements define the constraints that the learners must respect on all the available samples, expressed as FOL rules. Please note that also the fitting of the supervisions on specific input images is expressed as constraints.
Given the selection of a strict generator g and a set of images I ⊆ I, the FOL knowledge in Table 2 is compiled into the following optimization task:

min_{B_P}  β_1 Σ_{i∈P_A} g(P_A(i)) + β_2 Σ_{i∈N_A} g(1 − P_A(i)) + β_3 Σ_{i∈P_B} g(P_B(i)) + β_4 Σ_{i∈N_B} g(1 − P_B(i)) + β_5 Σ_{(i_1,i_2)∈I_sl} |g(P_A(i_1)) − g(P_A(i_2))| + β_6 Σ_{(i_1,i_2)∈I_sl} |g(P_B(i_1)) − g(P_B(i_2))|

where B_P = {P_A, P_B}, each β_i is a meta-parameter deciding how strongly the i-th contribution should be weighted, and I_sl is the set of image pairs having the same location. The first four elements of the cost function express the fitting of the supervised data, while the latter two express manifold regularization over co-located images.
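A minimal sketch of such a compiled objective with the Product generator g(x) = −log(x). All predicate outputs are placeholder numbers standing in for network predictions, the β weights are set to 1 for brevity, and the co-location penalty |g(P(i)) − g(P(j))| is the generator form of the biresiduum for a strict t-norm:

```python
import math

g = lambda x: -math.log(max(x, 1e-12))   # clipped for numerical safety

# Placeholder predictions for P_A and P_B on the images of Table 2.
P_A = {"i10": 0.9, "i101": 0.8, "i11": 0.2, "i23": 0.6, "i60": 0.5, "i103": 0.3}
P_B = {"i103": 0.7, "i23": 0.4, "i60": 0.4}

pos_A, neg_A = ["i10", "i101"], ["i11"]
pos_B, neg_B = ["i103"], []              # N_B is empty
same_loc = [("i23", "i60")]              # I_sl

loss  = sum(g(P_A[i]) for i in pos_A)        # fit positive examples of A
loss += sum(g(1 - P_A[i]) for i in neg_A)    # fit negative examples of A
loss += sum(g(P_B[i]) for i in pos_B)        # fit positive examples of B
loss += sum(g(1 - P_B[i]) for i in neg_B)    # empty sum contributes 0
# manifold term: penalize disagreement of each predicate on co-located pairs
loss += sum(abs(g(P_A[i]) - g(P_A[j])) + abs(g(P_B[i]) - g(P_B[j]))
            for i, j in same_loc)
```

In an actual system the dictionaries would be replaced by differentiable model outputs, so minimizing `loss` by gradient descent fits the supervisions while pulling co-located predictions together.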

Experimental Results
The experimental results have been carried out using the Deep Fuzzy Logic (DFL) software, which allows us to inject prior knowledge in the form of a set of FOL formulas into a machine learning task. The formulas are compiled into differentiable constraints using the theory of generators as described in the previous sections. The learning task is then cast into an optimization problem as shown in Section 5.3 and, finally, optimized using the TensorFlow (TF) environment [43]. In the following, it is assumed that each FOL constant corresponds to a tensor storing its feature representation. Predicates are mapped to generic functions in the TF computational graph. If the function does not contain any learnable parameter in the graph, it is said to be given; otherwise the function/predicate is said to be learnable, and its parameters will be optimized to maximize the constraints satisfaction. Please note that any learner expressed as a TF computational graph can be transparently incorporated into DFL.

The Learning Task
The CiteSeer dataset [44] consists of 3312 scientific papers, each assigned to one of six classes: Agents, AI, DB, IR, ML and HCI. The papers are not independent, as they are connected by a citation network with 4732 links. This dataset defines a relational learning benchmark, where it is assumed that the representation of an input document is not sufficient for its classification without exploiting the citation network. The citation network can be used to inject useful information into the learning task, as it is often true that two papers connected by a citation belong to the same category. This knowledge can be expressed by a general rule of the form ∀x ∀y Cite(x, y) ⇒ (P(x) ⇔ P(y)), where Cite is a binary predicate encoding the fact that x cites y and P is a task function implementing the membership function of one of the six considered categories. This logical formula expresses a form of manifold regularization, which often emerges in relational learning tasks. Indeed, by linking the predictions on two distinct documents, the behavior of the underlying task functions is regularized, enforcing smooth transitions over the manifold induced by the Cite relation. Each paper is represented via its bag-of-words, a vector having the same size as the vocabulary, with the i-th element equal to 1 or 0 depending on whether the i-th word of the vocabulary is present or absent in the document. The dictionary in this task consists of 3703 unique words. The set of input document representations is indicated by X, which is split into a train set X_tr and a test set X_te; the percentage of documents in the two splits is varied across the different experiments. The six task functions P_i, with i ∈ {Agents, AI, DB, IR, ML, HCI}, are bound to the six outputs of a Multi-Layer Perceptron (MLP) implemented in TF. The neural architecture has 3 hidden layers, with 100 ReLU units each, and a softmax activation on the output. Therefore, the task functions share the weights of the hidden layers, so that all of them can exploit a common hidden representation. The Cite predicate is a given function, which outputs 1 if the document passed as first argument cites the document passed as second argument, and 0 otherwise. Furthermore, an additional given predicate P̄_i is defined for each P_i, such that it outputs 1 if and only if x is a positive example for the category i (i.e. it belongs to that category). P̄_i is a supervision predicate, which easily allows a supervised signal to be introduced using FOL (Section 5.1). A manifold regularization learning problem [46] can be defined by providing, ∀i ∈ {Agents, AI, DB, IR, ML, HCI}, the following two FOL formulas:

∀x ∀y Cite(x, y) ⇒ (P_i(x) ⇔ P_i(y))   (6)
∀x P̄_i(x) ⇒ P_i(x)   (7)

where only positive supervisions have been provided, because the trained networks employ a softmax activation on the output layer, which imposes mutual exclusivity among the task functions, reinforcing the positive class and discouraging all the others. DFL allows the user to specify the weights of the formulas, which are treated as hyperparameters. Since two formulas per predicate are used, the weight of the formula expressing the fitting of the supervisions (Equation 7) is set to a fixed value equal to 1, while the weight of the manifold regularization rule (Equation 6) is cross-validated over the grid {0.1, 0.01, 0.006, 0.003, 0.001, 0.0001}.
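The derivation of the cross-entropy loss from a strict generator, mentioned in the abstract, can be sketched concretely. Assuming the product t-norm generator g(x) = −log(x), the relaxation of the universal quantifier over the positive supervisions of a class becomes a sum of generator values over the predicate outputs, which is exactly the cross-entropy against the positive class (helper names are illustrative, not the DFL implementation):

```python
import math

def g_product(x):
    # Additive generator of the product t-norm: g(x) = -log(x).
    return -math.log(x)

def supervision_loss(outputs):
    # Relaxation of the supervision formula over the positive examples of a
    # class: the universal quantifier maps to a sum of generator values,
    # which for g(x) = -log(x) is the cross-entropy of the positive class.
    return sum(g_product(p) for p in outputs)

# Predicted memberships P_i(x) for three positive examples of class i.
preds = [0.9, 0.8, 0.99]
loss = supervision_loss(preds)
assert abs(loss - (-math.log(0.9) - math.log(0.8) - math.log(0.99))) < 1e-12
```

A perfectly satisfied supervision (output 1) contributes zero penalty, while the penalty grows unboundedly as the output approaches 0, which is the behavior that counteracts vanishing gradients.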

Results
The experimental results measure different aspects of the integration of the prior logic knowledge into a supervised learning task. In particular, different experiments have been designed to track the speed at which the training process converges to the best solution, and how the classification accuracy changes with a variable amount of training data.

Training Convergence Rate
This experimental setup aims at verifying the relation between the choice of the generator and the convergence speed of the training process. In particular, a simple supervised learning setup is assumed for this experiment, where the learning task enforces the fitting of the supervised examples as defined by Equation 7. The training and test sets are composed of 90% and 10% of the total number of papers, respectively. Two parameterized families of t-norms have been considered: the SS family (Definition 4) and the Frank family (Definition 5). Their parameter λ was varied to recover classical t-norms for some special values of the parameter, but also to evaluate intermediate ones.
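For reference, a common parameterization of the Schweizer–Sklar (SS) additive generator can be written as follows. Definition 4 is not reproduced in this section, so treat the exact form below as an assumption; it nevertheless matches the special cases cited in the results (λ = 0 product, λ = 1 Łukasiewicz, λ < 0 strict, λ > 0 nilpotent):

```python
import math

def g_ss(x, lam):
    # Assumed additive generator of the Schweizer-Sklar family:
    # g(x) = (1 - x**lam) / lam for lam != 0, and -log(x) as lam -> 0.
    # lam < 0 gives strict t-norms, lam = 0 the product t-norm,
    # lam > 0 nilpotent t-norms (lam = 1 is the Lukasiewicz t-norm).
    if lam == 0.0:
        return -math.log(x)
    return (1.0 - x ** lam) / lam

# lam = 0 recovers the cross-entropy penalty; lam = 1 the linear one.
assert abs(g_ss(0.5, 0.0) - math.log(2)) < 1e-12
assert g_ss(0.5, 1.0) == 0.5
# Strict members (lam < 0) penalize highly unsatisfied atoms far more strongly,
# which is the mechanism behind their faster convergence in these experiments.
assert g_ss(0.01, -1.5) > g_ss(0.01, 1.0)
```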
In order to keep a clear intuition behind the results, optimization was initially carried out using a plain gradient descent schema with a fixed learning rate η = 10^−5. Results are shown in Figures 3-a and 3-b: it is evident that strict t-norms tend to learn faster than nilpotent ones, by penalizing highly unsatisfied ground formulas more strongly. This difference remains significant, although slightly reduced, when leveraging the state-of-the-art adaptive learning rate optimizer Adam [45], as shown in Figures 3-c and 3-d. This finding is consistent with the empirically well-known fact that the cross-entropy loss performs well in supervised learning tasks for deep architectures, because it is effective in avoiding vanishing gradients. The cross-entropy loss corresponds to a strict generator with λ = 0 and λ = 1 in the SS and Frank families, respectively. This selection provides a fast and stable converging solution when paired with Adam, while there are faster converging solutions when using a fixed learning rate. Table 3 also shows the test accuracy when the parameter λ of the SS parametric family is selected from the grid {−1.5, −1, 0, 1, 1.5}, where values λ ≤ 0 move across strict t-norms (with λ = 0 being the product t-norm) and values greater than 0 move across nilpotent t-norms (with λ = 1 being the Łukasiewicz t-norm). Strict t-norms seem to provide slightly better performance than nilpotent ones on supervised tasks for the vast majority of the splits. However, this does not hold in manifold regularization learning tasks with a limited number of supervisions, where nilpotent t-norms perform better. An explanation of this behavior can be found in the different nature of the two constraints. Indeed, while supervisions provide hard constraints that need to be strongly satisfied, manifold regularization is a general soft rule, which should allow exceptions. When the number of supervisions is small and manifold regularization drives the learning process, the milder behavior of nilpotent t-norms performs better, as it more closely models the semantics of the prior knowledge. Finally, it is worth noticing that very strict t-norms (e.g. λ = −1.5 in the considered experiment) yield higher standard deviations than the other t-norms, especially in the manifold regularization setup. This provides some evidence of a trade-off between the improved learning speed of strict t-norms and the training instability introduced by their extremely non-linear behavior.

Competitive Evaluation
Table 4 compares the accuracy of the selected neural model (NN), trained only with the supervised constraint, against two other content-based classifiers, namely logistic regression (LR) and Naive Bayes (NB). These baseline classifiers have been compared against collective classification approaches using the citation network data: the Iterative Classification Algorithm (ICA) [47] and Gibbs Sampling (GS) [48], applied on top of the output of the LR and NB content-based classifiers. Furthermore, the results are compared against the two top performers on this task: Loopy Belief Propagation (LBP) [49] and Relaxation Labeling through the Mean-Field Approach (MF) [49]. Finally, the results of DFL were obtained by training the same neural network with both supervision and manifold regularization constraints, using a generator from the SS family with λ = −1. The accuracy values are obtained as an average over 10 folds created by random splits of 90% and 10% of the data for the train and test sets, respectively. Unlike the other relational approaches, which can only be executed at inference time (collective classification), DFL can distill the knowledge into the weights of the neural network. The accuracy results are the highest among all the tested methodologies, in spite of the fact that the neural network trained only on the supervisions performs slightly worse than the other content-based competitors.

Discussion and Practical Implications
The presented framework can be contextualized within a new class of learning frameworks, which exploit the continuous relaxation of FOL to integrate logic knowledge into the learning process [5,6,21,33].

Ease of design and numerical stability
Previous frameworks in this class require an a-priori definition of the operators of a given t-norm fuzzy logic. On the other hand, the presented framework only requires the generator to be defined. This provides two main advantages: minimal design effort and improved numerical stability. Indeed, by exploiting the simplification property, the penalty function (the generator) is applied only to the grounded atoms, while all compositions are performed via numerically stable operators (e.g. min, max, sum). On the contrary, previous FOL relaxations correspond to an arbitrary mix of non-linear operators, which can potentially lead to numerically unstable implementations.

Tensor-based integration
The presented framework provides a fundamental advantage in the integration with tensor-based machine learning frameworks like TensorFlow [43] or PyTorch [50]. Modern deep learning architectures can be effectively trained by leveraging tensor operations performed on Graphics Processing Units (GPUs). However, this ability is conditioned on the possibility of concisely expressing the operators in terms of parallelizable operations, like sums or products over n arguments, which are often implemented as atomic operations in GPU computing frameworks, without resorting to slow iterative procedures. Not all fuzzy logic operators can be easily generalized to their n-ary form. For example, the Łukasiewicz conjunction T_L(x, y) = max{0, x + y − 1} admits the n-ary generalization T_L(x_1, x_2, ..., x_n) = max{0, Σ_{i=1}^n x_i − n + 1}.
On the other hand, the general SS t-norm T_λ^SS(x, y) = (x^λ + y^λ − 1)^(1/λ), with −∞ < λ < 0, does not admit any similarly simple generalization, and the implementation of the n-ary form must resort to an iterative application of the binary form, which is very inefficient in tensor-based computations. Previous frameworks like LTN and SBR had to limit the form of the formulas that can be expressed, or carefully select the t-norms, in order to provide efficient n-ary implementations. The presented framework, instead, can express operators in n-ary form directly in terms of the generators. Thanks to the simplification property, the n-ary operator of any continuous Archimedean t-norm can always be expressed as T(x_1, x_2, ..., x_n) = g^(−1)(min{g(0^+), Σ_{i=1}^n g(x_i)}), and as T(x_1, x_2, ..., x_n) = g^(−1)(Σ_{i=1}^n g(x_i)) if T is strict.
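The equivalence between the iterative binary form and the single-sum n-ary form can be checked numerically with a small sketch. It assumes the strict SS member with λ = −1, whose generator is g(x) = 1/x − 1 with pseudo-inverse g^(−1)(t) = 1/(1 + t):

```python
from functools import reduce

def g(x):
    # Additive generator of the strict SS t-norm with lambda = -1.
    return 1.0 / x - 1.0

def g_inv(t):
    # Pseudo-inverse of g on [0, +inf).
    return 1.0 / (1.0 + t)

def t_binary(x, y):
    # Binary t-norm via the generator: T(x, y) = g_inv(g(x) + g(y)).
    return g_inv(g(x) + g(y))

def t_nary(xs):
    # n-ary form of a strict t-norm: a single sum of generator values,
    # which maps to one parallel reduction in tensor frameworks.
    return g_inv(sum(g(x) for x in xs))

xs = [0.9, 0.7, 0.8, 0.6]
iterative = reduce(t_binary, xs)  # slow, sequential application
direct = t_nary(xs)               # one vectorizable sum
assert abs(iterative - direct) < 1e-12
```

In a real TF or PyTorch implementation the inner sum would become a single `reduce_sum` over a tensor axis, which is the efficiency gain the text refers to.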

Limitations
Linking the loss function to the desired fuzzy semantics via the single choice of the t-norm generator guarantees logical coherence and simplification properties, but it does not guarantee the highest accuracy for a given task. Another limitation is that the approach may not be directly applicable to neural-symbolic models that do not relax the Boolean formulas using t-norm fuzzy logic operators.

Conclusions
This paper presented a framework to embed prior knowledge expressed as logic statements into a learning task, yielding several important contributions. First, we showed how human knowledge in the form of logical rules can be translated into differentiable loss functions used during learning. A critical aspect of our approach is that the translation from logic formulas to loss functions is uniquely defined by the choice of a single operator, i.e. the generator of the corresponding t-norm. This feature clearly distinguishes our approach from the majority of related methods, which are often based on multiple specific choices for each of the fuzzy operators. Second, we showed that the classical loss functions for supervised learning are naturally recovered within the theory, and that parametric t-norm generators allow the definition of entire classes of loss functions with different convergence properties; the choice of the parameter can therefore be guided by the requirements of the specific application. Third, the presented theory has driven the implementation of a general software simulator, called Deep Fuzzy Logic (DFL), which bridges logic reasoning and deep learning using the unifying concept of the t-norm generator as a general abstraction to translate any FOL declarative knowledge into an optimization problem solved in TensorFlow. Finally, we designed and implemented multiple experiments in DFL showing that the proposed method allows the definition of new loss functions with better performance both in terms of accuracy and training efficiency. Furthermore, by seamlessly incorporating logical knowledge, our method outperforms several related works on the task of document classification in citation networks.

Fig. 1
Fig. 1 Image labeled with the presence of a lion and a wall. Classification tasks performed by sub-symbolic models can benefit from logic inference on additional symbolic knowledge.

Fig. 3
Fig. 3 Learning dynamics in terms of test accuracy on a supervised task when choosing different t-norms generated by the parameterized SS and Frank families: (a) and (b) are learning processes optimized with standard gradient descent, while (c) and (d) are optimized with Adam [45].

Table 1
The truth functions for the t-norm, residuum, bi-residuum, weak conjunction, weak disjunction, residual negation, strong negation, t-conorm and material implication of the fundamental fuzzy logics.

Table 2
Example of a learning task expressed using FOL.

Table 4
Comparison of the test accuracy on the CiteSeer dataset obtained by content-based and relational classifiers against supervised and relational learning expressed using DFL. All reported results are computed as averages over 10 random splits of the train and test data. Bold indicates the best performer and a statistically significant improvement over the competitors.