1 Introduction

MV-algebras were introduced by Chang (1958) as the semantical counterpart of the Łukasiewicz many-valued propositional logic. This algebraic structure is currently being studied by many researchers, and it is natural that there are many results regarding entropy in this structure; for instance, we refer to Di Nola et al. (2005), Riečan (2005), cf. also Markechová et al. (2018a). An important case of MV-algebras is the so-called product MV-algebra introduced independently by Riečan (1999) and Montagna (2000), see also Di Nola and Dvurečenskij (2001) and Jakubík (2002). This notion generalizes some classes of fuzzy sets (Zadeh 1965); an example of product MV-algebra is a full tribe of fuzzy sets (see, e.g., Riečan and Neubrunn 2002).

In this paper, we continue the study of entropy in product MV-algebras begun in Petrovičová (2000), see also Petrovičová (2001), by defining and studying the R-norm entropy and R-norm divergence in this structure. We recall that the R-norm entropy (cf. Arimoto 1971; Boekee and Van Der Lubbe 1980) of a probability distribution \( P = \left\{ {p_{1} ,p_{2} , \ldots ,p_{n} } \right\} \) is defined, for a positive real number R not equal to 1, by the formula:

$$ H_{R} (P) = \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {p_{i}^{R} } } \right]^{{\tfrac{1}{R}}} } \right). $$
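As an illustration, the formula above is straightforward to evaluate numerically; the following minimal Python sketch (the function name `r_norm_entropy` is ours, not taken from the cited literature) computes \( H_{R} (P) \) for a probability distribution given as a list:

```python
def r_norm_entropy(p, R):
    """R-norm entropy H_R(P) of a probability distribution p, for R > 0, R != 1."""
    if R <= 0 or R == 1:
        raise ValueError("R must be a positive real number different from 1")
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

# Uniform distribution on two outcomes:
print(r_norm_entropy([0.5, 0.5], R=2))    # 2 * (1 - (1/2)**(1/2)) ≈ 0.585786
print(r_norm_entropy([0.5, 0.5], R=0.5))  # (-1) * (1 - 2) ≈ 1.0
```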

Some results regarding the R-norm entropy measure and its generalizations can be found in Hooda and Ram (2002), Hooda and Sharma (2008), Hooda and Bajaj (2008), Kumar and Choudhary (2012), Kumar et al. (2014), Markechová et al. (2018b). We note that in the recently published paper Markechová and Riečan (2017), the results regarding the Shannon entropy of partitions in product MV-algebras were exploited to define the notions of Kullback–Leibler divergence and mutual information of partitions in this structure. The Kullback–Leibler divergence (K–L divergence, for short) was proposed in Kullback and Leibler (1951) as the distance measure between two probability distributions, and it is currently one of the most basic quantities in information theory (Gray 2009). We remark that the concepts of the R-norm entropy and R-norm divergence are extensions of the notions of Shannon entropy (Shannon 1948) and K–L divergence, respectively.

The aim of the present article is to study the R-norm entropy and R-norm divergence in product MV-algebras. The rest of the article is organized as follows. Sect. 2 contains basic definitions, notations and some known facts that will be used in the succeeding sections. Our results are presented in Sects. 3 and 4. In Sect. 3, we define the R-norm entropy and conditional R-norm entropy of finite partitions in product MV-algebras and examine their properties. In Sect. 4, the notion of the R-norm divergence in product MV-algebras is introduced and the properties of this distance measure are studied. It is proved that the Shannon entropy and the K–L divergence in product MV-algebras can be derived from their R-norm entropy and R-norm divergence, respectively, as the limiting cases for \( R \to 1. \) We illustrate the results with numerical examples. Finally, the last section provides brief closing remarks.

2 Basic definitions and related works

Let us begin by recalling the definitions of the basic terms and some known results that will be used in the following parts. In this section, we also mention some works connected with the subject of this article, of course, with no claim to completeness.

For defining the notion of MV-algebra, several different (but equivalent) axiom systems have been used (cf., e.g., Cattaneo and Lombardo 1998; Gluschankof 1993; Riečan 1999). In this paper, we apply the definition of MV-algebra given by Riečan (2012), which is based on Mundici’s representation theorem (Mundici 1986; see also Mundici 2011). According to the Mundici theorem, MV-algebras can be viewed as intervals of commutative lattice-ordered groups (l-groups, for short). We recall that by an l-group (Anderson and Feil 1988) we mean a triplet \( (G,\,\, + ,\,\, \le ), \) where \( (G,\,\, + ) \) is a commutative group, \( (G,\,\, \le ) \) is a partially ordered set that is a lattice, and \( x \le y \Rightarrow x + z \le y + z. \)

Definition 1

(Riečan 2012) An MV-algebra is an algebraic system \( (A,\,\, \oplus ,\,\, * ,\,\,0,\,\,u) \) satisfying the following conditions:

  1. (i)

    there exists an l-group \( (G,\,\, + ,\,\, \le ) \) such that \( A = [0,\,u] = \{ x \in G;\,\,0 \le x \le u\} , \) where 0 is the neutral element of \( (G,\,\, + ) \) and u is a strong unit of \( G \) (i.e., \( u \in G \) such that \( u > 0 \) and to each \( x \in G \) there exists a positive integer \( n \) with \( x \le nu \));

  2. (ii)

\( \oplus ,\,\,\,\, * \) are binary operations on \( A \) satisfying the following identities: \( x \oplus y = (x + y) \wedge u, \)\( x * y = \)\( (x + y - u) \vee 0. \)

Definition 2

(Riečan and Mundici 2002) A state on an MV-algebra \( (A,\,\, \oplus ,\,\, * ,\,\,0,\,\,u) \) is a map \( s:A \to [0,\,1] \) with the properties: (i) \( s(u) = 1; \) (ii) if \( x,y \in A \) such that \( x + y \le u, \) then \( s(x + y) = s(x) + s(y). \)

Definition 3

(Riečan 2012) A product MV-algebra is an algebraic structure \( (A,\,\, \oplus ,\,\, * ,\,\, \cdot \,\,,\,\,0,\,\,u), \) where \( (A,\,\, \oplus ,\,\, * ,\,\,0,\,\,u) \) is an MV-algebra and \(\cdot \) is a commutative and associative binary operation on \( A \) with the following properties:

  1. (i)

    for every \( x \in A, \)\( u \cdot x = x; \)

  2. (ii)

    if \( x,y,z \in A \) such that \( x + y \le u, \) then \( z \cdot x + z \cdot y \le u, \) and \( z \cdot (x + y) = z \cdot x + z \cdot y. \)

For the sake of brevity, we write in the following \( (A,\,\, \cdot \,) \) instead of \( (A,\,\, \oplus ,\,\, * ,\,\, \cdot \,\,,\,\,0,\,\,u). \) A relevant probability theory for the product MV-algebras was developed in Riečan (2000), see also Kroupa (2005) and Vrábelová (2000). A suitable entropy theory of Shannon type for the product MV-algebras has been provided in Petrovičová (2000, 2001), Riečan (2005). The main idea and some results of this theory follow.

Following Petrovičová (2000), by a partition in a product MV-algebra \( (A,\,\, \cdot \,), \) we will mean any n-tuple \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) of (not necessarily different) elements of \( A \) with the property \( x_{1} + x_{2} + \cdots + x_{n} = u. \) In the system of all partitions in a given product MV-algebra \( (A,\,\, \cdot \,), \) we define the refinement partial order \( \succ \) in a standard way (cf. Markechová et al. 2018a). If \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \) and \( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ) \) are two partitions in \( (A,\,\, \cdot \,), \) then we write \( \beta \succ \alpha \) (and we say that \( \beta \) is a refinement of \( \alpha \,), \) if there exists a partition \( \left\{ {I(1),I(2), \ldots ,I(n)} \right\} \) of the set \( \left\{ {1,2, \ldots ,m} \right\} \) such that \( x_{i} = \sum\nolimits_{j \in I\left( i \right)} {y_{j} } , \) for \( i = 1,2, \ldots ,n. \) Further, we define \( \alpha \vee \beta \) as a k-tuple (where \( k = n \cdot m) \) consisting of the elements \( x_{ij} = x_{i} \cdot y_{j} ,\, \)\( i = 1,2, \ldots ,n,\;j = 1,2, \ldots ,m. \) Since \( \sum\nolimits_{i = 1}^{n} {\sum\nolimits_{j = 1}^{m} {x_{i} \cdot y_{j} } } = \left( {\sum\nolimits_{i = 1}^{n} {x_{i} } } \right) \cdot \left( {\sum\nolimits_{j = 1}^{m} {y_{j} } } \right) = \)\( u \cdot u = u, \) the k-tuple \( \alpha \vee \beta \) is a partition in \( (A,\,\, \cdot \,); \) it represents an experiment consisting of a realization of \( \alpha \) and \( \beta . \)

Proposition 1

Let\( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \)be a partition in a product MV-algebra\( (A,\,\, \cdot \,) \)and s be a state on\( (A,\,\, \cdot \,). \)Then, for any element\( y \in A, \)it holds\( s(y) = \sum\nolimits_{i = 1}^{n} {s(x_{i} } \cdot y). \)

Proof

The proof can be found in Markechová et al. (2018a).

Proposition 2

If\( \alpha ,\,\,\beta \)are partitions in a product MV-algebra\( (A,\,\, \cdot \,) \)such that\( \beta \succ \alpha , \)then for every partition\( \gamma \)in\( (A,\,\, \cdot \,), \)it holds\( \beta \vee \gamma \succ \alpha \vee \gamma . \)

Proof

The proof can be found in Markechová et al. (2018a).

Definition 4

Let \( s \) be a state on a product MV-algebra \( (A,\,\, \cdot \,). \) We say that partitions \( \alpha , \)\( \beta \) in \( (A,\,\, \cdot \,) \) are statistically independent with respect to s, if \( s(x\,\, \cdot \,y) = s(x\,)\, \cdot \,s(y), \) for every \( x \in \alpha , \) and \( y \in \beta . \)

The following definition of entropy of Shannon type has been introduced in Petrovičová (2000).

Definition 5

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in a product MV-algebra \( (A,\,\, \cdot \,) \) and s be a state on \( (A,\,\, \cdot \,). \) Then the entropy of \( \alpha \) with respect to s is defined by Shannon’s formula:

$$ H_{b}^{s} (\alpha ) = - \sum\limits_{i = 1}^{n} {F(s(x_{i} )} ), $$
(1)

where

$$ F:\,\left[ {0,} \right.\left. \infty \right) \to \Re ,F(x) = \left\{ {\begin{array}{*{20}l} {x\log_{b} x,} \hfill & {{\text{if}}\;x > 0;} \hfill \\ {0,} \hfill & {\text{if}\;x = 0.} \hfill \\ \end{array} } \right. $$

If \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \) and \( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ) \) are two partitions in \( (A,\,\, \cdot \,), \) then the conditional entropy of \( \alpha \) given \( \beta \) is defined by:

$$ H_{b}^{s} \left( {\alpha /\beta } \right) = - \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )} } \cdot \log_{b} \frac{{s(x_{i} \cdot y_{j} )}}{{s(y_{j} )}}. $$
(2)

The base b of the logarithm can be any positive real number; depending on the selected base b of the logarithm, information is measured in bits (b = 2), nats (b = e), or dits (b = 10). Note that we use the convention (based on continuity arguments) that \( 0\log_{b} \frac{0}{x} = 0 \) if \( x \ge 0. \)
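A small Python sketch (the function names are ours) implementing formulas (1) and (2) directly, with the conventions just stated; the marginal values \( s(y_{j} ) \) are recovered as column sums of the joint values, in accordance with Proposition 1:

```python
import math

def F(x, b):
    """F(x) = x * log_b(x) for x > 0, and 0 for x = 0."""
    return x * math.log(x, b) if x > 0 else 0.0

def shannon_entropy(p, b=2):
    """H_b^s(alpha) = -sum_i F(s(x_i)), formula (1)."""
    return -sum(F(pi, b) for pi in p)

def conditional_shannon_entropy(joint, b=2):
    """H_b^s(alpha/beta), formula (2); joint[i][j] = s(x_i . y_j).

    Uses the convention 0 * log_b(0/x) = 0."""
    m = len(joint[0])
    sy = [sum(row[j] for row in joint) for j in range(m)]  # s(y_j), Proposition 1
    return -sum(pij * math.log(pij / sy[j], b)
                for row in joint for j, pij in enumerate(row) if pij > 0)

print(shannon_entropy([0.5, 0.5], b=2))  # ≈ 1.0 bit
```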

The entropy and the conditional entropy of partitions in a product MV-algebra satisfy all properties corresponding to properties of Shannon’s entropy of measurable partitions in the classical case; for more details, see Petrovičová (2000). The notion of K–L divergence in product MV-algebras was defined in Markechová and Riečan (2017) as follows.

Definition 6

Let \( s,\,\,t \) be states defined on a given product MV-algebra \( (A,\,\, \cdot \,), \) and \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in \( (A,\,\, \cdot \,). \) Then, we define the K–L divergence \( D_{\alpha } (s\parallel t) \) by the formula:

$$ D_{\alpha } (s\parallel t) = \sum\limits_{i = 1}^{n} {s(x_{i} ) \cdot \log_{b} \frac{{s(x_{i} )}}{{t(x_{i} )}}} . $$
(3)

The logarithm in this formula is taken to the base \( b = 2 \) if information is measured in units of bits, to the base \( b = 10 \) if information is measured in dits, or to the base \( b = e \) if information is measured in nats. We use the convention that \( x\log_{b} \frac{x}{0} = \infty \) if \( x > 0, \) and \( 0\log_{b} \frac{0}{x} = 0 \) if \( x \ge 0. \)
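Formula (3), together with the two conventions above, can be sketched in Python as follows (the function name is ours):

```python
import math

def kl_divergence(s_vals, t_vals, b=2):
    """K-L divergence D_alpha(s||t), formula (3).

    Conventions: x * log_b(x/0) = inf for x > 0, and 0 * log_b(0/x) = 0."""
    d = 0.0
    for si, ti in zip(s_vals, t_vals):
        if si > 0:
            if ti == 0:
                return math.inf   # x log(x/0) = infinity
            d += si * math.log(si / ti, b)
    return d

print(kl_divergence([0.5, 0.5], [0.25, 0.75], b=2))
print(kl_divergence([0.5, 0.5], [0.5, 0.5], b=2))  # 0.0 for identical states
```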

3 The R-norm entropy of partitions in product MV-algebras

In this section, we shall introduce the concept of R-norm entropy in product MV-algebras and prove basic properties of this measure of information. It is shown that it has properties that correspond to properties of Shannon’s entropy of measurable partitions, with the exception of additivity. In particular, we prove that the R-norm entropy \( H_{R}^{s} (\alpha ) \) is a concave function on the family of all states defined on a given product MV-algebra \( (A,\,\, \cdot \,). \)

Definition 7

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in a given product MV-algebra \( (A,\,\, \cdot \,). \) The R-norm entropy of \( \alpha \) with respect to a state s defined on \( (A,\,\, \cdot \,) \) is defined for \( R \in (0,\,\,1) \cup (1,\,\,\infty ) \) as the number:

$$ H_{R}^{s} (\alpha ) = \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right). $$
(4)

Remark 1

For the sake of brevity, we write \( s(x_{i} )^{R} \) instead of \( \left( {s(x_{i} )} \right)^{R} . \)

Remark 2

It is easy to verify that the R-norm entropy \( H_{R}^{s} (\alpha ) \) is always nonnegative. Namely, for \( 0 < R < 1, \) it holds \( s(x_{i} )^{R} \ge s(x_{i} ), \) for \( i = 1,2, \ldots ,n, \) hence \( \sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } \ge \sum\nolimits_{i = 1}^{n} {s(x_{i} )} = s(x_{1} + x_{2} + \cdots + x_{n} ) = s(u) = 1. \) It follows that \( \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} \ge 1. \) Since \( \frac{R}{R - 1} < 0 \) for \( 0 < R < 1, \) we get \( H_{R}^{s} (\alpha ) = \frac{R}{R - 1}\left( {1 - \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) \ge 0. \) On the other hand, for \( R > 1, \) we have \( s(x_{i} )^{R} \le s(x_{i} ), \) for \( i = 1,2, \ldots ,n, \) hence \( \sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } \le \sum\nolimits_{i = 1}^{n} {s(x_{i} )} = 1. \) This implies that \( \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} \le 1. \) Since \( \frac{R}{R - 1} > 0 \) for \( R > 1, \) it follows that \( H_{R}^{s} (\alpha ) = \)\( \frac{R}{R - 1}\left( {1 - \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) \ge 0. \)

Definition 8

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \) and \( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ) \) be two partitions in \( (A,\,\, \cdot \,) \) and s be a state defined on \( (A,\,\, \cdot \,). \) The conditional R-norm entropy of \( \alpha \) given \( \beta \) with respect to s is defined for \( R \in (0,\,\,1) \cup (1,\,\,\infty ) \) by the formula:

$$ H_{R}^{s} (\alpha /\beta ) = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{j = 1}^{m} {\sum\limits_{i = 1}^{n} {s(x_{i} \cdot y_{j} )^{R} } } } \right]^{{\tfrac{1}{R}}} } \right). $$
(5)
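For concreteness, formula (5) can be evaluated from the state values of the joint partition alone, since by Proposition 1 the values \( s(y_{j} ) \) arise as column sums of the values \( s(x_{i} \cdot y_{j} ). \) A minimal Python sketch (the helper name `cond_r_norm_entropy` is ours) might look as follows:

```python
def cond_r_norm_entropy(joint, R):
    """Conditional R-norm entropy H_R^s(alpha/beta) per formula (5).

    joint[i][j] holds the state value s(x_i . y_j); R > 0, R != 1."""
    m = len(joint[0])
    # s(y_j) = sum_i s(x_i . y_j), by Proposition 1
    sy = [sum(row[j] for row in joint) for j in range(m)]
    a = sum(y ** R for y in sy) ** (1.0 / R)
    b = sum(pij ** R for row in joint for pij in row) ** (1.0 / R)
    return R / (R - 1.0) * (a - b)

# Illustrative joint state values, with marginals s(y_1) = 1/3, s(y_2) = 2/3:
print(cond_r_norm_entropy([[1/4, 1/4], [1/12, 5/12]], R=2))  # ≈ 0.38517
```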

Remark 3

Consider any product MV-algebra \( (A,\,\, \cdot \,), \) and a state \( s:A \to [0,\,1]. \) It is easy to see that the set \( \varepsilon = \left\{ {\,u} \right\} \) is a partition in \( (A,\,\, \cdot ) \) with the property \( \alpha \succ \varepsilon , \) for any partition \( \alpha \) in \( (A,\,\, \cdot \,), \) and with the R-norm entropy \( H_{R}^{s} (\varepsilon ) = \frac{R}{R - 1}\left( {1 - \left[ {s(u)^{R} } \right]^{{\tfrac{1}{R}}} } \right) = 0. \) Evidently, \( H_{R}^{s} (\alpha /\varepsilon ) = H_{R}^{s} (\alpha ). \)

The following theorem shows that as the limiting case of the conditional R-norm entropy \( H_{R}^{s} (\alpha /\beta ) \) for \( R \to 1, \) we get the conditional Shannon entropy \( H_{b}^{s} \left( {\alpha /\beta } \right) \) expressed in nats.

Theorem 1

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \) and \( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ) \) be two partitions in a product MV-algebra \( (A,\,\, \cdot \,) \) , and s be a state defined on \( (A,\,\, \cdot \,). \) Then:

$$ \mathop {\lim }\limits_{R \to 1} H_{R}^{s} (\alpha /\beta ) = - \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )} } \cdot \ln \frac{{s(x_{i} \cdot y_{j} )}}{{s(y_{j} )}}. $$

Proof

Put \( f(R) = \left[ {\sum\nolimits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\nolimits_{j = 1}^{m} {\sum\nolimits_{i = 1}^{n} {s(x_{i} \cdot y_{j} )^{R} } } } \right]^{{\tfrac{1}{R}}} , \) and \( g(R) = 1 - \tfrac{1}{R}, \) for every \( R \in (0,\,\,\infty ). \) Then the functions \( f,\,\,g \) are differentiable, and for every \( R \in (0,\,\,1) \cup (1,\,\,\infty ), \) we can write:

$$ H_{R}^{s} (\alpha /\beta ) = \frac{1}{{1 - \tfrac{1}{R}}}\left( {\left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{j = 1}^{m} {\sum\limits_{i = 1}^{n} {s(x_{i} \cdot y_{j} )^{R} } } } \right]^{{\tfrac{1}{R}}} } \right) = \frac{f(R)}{g(R)}. $$

Obviously, \( \mathop {\lim }\limits_{R \to 1} g(R) = g(1) = 0. \) Further, since by Proposition 1, for \( j = 1,2, \ldots ,m, \) it holds \( \sum\nolimits_{i = 1}^{n} {s(x_{i} \cdot y_{j} ) = } \,s(y_{j} ), \) we get:

$$ \mathop {\lim }\limits_{R \to 1} f(R) = f(1) = \sum\limits_{j = 1}^{m} {s(y_{j} ) - } \sum\limits_{j = 1}^{m} {\sum\limits_{i = 1}^{n} {s(x_{i} \cdot y_{j} ) = } } \sum\limits_{j = 1}^{m} {s(y_{j} ) - } \sum\limits_{j = 1}^{m} {s(y_{j} ) = 1 - 1 = 0.} $$

Using L’Hôpital’s rule, this implies that

$$ \mathop {\lim }\limits_{R \to 1} H_{R}^{s} (\alpha /\beta ) = \frac{{\mathop {\lim }\limits_{R \to 1} f^{\prime}(R)}}{{\mathop {\lim }\limits_{R \to 1} g^{\prime}(R)}} $$

under the assumption that the right-hand side exists. Let us calculate the derivative of the function \( f(R) \):

$$ \begin{aligned} & \frac{d}{dR}f(R) = e^{{\frac{1}{R}\ln \sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } }} \cdot \left( { - \frac{1}{{R^{2} }} \cdot \ln \sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } + \frac{1}{R} \cdot \frac{1}{{\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } }}\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } \cdot \ln s(y_{j} )} \right) \\ & \quad - e^{{\frac{1}{R}\ln \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} } } }} \cdot \left( { - \frac{1}{{R^{2} }} \cdot \ln \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} } } + \frac{1}{R} \cdot \frac{1}{{\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} } } }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} } } \cdot \ln s(x_{i} \cdot y_{j} )} \right). \\ \end{aligned} $$

Since \( \mathop {\lim }\limits_{R \to 1} g^{\prime}(R) = \mathop {\lim }\limits_{R \to 1} \frac{1}{{R^{2} }} = 1, \) we get:

$$ \begin{aligned} & \mathop {\lim }\limits_{R \to 1} H_{R}^{s} (\alpha /\beta ) = \mathop {\lim }\limits_{R \to 1} f^{\prime}(R) = \sum\limits_{j = 1}^{m} {s(y_{j} ) \cdot \ln s(y_{j} )} - \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )} } \cdot \ln s(x_{i} \cdot y_{j} ) \\ & \quad = \sum\limits_{j = 1}^{m} {\sum\limits_{i = 1}^{n} {s(x_{i} \cdot y_{j} )} \cdot \ln s(y_{j} )} - \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )} } \cdot \ln s(x_{i} \cdot y_{j} ) \\ & \quad = - \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )} } \cdot \left( {\ln s(x_{i} \cdot y_{j} ) - \ln s(y_{j} )} \right) = - \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )} } \cdot \ln \frac{{s(x_{i} \cdot y_{j} )}}{{s(y_{j} )}}. \\ \end{aligned} $$

The following theorem states that the R-norm entropy \( H_{R}^{s} (\alpha ) \) converges for \( R \to 1 \) to the Shannon entropy \( H_{b}^{s} (\alpha ) \) expressed in nats.

Theorem 2

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in a product MV-algebra \( (A,\,\, \cdot \,), \) and s be a state defined on \( (A,\,\, \cdot \,). \) Then:

$$ \mathop {\lim }\limits_{R \to 1} H_{R}^{s} (\alpha ) = - \sum\limits_{i = 1}^{n} {s(x_{i} ) \cdot \ln {\mkern 1mu} } s(x_{i} ). $$

Proof

The claim follows immediately from Theorem 1 by substituting \( \left\{ u \right\} \) for \( \beta . \)
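The limit in Theorem 2 is easy to observe numerically; in the sketch below (function names ours), \( H_{R}^{s} (\alpha ) \) approaches the Shannon entropy in nats as \( R \to 1 \):

```python
import math

def r_norm_entropy(p, R):
    """H_R^s(alpha), formula (4), applied to the state values p of a partition."""
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

def shannon_nats(p):
    """Shannon entropy -sum p_i ln p_i, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.2, 0.3, 0.5]
for R in (1.5, 1.1, 1.01, 1.001):
    print(R, r_norm_entropy(p, R))
print("Shannon (nats):", shannon_nats(p))
```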

In the following part, basic properties of the R-norm entropy \( H_{R}^{s} (\alpha ) \) are derived.

Theorem 3

Let s be a state defined on a product MV-algebra\( (A,\,\, \cdot \,). \)Then, for arbitrary partitions\( \alpha ,\,\,\beta \)and\( \gamma \)in\( (A,\,\, \cdot \,), \)it holds:

$$ H_{R}^{s} (\alpha \vee \beta /\gamma ) = H_{R}^{s} (\alpha /\gamma ) + H_{R}^{s} (\beta /\alpha \vee \gamma ). $$
(6)

Proof

Suppose that \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ),\beta = (y_{1} ,y_{2} , \ldots ,y_{m} ),\gamma = \left\{ {z_{1} ,z_{2} , \ldots ,z_{r} } \right\}. \) Let us calculate:

$$ \begin{aligned} & H_{R}^{s} (\alpha \vee \beta /\gamma ) = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{k = 1}^{r} {s(z_{k} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {\sum\limits_{k = 1}^{r} {s(x_{i} \cdot y_{j} \cdot z_{k} )^{R} } } } } \right]^{{\tfrac{1}{R}}} } \right) \\ & \quad = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{k = 1}^{r} {s(z_{k} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{k = 1}^{r} {s(x_{i} \cdot z_{k} )^{R} } } } \right]^{{\tfrac{1}{R}}} } \right) \\ & \quad \quad + \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{k = 1}^{r} {s(x_{i} \cdot z_{k} )^{R} } } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {\sum\limits_{k = 1}^{r} {s(x_{i} \cdot y_{j} \cdot z_{k} )^{R} } } } } \right]^{{\tfrac{1}{R}}} } \right) \\ & = H_{R}^{s} (\alpha /\gamma ) + H_{R}^{s} (\beta /\alpha \vee \gamma ). \\ \end{aligned} $$

Using mathematical induction, we get the following generalization of Eq. (6).

Theorem 4

Let\( \alpha_{1} ,\alpha_{2} , \ldots ,\alpha_{n} \)and\( \gamma \)be partitions in a product MV-algebra\( (A,\,\, \cdot \,), \)and s be a state defined on\( (A,\,\, \cdot \,). \)Then, for\( n = 2,3, \ldots , \)the following equality holds:

$$ H_{R}^{s} \left( { \vee_{i = 1}^{n} \alpha_{i} /\gamma } \right) = H_{R}^{s} (\alpha_{1} /\gamma ) + \sum\limits_{i = 2}^{n} {H_{R}^{s} \left( {\alpha_{i} /\left( { \vee_{k = 1}^{i - 1} \alpha_{k} } \right) \vee \gamma } \right)} . $$
(7)

Remark 4

Let \( \alpha_{1} ,\alpha_{2} , \ldots ,\alpha_{n} \) be partitions in a product MV-algebra \( (A,\,\, \cdot \,). \) If we put \( \gamma = \left\{ u \right\} \) in Eq. (7), we get the following equality:

$$ H_{R}^{s} (\alpha_{1} \vee \alpha_{2} \vee \ldots \vee \alpha_{n} ) = H_{R}^{s} (\alpha_{1} ) + \sum\limits_{i = 2}^{n} {H_{R}^{s} \left( {\alpha_{i} / \vee_{k = 1}^{i - 1} \alpha_{k} } \right)} . $$
(8)

Putting \( n = 2 \) in (8), we obtain the following property of the R-norm entropy \( H_{R}^{s} (\alpha ). \)

Theorem 5

Let α and β be two partitions in a product MV-algebra\( (A,\,\, \cdot \,) \)and s be a state defined on\( (A,\,\, \cdot \,). \)Then:

$$ H_{R}^{s} (\alpha \, \vee \beta ) = H_{R}^{s} (\alpha ) + H_{R}^{s} (\beta /\alpha ). $$
(9)

To illustrate the result of the previous theorem, we provide the following example.

Example 1

Consider the measurable space \( ([0,\,1],\,\,{\mathcal{B}}\,), \) where \({\mathcal{B}}\) is the \( \sigma \)-algebra of all Borel subsets of the unit interval \( [0,\,1]. \) Let A be the family of all Borel measurable functions \( f:\,[0,\,1] \to [0,\,1], \) the so-called full tribe of fuzzy sets (Riečan and Neubrunn 2002). The family A is also closed under the natural product of fuzzy sets and represents a special case of product MV-algebras. We define a state \( s:A \to [0,\,1] \) by the equality \( s(f) = \int_{0}^{1} {f(x)\,dx} , \) for any element \( f \) of \( A. \) Evidently, the pairs \( \alpha = (f_{1} ,\,f_{2} ) \) and \( \beta = (g_{1} ,\,g_{2} ), \) where \( f_{1} (x) = x, \) \( f_{2} (x) = 1 - x, \) \( g_{1} (x) = x^{2} , \) \( g_{2} (x) = 1 - x^{2} , \) \( x \in [0,\,1], \) are two partitions in \( (A,\,\, \cdot ) \) with the s-state values \( \tfrac{1}{2},\,\tfrac{1}{2} \) and \( \tfrac{1}{3},\,\tfrac{2}{3} \) of the corresponding elements, respectively. By simple calculations, we get \( H_{1/2}^{s} (\alpha ) = 1, \) \( H_{1/2}^{s} (\beta )\,\dot{ = }\,0.94281, \) \( H_{2}^{s} (\alpha )\,\dot{ = }\,0.585786, \) \( H_{2}^{s} (\beta )\,\dot{ = }\,0.509288. \) The join of the partitions \( \alpha \) and \( \beta \) is the quadruple \( \alpha \vee \beta = (f_{1} \cdot g_{1} ,\,\,f_{1} \cdot g_{2} ,\,\,f_{2} \cdot g_{1} ,\,\,f_{2} \cdot g_{2} ) \) with the s-state values \( \tfrac{1}{4},\,\tfrac{1}{4},\tfrac{1}{12},\tfrac{5}{12} \) of the corresponding elements. Using formula (4), it can be computed that \( H_{1/2}^{s} (\alpha \vee \beta )\,\dot{ = }\,2.741023, \) \( H_{2}^{s} (\alpha \vee \beta )\,\dot{ = }\,0.894458, \) and using formula (5), that \( H_{1/2}^{s} (\alpha /\beta )\,\dot{ = }\,1.798212, \) \( H_{1/2}^{s} (\beta /\alpha )\,\dot{ = }\,1.741023, \) \( H_{2}^{s} (\alpha /\beta )\,\dot{ = }\,0.38517, \) and \( H_{2}^{s} (\beta /\alpha )\,\dot{ = }\,0.308672. \) It can be verified that:

$$ \begin{aligned} H_{1/2}^{s} (\alpha \vee \beta ) & = H_{1/2}^{s} (\alpha ) + H_{1/2}^{s} (\beta /\alpha ) = H_{1/2}^{s} (\beta ) + H_{1/2}^{s} (\alpha /\beta ), \\ H_{2}^{s} (\alpha \vee \beta ) & = H_{2}^{s} (\alpha ) + H_{2}^{s} (\beta /\alpha ) = H_{2}^{s} (\beta ) + H_{2}^{s} (\alpha /\beta ). \\ \end{aligned} $$
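The numbers in Example 1 can be checked mechanically; the sketch below (the function name is ours) recomputes the entropies from the listed s-state values and verifies formula (9):

```python
def r_norm_entropy(p, R):
    """H_R^s, formula (4), from the state values p of a partition."""
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

alpha = [1/2, 1/2]
beta = [1/3, 2/3]
joint = [1/4, 1/4, 1/12, 5/12]   # s-state values of alpha v beta

for R in (0.5, 2.0):
    h_join = r_norm_entropy(joint, R)
    # By formula (9): H(alpha v beta) = H(alpha) + H(beta/alpha)
    #                                 = H(beta) + H(alpha/beta)
    print(R, h_join,
          h_join - r_norm_entropy(alpha, R),   # H(beta/alpha)
          h_join - r_norm_entropy(beta, R))    # H(alpha/beta)
```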

Theorem 6

If\( \alpha ,\,\,\beta \)and\( \gamma \)are partitions in a product MV-algebra\( (A,\,\, \cdot \,) \)such that\( \beta \succ \alpha , \)then:

  1. (i)
    $$ H_{R}^{s} (\alpha ) \le H_{R}^{s} (\beta ); $$
  2. (ii)
    $$ H_{R}^{s} (\alpha /\gamma ) \le H_{R}^{s} (\beta /\gamma ). $$

Proof

(i) Assume that \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \)\( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ), \)\( \gamma = \left\{ {z_{1} ,z_{2} , \ldots ,z_{r} } \right\}, \)\( \beta \succ \alpha . \) Then there exists a partition \( \left\{ {I(1),I(2), \ldots ,I(n)} \right\} \) of the set \( \left\{ {1,2, \ldots ,m} \right\} \) such that \( x_{i} = \sum\nolimits_{j \in I\left( i \right)} {y_{j} } , \) for \( i = 1,2, \ldots ,n. \) It follows that \( s(x_{i} ) = s\left( {\sum\nolimits_{j \in I\left( i \right)} {y_{j} } } \right) \)\( = \sum\nolimits_{j \in I\left( i \right)} {s(y_{j} } ), \) for \( i = 1,2, \ldots ,n. \) For the case of \( R > 1, \) we obtain:

$$ s(x_{i} )^{R} = \left( {\sum\limits_{j \in I(i)} {s(y_{j} )} } \right)^{R} \ge \sum\limits_{j \in I(i)} {s(y_{j} )^{R} } ,\;{\text{for}}\;i = 1,2, \ldots ,n, $$

and consequently:

$$ \sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } \ge \sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } . $$

Hence

$$ \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\frac{1}{R}}} \ge \left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\frac{1}{R}}} . $$

Since \( \frac{R}{R - 1} > 0 \) for \( R > 1, \) we conclude that:

$$ H_{R}^{s} (\alpha ) = \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) \le \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) = H_{R}^{s} (\beta ). $$

The case of \( 0 < R < 1 \) can be obtained using similar techniques.

(ii) Let \( \alpha ,\,\,\beta \) and \( \gamma \) be partitions in a product MV-algebra \( (A,\,\, \cdot \,) \) such that \( \beta \succ \alpha . \) Then, according to Proposition 2, we have \( \beta \vee \gamma \succ \alpha \vee \gamma . \) Therefore, by Theorem 5 and the property (i), we get:

$$ H_{R}^{s} (\alpha /\gamma ) = H_{R}^{s} (\alpha \vee \gamma ) - H_{R}^{s} (\gamma ) \le H_{R}^{s} (\beta \vee \gamma ) - H_{R}^{s} (\gamma ) = H_{R}^{s} (\beta /\gamma ). $$
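Property (i) of Theorem 6 can also be observed numerically: merging blocks of a partition (a coarsening) cannot increase the R-norm entropy. A small sketch under assumed state values (the function name is ours):

```python
def r_norm_entropy(p, R):
    """H_R^s, formula (4), from the state values p of a partition."""
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

beta = [0.2, 0.3, 0.1, 0.4]       # the finer partition
alpha = [0.2 + 0.3, 0.1 + 0.4]    # coarser: beta is a refinement of alpha

for R in (0.5, 2.0, 5.0):
    assert r_norm_entropy(alpha, R) <= r_norm_entropy(beta, R)
print("H_R(alpha) <= H_R(beta) for all tested R")
```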

Theorem 7

Let s be a state defined on a product MV-algebra\( (A,\,\, \cdot \,), \)and\( \alpha ,\,\,\beta \)be statistically independent partitions in\( (A,\,\, \cdot \,) \)with respect to s. Then:

$$ H_{R}^{s} (\alpha /\beta ) = H_{R}^{s} (\alpha ) - \frac{R - 1}{R}H_{R}^{s} (\alpha ) \cdot H_{R}^{s} (\beta ). $$

Proof

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \)\( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ). \) By the assumption, \( s(x_{i} \cdot y_{j} ) = s(x_{i} ) \cdot s(y_{j} ), \) for \( i = 1,2, \ldots ,n,\,\,j = 1,2, \ldots ,m. \) Therefore, we can write:

$$ \begin{aligned} H_{R}^{s} (\alpha /\beta ) \, & = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{j = 1}^{m} {\sum\limits_{i = 1}^{n} {s(x_{i} \cdot y_{j} )^{R} } } } \right]^{{\tfrac{1}{R}}} } \right) \\ & = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) \\ & = \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} - 1 + \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right.\left. { + \left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) \\ & = \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) - \frac{R - 1}{R}\left( {\frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) \cdot \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{j = 1}^{m} {s(y_{j} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right)} \right) \\ & = H_{R}^{s} (\alpha ) - \frac{R - 1}{R}H_{R}^{s} (\alpha ) \cdot H_{R}^{s} (\beta ). \\ \end{aligned} $$

One of the most important properties of Shannon entropy is its additivity: the entropy of a combined experiment consisting of the realization of two independent experiments is equal to the sum of the entropies of these experiments. In the case of the R-norm entropy \( H_{R}^{s} (\alpha ), \) the following property (so-called pseudo-additivity) applies.

Theorem 8

Let s be a state defined on a product MV-algebra\( (A,\,\, \cdot \,), \)and\( \alpha ,\,\,\beta \)be statistically independent partitions in\( (A,\,\, \cdot \,) \)with respect to s. Then:

$$ H_{R}^{s} (\alpha \vee \beta ) = H_{R}^{s} (\alpha ) + H_{R}^{s} (\beta ) - \frac{R - 1}{R}H_{R}^{s} (\alpha ) \cdot H_{R}^{s} (\beta ). $$

Proof

The claim follows by combining Theorem 5 with Theorem 7.
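Pseudo-additivity is easy to verify numerically for statistically independent partitions, where \( s(x_{i} \cdot y_{j} ) = s(x_{i} ) \cdot s(y_{j} ). \) A sketch under assumed state values (the function name is ours):

```python
def r_norm_entropy(p, R):
    """H_R^s, formula (4), from the state values p of a partition."""
    return (R / (R - 1.0)) * (1.0 - sum(pi ** R for pi in p) ** (1.0 / R))

alpha, beta, R = [0.4, 0.6], [0.1, 0.2, 0.7], 2.0
joint = [a * b for a in alpha for b in beta]   # independence: s(x.y) = s(x)s(y)

lhs = r_norm_entropy(joint, R)
rhs = (r_norm_entropy(alpha, R) + r_norm_entropy(beta, R)
       - (R - 1.0) / R * r_norm_entropy(alpha, R) * r_norm_entropy(beta, R))
print(lhs, rhs)   # the two values agree
```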

Let us denote by the symbol \({\mathcal{S}}\;(A) \) the class of all states defined on a given product MV-algebra \( (A,\,\, \cdot \,). \) It is very easy to verify that if \( s,\,\,t \in {\mathcal{S}}\;(A), \) then, for every real number \( \lambda \in [0,\,1], \)\( \lambda s + (1 - \lambda )t \in {\mathcal{S}}\;(A). \) In the following, we prove that the R-norm entropy \( H_{R}^{s} (\alpha ) \) is a concave function on the family \( {\mathcal{S}}\;(A). \) In the proof, we will use the known Minkowski inequality which states that for nonnegative real numbers \( a_{1} ,a_{2} , \ldots ,a_{n} ,\,\,b_{1} ,b_{2} , \ldots ,b_{n} , \) we have:

$$ \left[ {\sum\limits_{i = 1}^{n} {a_{i}^{R} } } \right]^{{\tfrac{1}{R}}} + \left[ {\sum\limits_{i = 1}^{n} {b_{i}^{R} } } \right]^{{\tfrac{1}{R}}} \ge \left[ {\sum\limits_{i = 1}^{n} {(a_{i} + b_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} ,\;{\text{for}}\;R > 1, $$

and

$$ \left[ {\sum\limits_{i = 1}^{n} {a_{i}^{R} } } \right]^{{\tfrac{1}{R}}} + \left[ {\sum\limits_{i = 1}^{n} {b_{i}^{R} } } \right]^{{\tfrac{1}{R}}} \le \left[ {\sum\limits_{i = 1}^{n} {(a_{i} + b_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} ,\;{\text{for}}\;0 < R < 1. $$

Theorem 9

Let\( \alpha \)be a partition in a product MV-algebra\( (A,\,\, \cdot \,). \)Then, for every\( s,\,\,t \in {\mathcal{S}}\;(A), \)and for every real number\( \lambda \in [0,\,1], \)the following inequality holds:

$$ \lambda H_{R}^{s} (\alpha ) + (1 - \lambda )H_{R}^{t} (\alpha ) \le H_{R}^{\lambda s + (1 - \lambda )t} (\alpha ). $$
(10)

Proof

Assume that \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \) and \( \lambda \in [0,\,1]. \) Putting \( a_{i} = \lambda s(x_{i} ), \) and \( b_{i} = (1 - \lambda )t(x_{i} ), \)\( i = 1,2, \ldots ,n, \) in the Minkowski inequality, we get for \( R > 1: \)

$$ \left[ {\sum\limits_{i = 1}^{n} {a_{i}^{R} } } \right]^{{\tfrac{1}{R}}} + \left[ {\sum\limits_{i = 1}^{n} {b_{i}^{R} } } \right]^{{\tfrac{1}{R}}} \ge \left[ {\sum\limits_{i = 1}^{n} {(a_{i} + b_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} , $$

and for \( 0 < R < 1: \)

$$ \left[ {\sum\limits_{i = 1}^{n} {a_{i}^{R} } } \right]^{{\tfrac{1}{R}}} + \left[ {\sum\limits_{i = 1}^{n} {b_{i}^{R} } } \right]^{{\tfrac{1}{R}}} \le \left[ {\sum\limits_{i = 1}^{n} {(a_{i} + b_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} . $$

This means that the function \( s \mapsto \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} \) is convex in \( s \) for \( R > 1, \) and concave in \( s \) for \( 0 < R < 1. \) Therefore, the function \( s \mapsto 1 - \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} \) is concave in \( s \) for \( R > 1, \) and convex in \( s \) for \( 0 < R < 1. \) Evidently, \( \frac{R}{R - 1} > 0 \) for \( R > 1, \) and \( \frac{R}{R - 1} < 0 \) for \( 0 < R < 1. \) According to the definition of the R-norm entropy \( H_{R}^{s} (\alpha ), \) we thus obtain that, for every \( R \in (0,\,\,1) \cup (1,\,\,\infty ), \) the mapping \( s \mapsto H_{R}^{s} (\alpha ) \) is a concave function on the family \( {\mathcal{S}}\;(A). \) This means that the inequality (10) is valid.
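The concavity stated in Theorem 9 can be illustrated numerically. In the following Python sketch, a state is represented simply by the probability vector \( (s(x_{1} ), \ldots ,s(x_{n} )) \) it assigns to a fixed partition; this representation is an assumption of the sketch, not part of the paper's formalism:

```python
def r_norm_entropy(p, R):
    """H_R of a probability vector p, for R in (0,1) or (1,inf)."""
    return R / (R - 1.0) * (1.0 - sum(x ** R for x in p) ** (1.0 / R))

# hypothetical state values of a three-element partition
s = [0.7, 0.2, 0.1]
t = [0.2, 0.3, 0.5]
lam = 0.4
mix = [lam * a + (1 - lam) * b for a, b in zip(s, t)]

for R in (0.5, 2.0, 3.0):
    # inequality (10): the entropy of the mixture dominates the mixture of entropies
    assert lam * r_norm_entropy(s, R) + (1 - lam) * r_norm_entropy(t, R) \
        <= r_norm_entropy(mix, R) + 1e-12
```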

4 The R-norm divergence in product MV-algebras

In this section, we define the concept of the R-norm divergence of states defined on a given product MV-algebra \( (A,\,\, \cdot \,) \) and prove basic properties of this quantity. In order to avoid expressions of the form \( \tfrac{0}{0}, \) we adopt throughout this section the following convention: for any partition \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) in \( (A,\,\, \cdot \,), \) we assume that \( s(x_{i} ) > 0, \) for \( i = 1,2, \ldots ,n. \)

Definition 9

Let \( s, \)\( t \) be two states defined on a given product MV-algebra \( (A,\,\, \cdot \,), \) and \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in \( (A,\,\, \cdot \,). \) The R-norm divergence of states \( s, \)\( t \) with respect to \( \alpha \) is defined for \( R \in (0,\,\,1) \cup (1,\,\,\infty ) \) as the number:

$$ d_{R}^{\alpha } (s\parallel t) = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right). $$
(11)
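Formula (11) can be transcribed directly into Python. As before, the state values \( s(x_{i} ), \)\( t(x_{i} ) \) of a partition are represented as probability vectors, which is an assumption of this sketch:

```python
def r_norm_divergence(s, t, R):
    """d_R(s || t) per formula (11); requires R in (0,1) or (1,inf) and t_i > 0."""
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

# Remark 5: the R-norm divergence of a state from itself vanishes
p = [0.5, 0.3, 0.2]
assert abs(r_norm_divergence(p, p, 2.0)) < 1e-12
```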

Remark 5

As can be easily seen, for any partition \( \alpha \) in a product MV-algebra \( (A,\,\, \cdot \,), \) the R-norm divergence \( d_{R}^{\alpha } (s\parallel s) \) is zero.

The following theorem shows that the K–L divergence \( D_{\alpha } (s\parallel t) \) measured in nats can be obtained as the limiting case of R-norm divergence \( d_{R}^{\alpha } (s\parallel t) \) for R going to 1.

Theorem 10

Let \( s, \) \( t \) be two states defined on a given product MV-algebra \( (A,\,\, \cdot \,), \) and \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in \( (A,\,\, \cdot \,). \) Then \( \mathop {\lim }\limits_{R \to 1} d_{R}^{\alpha } (s\parallel t) = \sum\nolimits_{i = 1}^{n} {s(x_{i} )} \ln \frac{{s(x_{i} )}}{{t(x_{i} )}}. \)

Proof

For every \( R \in (0,\,\,1) \cup (1,\,\,\infty ), \) we can write:

$$ d_{R}^{\alpha } (s\parallel t) = \frac{1}{{1 - \tfrac{1}{R}}}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right) = \frac{f(R)}{g(R)}, $$

where \( f,\,\,g \) are continuous functions defined for \( R \in (0,\,\,\infty ) \) by the formulas:

$$ f(R) = \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1,\quad g(R) = 1 - \tfrac{1}{R}. $$

By continuity of the functions \( f,\,\,g, \) we get that \( \mathop {\lim }\limits_{R \to 1} g(R) = g(1) = 0, \) and

$$ \mathop {\lim }\limits_{R \to 1} f(R) = f(1) = \sum\nolimits_{i = 1}^{n} {s(x_{i} )t(x_{i} )^{0} } - 1 = \sum\nolimits_{i = 1}^{n} {s(x_{i} )} - 1 = 1 - 1 = 0. $$

Using L’Hôpital’s rule, it follows that:

$$ \mathop {\lim }\limits_{R \to 1} d_{R}^{\alpha } (s\parallel t) = \frac{{\mathop {\lim }\limits_{R \to 1} f^{\prime}(R)}}{{\mathop {\lim }\limits_{R \to 1} g^{\prime}(R)}} $$

under the assumption that the right-hand side exists. Let us calculate the derivative of the function \( f(R) \):

$$ \frac{d}{dR}f(R) = e^{{\frac{1}{R}\ln \sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } }} \cdot \left( { - \frac{1}{{R^{2} }} \cdot \ln \sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right.\left. { + \frac{1}{R} \cdot \frac{1}{{\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } }}\sum\limits_{i = 1}^{n} {\left( { - s(x_{i} )^{R} t(x_{i} )^{1 - R} \cdot \ln t(x_{i} ) + t(x_{i} )^{1 - R} s(x_{i} )^{R} \cdot \ln s(x_{i} )} \right)} } \right). $$

Since \( \mathop {\lim }\limits_{R \to 1} g^{\prime}(R) = \mathop {\lim }\limits_{R \to 1} \frac{1}{{R^{2} }} = 1, \) we get:

$$ \begin{aligned} & \mathop {\lim }\limits_{R \to 1} d_{R}^{\alpha } (s\parallel t) = \mathop {\lim }\limits_{R \to 1} f^{\prime}(R) = \sum\limits_{i = 1}^{n} {\left( {s(x_{i} ) \cdot \ln s(x_{i} ) - s(x_{i} ) \cdot \ln t(x_{i} )} \right)} \\ & \quad = \sum\limits_{i = 1}^{n} {s(x_{i} ) \cdot \ln \frac{{s(x_{i} )}}{{t(x_{i} )}}\,} . \\ \end{aligned} $$
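The limit in Theorem 10 can also be observed numerically by evaluating \( d_{R}^{\alpha } (s\parallel t) \) at values of R close to 1. The following Python sketch uses the probability-vector representation of state values assumed in the earlier sketches:

```python
import math

def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

def kl_divergence(s, t):
    """K-L divergence in nats."""
    return sum(si * math.log(si / ti) for si, ti in zip(s, t))

s = [0.5, 0.5]
t = [0.25, 0.75]
# d_R approaches the K-L divergence (approx. 0.14384 nats here) as R -> 1
for R in (1.0001, 0.9999):
    assert abs(r_norm_divergence(s, t, R) - kl_divergence(s, t)) < 1e-3
```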

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in \( (A,\,\, \cdot \,). \) In Markechová and Riečan (2017), it was shown that the K–L divergence satisfies the inequality \( D_{\alpha } (s\parallel t) \ge 0, \) with equality if and only if \( s(x_{i} ) = t(x_{i} ), \) for \( i = 1,2, \ldots ,n. \) We remark that this inequality is known in information theory as the Gibbs inequality. An analogous result also applies to the R-norm divergence \( d_{R}^{\alpha } (s\parallel t), \) as the following theorem shows.

Theorem 11

Let \( s, \) \( t \) be two states defined on a given product MV-algebra \( (A,\,\, \cdot \,), \) and \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in \( (A,\,\, \cdot \,). \) Then \( d_{R}^{\alpha } (s\parallel t) \ge 0, \) with equality if and only if \( s(x_{i} ) = t(x_{i} ), \) for \( i = 1,2, \ldots ,n. \)

Proof

In the proof, we use the Jensen inequality for the function \( \psi \) defined by \( \psi (x) = x^{1 - R} , \) for every \( x \in [0,\,\,\infty ). \) We shall consider two cases: the case of \( R > 1, \) and the case of \( 0 < R < 1. \)

Consider the case of \( R > 1. \) The assumption that \( R > 1 \) implies \( 1 - R < 0, \) hence the function \( \psi \) is convex. Therefore, applying the Jensen inequality, we obtain:

$$ 1 = \left( {\sum\limits_{i = 1}^{n} {t(x_{i} )} } \right)^{1 - R} = \left( {\sum\limits_{i = 1}^{n} {s(x_{i} )\frac{{t(x_{i} )}}{{s(x_{i} )}}} } \right)^{1 - R} \le \sum\limits_{i = 1}^{n} {s(x_{i} )\left( {\frac{{t(x_{i} )}}{{s(x_{i} )}}} \right)^{1 - R} } = \sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } , $$
(12)

and consequently:

$$ \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} \ge 1. $$

Since \( \frac{R}{R - 1} > 0 \) for \( R > 1, \) it follows that

$$ d_{R}^{\alpha } (s\parallel t) = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right) \ge 0. $$

For \( 0 < R < 1, \) the function \( \psi \) is concave. Hence, using the Jensen inequality, we obtain:

$$ \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} \le 1. $$

Since \( \frac{R}{R - 1} < 0 \) for \( 0 < R < 1, \) this yields that:

$$ d_{R}^{\alpha } (s\parallel t) = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right) \ge 0. $$

The equality in (12) holds if and only if \( \frac{{t(x_{i} )}}{{s(x_{i} )}} \) is constant, for \( i = 1,2, \ldots ,n, \) i.e., if and only if \( t(x_{i} ) = cs(x_{i} ), \) for \( i = 1,2, \ldots ,n. \) Taking the sum over all \( i = 1,2, \ldots ,n, \) we get the equality \( \sum\nolimits_{i = 1}^{n} {t(x_{i} )} = \)\( c\sum\nolimits_{i = 1}^{n} {s(x_{i} )} , \) which implies that \( c = 1. \) Therefore \( t(x_{i} ) = s(x_{i} ), \) for \( i = 1,2, \ldots ,n. \) This means that \( d_{R}^{\alpha } (s\parallel t) = 0 \) if and only if \( s(x_{i} ) = t(x_{i} ), \) for \( i = 1,2, \ldots ,n. \)
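The nonnegativity established in Theorem 11 is easy to check on randomly generated data. The following Python sketch (again representing state values as probability vectors, an assumption made for illustration) runs such a check:

```python
import random

def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

random.seed(0)
for _ in range(100):
    # random strictly positive probability vectors of length 4
    raw_s = [random.random() + 0.01 for _ in range(4)]
    raw_t = [random.random() + 0.01 for _ in range(4)]
    s = [x / sum(raw_s) for x in raw_s]
    t = [x / sum(raw_t) for x in raw_t]
    for R in (0.5, 2.0):
        assert r_norm_divergence(s, t, R) >= -1e-12
```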

In the following example, it is shown that the triangle inequality for the R-norm divergence \( d_{R}^{\alpha } (s\parallel t) \) does not hold, in general, which means that the R-norm divergence \( d_{R}^{\alpha } (s\parallel t) \) is not a metric.

Example 2

Consider the product MV-algebra \( (A,\,\, \cdot \,) \) from Example 1 and the real functions \( F_{1} ,\,\,F_{2} ,\,\,F_{3} \) defined by \( F_{1} (x) = x,\,\,F_{2} (x) = x^{2} ,\,\,F_{3} (x) = x^{3} , \) for every \( x \in [0,\,1]. \) On the product MV-algebra \( (A,\,\, \cdot \,), \) we define the states \( s_{1} ,\,\,s_{2} ,\,\,s_{3} \) by the formulas \( s_{i} (f) = \int_{0}^{1} {f(x)dF_{i} (x)} , \) \( i = 1,2,3, \) for any element \( f \) of \( A. \) In addition, we consider the partition \( \alpha \) in \( (A,\,\, \cdot \,) \) from Example 1. It can easily be calculated that the elements of \( \alpha \) have the \( s_{1} \)-state values \( \tfrac{1}{2},\,\tfrac{1}{2}; \) the \( s_{2} \)-state values \( \tfrac{2}{3},\,\tfrac{1}{3}; \) and the \( s_{3} \)-state values \( \tfrac{3}{4},\,\tfrac{1}{4}. \) We get:

$$ \begin{aligned} d_{1/2}^{\alpha } (s_{1} \parallel s_{2} ) & = 1 - \left( {\sqrt {\tfrac{1}{2} \cdot \tfrac{2}{3}} + \sqrt {\tfrac{1}{2} \cdot \tfrac{1}{3}} } \right)^{2} \;\dot{ = }\;0.02860; \\ d_{1/2}^{\alpha } (s_{1} \parallel s_{3} ) & = 1 - \left( {\sqrt {\tfrac{1}{2} \cdot \tfrac{3}{4}} + \sqrt {\tfrac{1}{2} \cdot \tfrac{1}{4}} } \right)^{2} \;\dot{ = }\;0.066987; \\ d_{1/2}^{\alpha } (s_{2} \parallel s_{3} ) & = 1 - \left( {\sqrt {\tfrac{2}{3} \cdot \tfrac{3}{4}} + \sqrt {\tfrac{1}{3} \cdot \tfrac{1}{4}} } \right)^{2} \;\dot{ = }\;0.008418. \\ \end{aligned} $$

Evidently,

$$ d_{1/2}^{\alpha } (s_{1} \parallel s_{3} ) > d_{1/2}^{\alpha } (s_{1} \parallel s_{2} ) + d_{1/2}^{\alpha } (s_{2} \parallel s_{3} ). $$
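The three values of Example 2, and the failure of the triangle inequality, can be reproduced with a few lines of Python; the state values of the partition are those computed in the example:

```python
def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

s1, s2, s3 = [1/2, 1/2], [2/3, 1/3], [3/4, 1/4]
d12 = r_norm_divergence(s1, s2, 0.5)  # approx. 0.02860
d13 = r_norm_divergence(s1, s3, 0.5)  # approx. 0.066987
d23 = r_norm_divergence(s2, s3, 0.5)  # approx. 0.008418
assert d13 > d12 + d23  # the triangle inequality fails
```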

Theorem 12

Let \( s, \) \( t \) be two states defined on a given product MV-algebra \( (A,\,\, \cdot \,), \) and \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) be a partition in \( (A,\,\, \cdot \,). \) In addition, let \( t \) be uniform over \( \alpha , \) i.e., \( t(x_{i} ) = \tfrac{1}{n}, \) for \( i = 1,2, \ldots ,n. \) Then, it holds:

$$ H_{R}^{s} (\alpha ) = \frac{R}{R - 1}\left( {1 - n^{{\tfrac{1 - R}{R}}} } \right) - n^{{\tfrac{1 - R}{R}}} \cdot d_{R}^{\alpha } (s\parallel t). $$
(13)

Proof

Let us calculate:

$$ \begin{aligned} & d_{R}^{\alpha } (s\parallel t) \, = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right) \\ & \quad = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} n^{R - 1} } } \right]^{{\tfrac{1}{R}}} - 1} \right) = \frac{R}{R - 1} \cdot n^{{\tfrac{R - 1}{R}}} \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} - \frac{R}{R - 1} \\ & \quad = - {\mkern 1mu} n^{{\tfrac{R - 1}{R}}} \cdot \frac{R}{R - 1}\left( {1 - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} } } \right]^{{\tfrac{1}{R}}} } \right) + n^{{\tfrac{R - 1}{R}}} \cdot \frac{R}{R - 1} - \frac{R}{R - 1} \\ & \quad = - n^{{\tfrac{R - 1}{R}}} \cdot H_{R}^{s} (\alpha ) + \frac{R}{R - 1}\left( {n^{{\tfrac{R - 1}{R}}} - 1} \right). \\ \end{aligned} $$

From this, Eq. (13) follows.
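Identity (13) can be verified numerically for a uniform \( t \); the following Python sketch does so under the probability-vector representation of state values assumed in the earlier sketches:

```python
def r_norm_entropy(p, R):
    return R / (R - 1.0) * (1.0 - sum(x ** R for x in p) ** (1.0 / R))

def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

s = [0.5, 0.3, 0.2]
n = len(s)
t = [1.0 / n] * n  # t uniform over alpha

for R in (0.5, 2.0):
    # right-hand side of identity (13)
    rhs = (R / (R - 1.0) * (1.0 - n ** ((1.0 - R) / R))
           - n ** ((1.0 - R) / R) * r_norm_divergence(s, t, R))
    assert abs(r_norm_entropy(s, R) - rhs) < 1e-12
```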

By combining the results of Theorems 11 and 12, we obtain the following property of R-norm entropy.

Corollary 1

Let \( s \) be a state defined on a product MV-algebra \( (A,\,\, \cdot \,). \) Then, for any partition \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) in \( (A,\,\, \cdot \,), \) it holds:

$$ H_{R}^{s} (\alpha ) \le \frac{R}{R - 1}\left( {1 - n^{{\tfrac{1 - R}{R}}} } \right) $$

with the equality if and only if the state \( s \) is uniform over \( \alpha \).

Theorem 13

Let \( s, \) \( t \) be two states defined on a product MV-algebra \( (A,\,\, \cdot \,). \) Then, for every partition \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) in \( (A,\,\, \cdot \,), \) it holds:

(i) \( 0 < R < 1 \) implies \( d_{R}^{\alpha } (s\parallel t) \le D_{\alpha } (s\parallel t); \)

(ii) \( R > 1 \) implies \( d_{R}^{\alpha } (s\parallel t) \ge D_{\alpha } (s\parallel t), \)

where

$$ D_{\alpha } (s\parallel t) = \sum\limits_{i = 1}^{n} {s(x_{i} )} \ln \frac{{s(x_{i} )}}{{t(x_{i} )}}. $$

Proof

By using the inequality \( \ln x \le x - 1, \) which holds for all real numbers \( x > 0, \) we get:

$$ \ln \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} \le \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1. $$
(14)

Suppose that \( 0 < R < 1. \) Then \( \frac{R}{R - 1} < 0. \) Therefore, using the inequality (14) and the Jensen inequality for the concave function \( \psi \) defined by \( \psi (x) = \ln x, \)\( x \in (0,\,\,\infty ), \) we can write:

$$ \begin{aligned} & d_{R}^{\alpha } (s\parallel t) \, = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right) \\ & \quad \quad \le \frac{R}{R - 1}\ln \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} = \frac{1}{R - 1}\ln \sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } \\ & \quad = \frac{1}{R - 1}\ln \sum\limits_{i = 1}^{n} {s(x_{i} )\left( {\frac{{s(x_{i} )}}{{t(x_{i} )}}} \right)^{R - 1} } \le \frac{1}{R - 1}\sum\limits_{i = 1}^{n} {s(x_{i} )} \ln \left( {\frac{{s(x_{i} )}}{{t(x_{i} )}}} \right)^{R - 1} \\ & \quad = \sum\limits_{i = 1}^{n} {s(x_{i} )} \ln \frac{{s(x_{i} )}}{{t(x_{i} )}}. \\ \end{aligned} $$

The case of \( R > 1 \) can be obtained using similar techniques. □

To illustrate the result of the previous theorem, let us consider the following example, which is a continuation of Examples 1 and 2.

Example 3

Consider the product MV-algebra \( (A,\,\, \cdot \,) \) from Example 1 and the real functions \( F_{1} ,\,\,F_{2} \) defined by \( F_{1} (x) = x, \)\( F_{2} (x) = x^{2} , \) for every \( x \in [0,\,1]. \) On the product MV-algebra \( (A,\,\, \cdot \,), \) we define two states \( s_{1} ,\,\,s_{2} \) by the formulas \( s_{i} (f) = \int_{0}^{1} {f(x)dF_{i} (x)} , \) \( i = 1,2, \) for any element \( f \) of \( A. \) In addition, we consider the partition \( \alpha = \left( {\chi_{{\left[ {0,\,\tfrac{1}{2}} \right]}} ,\,\,\chi_{{\left( {\tfrac{1}{2},\,1} \right]}} } \right) \) in \( (A,\,\, \cdot \,). \) It can easily be calculated that the elements of \( \alpha \) have the \( s_{1} \)-state values \( \tfrac{1}{2},\,\tfrac{1}{2} \) and the \( s_{2} \)-state values \( \tfrac{1}{4},\,\tfrac{3}{4}. \) By simple calculations, we obtain: \( D_{\alpha } (s_{1} \parallel s_{2} )\;\dot{ = }\;0.14384 \) nats, \( D_{\alpha } (s_{2} \parallel s_{1} )\;\dot{ = }\;0.13081 \) nats, \( d_{1/3}^{\alpha } (s_{1} \parallel s_{2} )\;\dot{ = }\;0.04343, \) \( d_{1/3}^{\alpha } (s_{2} \parallel s_{1} )\;\dot{ = }\;0.04478, \) \( d_{2}^{\alpha } (s_{1} \parallel s_{2} )\;\dot{ = }\;0.309402, \) \( d_{2}^{\alpha } (s_{2} \parallel s_{1} )\;\dot{ = }\;0.236068. \) As can be seen, for \( R = \tfrac{1}{3} \) we have \( d_{R}^{\alpha } (s_{1} \parallel s_{2} ) < D_{\alpha } (s_{1} \parallel s_{2} ) \) and \( d_{R}^{\alpha } (s_{2} \parallel s_{1} ) < D_{\alpha } (s_{2} \parallel s_{1} ), \) and for \( R = 2 \) we have \( d_{R}^{\alpha } (s_{1} \parallel s_{2} ) > D_{\alpha } (s_{1} \parallel s_{2} ) \) and \( d_{R}^{\alpha } (s_{2} \parallel s_{1} ) > D_{\alpha } (s_{2} \parallel s_{1} ), \) which is consistent with the claim of Theorem 13.
Based on the previous results, we see that the K–L divergence \( D_{\alpha } (s\parallel t) \) and the R-norm divergence \( d_{R}^{\alpha } (s\parallel t) \) are not symmetrical.
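The numerical values of Example 3 can be reproduced with the following Python sketch, which also checks the bounds of Theorem 13; the state values of the partition are those computed in the example:

```python
import math

def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

def kl_divergence(s, t):
    """K-L divergence in nats."""
    return sum(si * math.log(si / ti) for si, ti in zip(s, t))

s1, s2 = [1/2, 1/2], [1/4, 3/4]
D12 = kl_divergence(s1, s2)  # approx. 0.14384 nats
D21 = kl_divergence(s2, s1)  # approx. 0.13081 nats
# (i): for 0 < R < 1 the R-norm divergence lies below the K-L divergence
assert r_norm_divergence(s1, s2, 1/3) < D12
assert r_norm_divergence(s2, s1, 1/3) < D21
# (ii): for R > 1 it lies above
assert r_norm_divergence(s1, s2, 2.0) > D12
assert r_norm_divergence(s2, s1, 2.0) > D21
```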

Definition 10

Let \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \) and \( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ) \) be two partitions in a given product MV-algebra \( (A,\,\, \cdot \,). \) Then we define the conditional R-norm divergence of states \( s,\,\,t \in {\mathcal{S}}\;(A) \) with respect to \( \beta \) assuming a realization of \( \alpha , \) for \( R \in (0,\,\,1) \cup (1,\,\,\infty ), \) as the number:

$$ d_{R}^{\beta /\alpha } (s\parallel t) = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} t(x_{i} \cdot y_{j} )^{1 - R} } } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} } \right). $$

Theorem 14

Let \( \alpha ,\,\,\beta \) be two partitions in a given product MV-algebra \( (A,\,\, \cdot \,). \) Then, for every \( s,\,\,t \in {\mathcal{S}}\;(A), \) we have:

$$ d_{R}^{\alpha \vee \beta } (s\parallel t) = d_{R}^{\alpha } (s\parallel t) + d_{R}^{\beta /\alpha } (s\parallel t). $$

Proof

Assume that \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ), \)\( \beta = (y_{1} ,y_{2} , \ldots ,y_{m} ). \) Then we have:

$$ \begin{aligned} & d_{R}^{\alpha } (s\parallel t) + d_{R}^{\beta /\alpha } (s\parallel t) \, = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1} \right) \\ & \quad + \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} t(x_{i} \cdot y_{j} )^{1 - R} } } } \right]^{{\tfrac{1}{R}}} - \left[ {\sum\limits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} } \right) \\ & = \frac{R}{R - 1}\left( {\left[ {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {s(x_{i} \cdot y_{j} )^{R} t(x_{i} \cdot y_{j} )^{1 - R} } } } \right]^{{\tfrac{1}{R}}} - 1} \right) = d_{R}^{\alpha \vee \beta } (s\parallel t). \\ \end{aligned} $$
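The additivity of Theorem 14 can be checked numerically. In the following Python sketch, the joint state values \( s(x_{i} \cdot y_{j} ) \) are modeled as a matrix whose row sums give the values \( s(x_{i} ) \); the matrices are hypothetical and chosen only for illustration:

```python
def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

def cond_r_norm_divergence(S, T, R):
    """d_R^{beta/alpha} per Definition 10, computed from joint value matrices."""
    joint = sum(S[i][j] ** R * T[i][j] ** (1.0 - R)
                for i in range(len(S)) for j in range(len(S[0])))
    marg = sum(sum(S[i]) ** R * sum(T[i]) ** (1.0 - R) for i in range(len(S)))
    return R / (R - 1.0) * (joint ** (1.0 / R) - marg ** (1.0 / R))

# hypothetical joint state values s(x_i . y_j) and t(x_i . y_j), each summing to 1
S = [[0.1, 0.3], [0.4, 0.2]]
T = [[0.2, 0.2], [0.3, 0.3]]

for R in (0.5, 2.0):
    lhs = r_norm_divergence([v for row in S for v in row],
                            [v for row in T for v in row], R)  # alpha v beta
    rhs = (r_norm_divergence([sum(row) for row in S],
                             [sum(row) for row in T], R)       # alpha
           + cond_r_norm_divergence(S, T, R))                  # beta given alpha
    assert abs(lhs - rhs) < 1e-12
```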

Finally, we prove that the R-norm divergence is a convex function on the family \( {\mathcal{S}}\;(A). \)

Theorem 15

Let \( \alpha \) be a partition in a product MV-algebra \( (A,\,\, \cdot \,). \) Then, for every \( s_{1} ,\,\,s_{2} ,\,\,t \in {\mathcal{S}}\;(A), \) and for every real number \( \lambda \in [0,\,1], \) the following inequality holds:

$$ d_{R}^{\alpha } (\lambda s_{1} + (1 - \lambda )s_{2} \parallel t) \le \lambda d_{R}^{\alpha } (s_{1} \parallel t) + (1 - \lambda )d_{R}^{\alpha } (s_{2} \parallel t). $$
(15)

Proof

Assume that \( \alpha = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) and \( \lambda \in [0,\,1]. \) Putting \( a_{i} = \lambda s_{1} (x_{i} )t(x_{i} )^{{\tfrac{1 - R}{R}}} , \) and \( b_{i} = (1 - \lambda )s_{2} (x_{i} )t(x_{i} )^{{\tfrac{1 - R}{R}}} , \)\( i = 1,2, \ldots ,n, \) in the Minkowski inequality, we get for \( R > 1{:} \)

$$ \begin{aligned} & \left[ {\sum\limits_{i = 1}^{n} {(\lambda s_{1} (x_{i} ) + (1 - \lambda )s_{2} (x_{i} ))^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} = \left[ {\sum\limits_{i = 1}^{n} {\left( {(\lambda s_{1} (x_{i} ) + (1 - \lambda )s_{2} (x_{i} ))t(x_{i} )^{{\tfrac{1 - R}{R}}} } \right)^{R} } } \right]^{{\tfrac{1}{R}}} \\ & \quad = \left[ {\sum\limits_{i = 1}^{n} {\left( {\lambda s_{1} (x_{i} )t(x_{i} )^{{\tfrac{1 - R}{R}}} + (1 - \lambda )s_{2} (x_{i} )t(x_{i} )^{{\tfrac{1 - R}{R}}} } \right)^{R} } } \right]^{{\tfrac{1}{R}}} \le \left[ {\sum\limits_{i = 1}^{n} {\left( {\lambda s_{1} (x_{i} )t(x_{i} )^{{\tfrac{1 - R}{R}}} } \right)^{R} } } \right]^{{\tfrac{1}{R}}} + \left[ {\sum\limits_{i = 1}^{n} {\left( {(1 - \lambda )s_{2} (x_{i} )t(x_{i} )^{{\tfrac{1 - R}{R}}} } \right)^{R} } } \right]^{{\tfrac{1}{R}}} \\ & \quad = \lambda \left[ {\sum\limits_{i = 1}^{n} {s_{1} (x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} + (1 - \lambda )\left[ {\sum\limits_{i = 1}^{n} {s_{2} (x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} , \\ \end{aligned} $$

and for \( 0 < R < 1{:} \)

$$ \begin{aligned} & \left[ {\sum\limits_{i = 1}^{n} {(\lambda s_{1} (x_{i} ) + (1 - \lambda )s_{2} (x_{i} ))^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} \\ & \quad \ge \lambda \left[ {\sum\limits_{i = 1}^{n} {s_{1} (x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} + (1 - \lambda )\left[ {\sum\limits_{i = 1}^{n} {s_{2} (x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} . \\ \end{aligned} $$

This means that the function \( s \mapsto \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} \) is convex in \( s \) for \( R > 1, \) and concave in \( s \) for \( 0 < R < 1. \) The same applies for the function \( s \mapsto \left[ {\sum\nolimits_{i = 1}^{n} {s(x_{i} )^{R} t(x_{i} )^{1 - R} } } \right]^{{\tfrac{1}{R}}} - 1. \) Since \( \frac{R}{R - 1} > 0 \) for \( R > 1, \) and \( \frac{R}{R - 1} < 0 \) for \( 0 < R < 1, \) we conclude that the function \( s \mapsto d_{R}^{\alpha } (s\parallel t) \) is convex on the family \( {\mathcal{S}}\;(A), \) which means that the inequality (15) holds.
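As a final numerical illustration, the convexity asserted in Theorem 15 can be checked as follows (probability-vector representation of state values assumed, as in the earlier sketches):

```python
def r_norm_divergence(s, t, R):
    inner = sum(si ** R * ti ** (1.0 - R) for si, ti in zip(s, t))
    return R / (R - 1.0) * (inner ** (1.0 / R) - 1.0)

# hypothetical state values of a three-element partition
s1 = [0.7, 0.2, 0.1]
s2 = [0.2, 0.3, 0.5]
t = [0.4, 0.3, 0.3]
lam = 0.6
mix = [lam * a + (1 - lam) * b for a, b in zip(s1, s2)]

for R in (0.5, 2.0):
    # inequality (15): the divergence of the mixture is below the mixture of divergences
    assert r_norm_divergence(mix, t, R) \
        <= lam * r_norm_divergence(s1, t, R) \
        + (1 - lam) * r_norm_divergence(s2, t, R) + 1e-12
```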

5 Conclusions

In this paper, we have extended the results concerning the Shannon entropy and the K–L divergence in product MV-algebras to the case of the R-norm entropy and R-norm divergence. We introduced the notion of the R-norm entropy of finite partitions in product MV-algebras and derived its basic properties. In addition, we introduced the notion of the R-norm divergence in product MV-algebras and proved basic properties of this quantity. In particular, it was shown that the K–L divergence and the Shannon entropy of partitions in a given product MV-algebra can be obtained as limits of the R-norm divergence and the R-norm entropy, respectively. We have also provided numerical examples to illustrate the results. As mentioned above, the full tribe of fuzzy sets represents a special case of product MV-algebras, so the obtained results can be applied immediately to this important class of fuzzy sets.