1 Introduction

Formulated by Heisenberg [1], the uncertainty relation gives insight into the differences between classical and quantum mechanics. According to the relation, the outcomes of simultaneous measurements of non-commuting observables of a particle cannot be predicted with arbitrary precision.

Numerous studies of uncertainty relations led to the entropic formulation of Białynicki-Birula and Mycielski [2,3,4], expressed as a sum of two continuous Shannon entropies of the probability distributions of position and momentum. As our goal is to consider general observables, let us choose two non-commuting Hermitian operators X and Y. The first uncertainty relation that holds for a pair of arbitrary observables was derived by Deutsch [5]

$$\begin{aligned} H(X)+H(Y)\ge -2 \log \frac{1+c}{2}=B_D, \end{aligned}$$
(1)

where H(X) and H(Y) denote the Shannon entropies of the probability distributions obtained from measurements of X and Y, respectively. If \(|\phi _j\rangle \), \(|\psi _k\rangle \) are the eigenvectors of X and Y, respectively, then \(c=\max _{j,k}|\langle {\psi _k}|{\phi _j}\rangle |\). Kraus conjectured [6] and Maassen and Uffink [7] proved a stronger result

$$\begin{aligned} H(X) + H(Y) \ge -2\log c=B_{MU}, \end{aligned}$$
(2)

where H(X), H(Y) and c are the same as in the relation proposed by Deutsch.
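
As a quick illustration (ours, not taken from the original text), both bounds can be evaluated numerically for a pair of qubit bases; the rotation angle below is an arbitrary choice and the natural logarithm is used.

```python
import numpy as np

def bounds_from_overlap(c):
    """Deutsch and Maassen-Uffink bounds for maximal overlap c (natural log)."""
    B_D = -2 * np.log((1 + c) / 2)   # Eq. (1)
    B_MU = -2 * np.log(c)            # Eq. (2)
    return B_D, B_MU

# Two qubit bases: the computational basis and a basis rotated by an angle eps
# (the angle below is an arbitrary illustrative choice).
eps = np.pi / 6
X_basis = np.eye(2)
Y_basis = np.array([[np.cos(eps), -np.sin(eps)],
                    [np.sin(eps),  np.cos(eps)]])
c = np.max(np.abs(X_basis.T @ Y_basis))
B_D, B_MU = bounds_from_overlap(c)
print(c, B_D, B_MU)                  # B_MU >= B_D, so bound (2) is at least as strong
```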

Entropic uncertainty relations are a very active field of scientific inquiry [8, 9]. One of the reasons is their application in quantum cryptography [10,11,12]. Another area where entropic uncertainty relations are widely used is the study of quantum phenomena such as correlations and non-locality [13,14,15]. Several results have been generalized: entropic formulations of the uncertainty relation in terms of Rényi entropies are included in [16], and uncertainty relations for mutually unbiased bases and symmetric informationally complete measurements in terms of the generalized entropies of Rényi and Tsallis can be found in [17].

In [18] it was shown that entropic uncertainty relations for binary observables can be derived from effective anti-commutation, which can be important in device-independent cryptography. This result was generalized in [19] to entropic uncertainty relations in the presence of quantum memory.

Majorization-based bounds for uncertainty relations were first introduced by Partovi in [20] and were later generalized in [21, 22]. In [21], majorization techniques were applied to obtain a lower bound for the uncertainty relation, which can be stronger than the well-known result of Maassen and Uffink. The formulation of the strong majorization uncertainty relation presented in [23] is involved, but in the case of qubits it can be expressed as

$$\begin{aligned} H(X) + H(Y) \ge -c \log c - (1-c) \log (1-c) = B_{Maj2}. \end{aligned}$$
(3)

The asymptotic analysis of entropic uncertainty relations for random measurements has been provided in [24] with the use of majorization bounds. Some interesting results along these lines are included in [25,26,27].

In [28], Berta et al. considered the uncertainty relation for a system in the presence of a quantum memory. In this setup, the system is described by a bipartite density matrix \(\rho _{AB}\). The quantum conditional entropy can be defined as

$$\begin{aligned} S(A|B) = S(A, B) - S(B), \end{aligned}$$
(4)

where S(B) denotes the von Neumann entropy of the state \(\rho _B=\mathrm {Tr}_A \rho _{AB}\). Equation (4) is also known as the chain rule. We also introduce the states \(\rho _{XB}\) and \(\rho _{YB}\) as

$$\begin{aligned} \begin{aligned} \rho _{XB}&= \sum _i \left( |\phi _i\rangle \langle \phi _i| \otimes \mathbb {1}\right) \rho _{AB} \left( |\phi _i\rangle \langle \phi _i| \otimes \mathbb {1}\right) \\ \rho _{YB}&= \sum _i \left( |\psi _i\rangle \langle \psi _i| \otimes \mathbb {1}\right) \rho _{AB} \left( |\psi _i\rangle \langle \psi _i| \otimes \mathbb {1}\right) , \end{aligned} \end{aligned}$$
(5)

which are the post-measurement states obtained when the measurements are performed on part A. Berta et al. [28] showed that the bound on the uncertainties of the measurement outcomes depends on the amount of entanglement between the measured particle and the quantum memory. As a consequence, they formulated a conditional uncertainty relation given as

$$\begin{aligned} S(X|B) + S(Y|B) \ge B_{MU} + S(A|B)=B_{BCCRR}. \end{aligned}$$
(6)

The entropy S(A|B) quantifies the amount of entanglement between the particle and the memory. The bound of Berta et al. [28] was improved by Coles and Piani [29] by replacing \(B_{MU}\) with a larger parameter, and the result of Coles and Piani was further improved in [30]. The relation has also been generalized to Rényi entropies; several important results can be found in [31,32,33]. The uncertainty relation has also been considered in the context of quantum-to-classical randomness extractors (QC-extractors) [34], where it was proved that QC-extractors give rise to uncertainty relations in the presence of a quantum memory.

In the absence of quantum memory, bound (6) reduces to (2) for pure \(\rho _{AB}\). The results of Berta et al. [28] and of Li et al. [35] can be applied to witnessing entanglement, as S(A|B) can be negative only for an entangled state \(\rho _{AB}\). Another field of application of entropic uncertainty relations in the presence of quantum memory is quantum cryptography [9]. The bound of Berta et al. [28] was also validated experimentally [36].
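
To make these conditional quantities concrete, the following minimal sketch (ours, not from the original paper) builds a two-qubit state \(\rho _{AB}\), the post-measurement states of Eq. (5), and checks bound (6); the parameter values and the use of the natural logarithm are arbitrary choices.

```python
import numpy as np

def von_neumann(rho):
    """Von Neumann entropy (natural logarithm); zero eigenvalues are skipped."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def rotation(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def measured_state(rho, t):
    """Post-measurement state of Eq. (5): measure part A in the basis O(t)|i>."""
    out = np.zeros_like(rho)
    for i in range(2):
        P = np.kron(np.outer(rotation(t)[:, i], rotation(t)[:, i]), np.eye(2))
        out += P @ rho @ P
    return out

# Example state |psi_AB> = sqrt(lam)|00> + sqrt(1-lam)|11>; the parameter
# values below are illustrative only.
lam, theta, eps = 0.2, 0.3, np.pi / 5
psi = np.sqrt(lam) * np.kron([1, 0], [1, 0]) + np.sqrt(1 - lam) * np.kron([0, 1], [0, 1])
rho_AB = np.outer(psi, psi)
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # trace out A

S_A_B = von_neumann(rho_AB) - von_neumann(rho_B)                 # S(A|B), Eq. (4)
S_X_B = von_neumann(measured_state(rho_AB, theta)) - von_neumann(rho_B)
S_Y_B = von_neumann(measured_state(rho_AB, theta + eps)) - von_neumann(rho_B)

c = np.max(np.abs(rotation(theta).T @ rotation(theta + eps)))
B_BCCRR = -2 * np.log(c) + S_A_B                                 # bound (6)
print(S_X_B + S_Y_B >= B_BCCRR - 1e-9)                           # True
```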

In this paper we aim at finding entanglement-dependent entropic uncertainty relations in terms of von Neumann and Tsallis entropies. Our results apply to states with a fixed amount of entanglement, described by a parameter \(\lambda \). This allows us to find non-trivial bounds for the entropic uncertainty relation; otherwise we would obtain a lower bound equal to zero, attained in the case of the maximally entangled state. Notice that Berta et al. formulated their bound in a similar way: in their approach the information about entanglement is contained in the term S(A|B), and the bound is likewise zero for the maximally entangled state.

Let us now recall the notion of the Tsallis entropy [37], which is a non-additive generalization of the von Neumann entropy; for a state \(\rho _X\) it is defined as

$$\begin{aligned} T_q(X) = \frac{1}{q-1}\left( 1-\sum _i \nu _i^q \right) , \end{aligned}$$
(7)

where \(\nu _i\) are the eigenvalues of \(\rho _X\) and \(q\in [0, \infty )\). The Tsallis entropy is identical to the Havrda–Charvát structural \(\alpha \) entropy [38] known in information theory. Note that when \(q\rightarrow 1\) we have \(T_q(X) \rightarrow S(X)\). The chain rule also applies to Tsallis entropies; hence

$$\begin{aligned} T_q(A|B) = T_q(A, B) - T_q(B). \end{aligned}$$
(8)

We will use the following notation for Tsallis point entropy

$$\begin{aligned} t_q(x) = \frac{1}{q-1}\left( 1- x^q - (1-x)^q\right) . \end{aligned}$$
(9)

In the limit \(q \rightarrow 1\) we recover the binary Shannon entropy

$$\begin{aligned} h(x) = \eta (x) + \eta (1-x), \end{aligned}$$
(10)

where \(\eta (x)=-x\log x\).
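
These definitions translate directly into code. The sketch below (ours) implements Eqs. (7), (9) and (10) using the natural logarithm, so that the \(q\rightarrow 1\) limits reproduce the von Neumann and Shannon forms.

```python
import numpy as np

def h(x):
    """Binary Shannon entropy, Eq. (10), with eta(x) = -x log x (natural log)."""
    eta = lambda p: 0.0 if p <= 0 else -p * np.log(p)
    return eta(x) + eta(1.0 - x)

def t_q(x, q):
    """Binary Tsallis point entropy, Eq. (9); q -> 1 recovers h(x)."""
    if np.isclose(q, 1.0):
        return h(x)
    return (1.0 - x ** q - (1.0 - x) ** q) / (q - 1.0)

def tsallis(eigvals, q):
    """Tsallis entropy T_q of a state with the given eigenvalues, Eq. (7)."""
    nu = np.asarray(eigvals, dtype=float)
    nu = nu[nu > 0]
    if np.isclose(q, 1.0):                 # q -> 1 recovers the von Neumann entropy
        return float(-np.sum(nu * np.log(nu)))
    return float((1.0 - np.sum(nu ** q)) / (q - 1.0))

print(tsallis([0.5, 0.5], 2.0), t_q(0.5, 2.0))   # both equal 0.5
```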

2 Qubit conditional uncertainty relations

Without loss of generality, let us assume that we start with an entangled state \(\rho _{AB}=|\psi _{AB}\rangle \langle \psi _{AB}|\), where \(|\psi _{AB}\rangle =\sqrt{\lambda }|00\rangle +\sqrt{1-\lambda }|11\rangle \). In this case, the parameter \(\lambda \) describes the entanglement between the parties A and B. We choose the eigenvectors of X and Y as \(|\phi _i\rangle = O(\theta )|i\rangle \) and \(|\psi _i\rangle = O(\theta +\varepsilon )|i\rangle \), where

$$\begin{aligned} O(\theta ) = \left( \begin{matrix} \cos \theta &{} -\sin \theta \\ \sin \theta &{} \cos \theta \end{matrix}\right) \in SO(2) \end{aligned}$$
(11)

is a real rotation matrix. Hence, instead of optimizing the uncertainty relation over all possible states \(\rho _{AB}\), we optimize over \(\theta \). Hereafter we assume \(\theta ,\varepsilon \in [0, \pi /2]\). In this case we have

$$\begin{aligned} c = {\left\{ \begin{array}{ll} |\cos \varepsilon |,&{} \varepsilon \le \pi /4 \\ |\sin \varepsilon |,&{} \varepsilon > \pi /4. \end{array}\right. } \end{aligned}$$
(12)

It is important to notice that we can restrict our attention to real rotation matrices. This follows from the fact that any unitary matrix is equivalent to a real rotation matrix, where matrices are equivalent, \(U \sim V\), if \(V=P_1D_1 U D_2 P_2\) for some permutation matrices \(P_1,P_2\) and diagonal unitary matrices \(D_1,D_2\) [21]. Next we note that the eigenvalues of the states \(\rho _{XB}\) are invariant with respect to this equivalence relation.

We should also note here that the two-qubit scenario, simple as it is, may be easily generalized to an arbitrary dimension of system B.

As we are interested in binary measurements, the states \(\rho _{XB}\) and \(\rho _{YB}\) are rank-2 operators. The nonzero eigenvalues of \(\rho _{XB}\) can be easily obtained as

$$\begin{aligned} \begin{aligned} \mu ^{XB}_1 =&\lambda \sin ^2(\theta ) + (1-\lambda )\cos ^2(\theta ),\\ \mu ^{XB}_2 =&\lambda \cos ^2(\theta ) + (1-\lambda )\sin ^2(\theta ). \end{aligned} \end{aligned}$$
(13)

To obtain the eigenvalues of \(\rho _{YB}\) we need to replace \(\theta \) with \(\theta +\varepsilon \).
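
These eigenvalues are easy to cross-check numerically. The sketch below (ours) constructs \(\rho _{XB}\) as in Eq. (5) and compares its nonzero spectrum with Eq. (13); the parameter values are arbitrary.

```python
import numpy as np

lam, theta = 0.3, 0.7                            # arbitrary illustrative values

psi = np.sqrt(lam) * np.kron([1, 0], [1, 0]) + np.sqrt(1 - lam) * np.kron([0, 1], [0, 1])
rho_AB = np.outer(psi, psi)

O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho_XB = sum(np.kron(np.outer(O[:, i], O[:, i]), np.eye(2))
             @ rho_AB
             @ np.kron(np.outer(O[:, i], O[:, i]), np.eye(2)) for i in range(2))

numerical = np.sort(np.linalg.eigvalsh(rho_XB))[-2:]      # the two nonzero eigenvalues
analytical = np.sort([lam * np.sin(theta) ** 2 + (1 - lam) * np.cos(theta) ** 2,
                      lam * np.cos(theta) ** 2 + (1 - lam) * np.sin(theta) ** 2])
print(np.allclose(numerical, analytical))                 # True, matching Eq. (13)
```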

2.1 Analytical minima

Using the eigenvalues of \(\rho _{XB}\) and \(\rho _{YB}\), we arrive at

$$\begin{aligned} T_q(X|B) + T_q(Y|B) = t_q(\mu _1^{XB}) + t_q(\mu _1^{YB}) -2 t_q(\lambda ). \end{aligned}$$
(14)

Let us perform a detailed analysis of the case \(q\rightarrow 1\), i.e., the von Neumann entropy case. We get

$$\begin{aligned} S(X|B)+S(Y|B) = h(\mu _1^{XB})+h(\mu _1^{YB}) - 2h(\lambda ) \end{aligned}$$
(15)

In order to obtain an uncertainty relation, we need to minimize this quantity over the parameter \(\theta \). This is a complicated task even in the case \(\lambda =0\) and has been studied earlier [39]. We guess that \(\theta = \pi /2-\varepsilon /2\) is an extremal point of (15). Unfortunately, this point is the global minimum only when

$$\begin{aligned} -c \tanh ^{-1} ((1-2\lambda )c)+\frac{(2\lambda -1)(1-c^2)}{c^2(1-2\lambda )^2 -1} < 0. \end{aligned}$$
(16)

A numerical solution of this inequality is shown in Fig. 1. When this condition is satisfied, the uncertainty relation is

$$\begin{aligned} \begin{aligned}&S(X|B)+S(Y|B) \\&\quad \ge \log 4+\eta (1+c-2\lambda c)+\eta (1-c+2\lambda c) -2h(\lambda ). \end{aligned} \end{aligned}$$
(17)

When the condition in Eq. (16) is not satisfied, our guessed extremal point becomes a maximum and two minima emerge, located symmetrically with respect to \(\theta =\pi /2-\varepsilon /2\). The reasoning can be generalized to \(T_q\) in a straightforward, yet cumbersome way. The details are presented in “Appendix.” The solutions of inequality (16), along with those of inequality (32), for various values of q are shown in Fig. 1.
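
Condition (16) and the location of the minimum can also be probed numerically. The sketch below (ours) evaluates (15) on a grid of \(\theta \) and compares the value at \(\theta =\pi /2-\varepsilon /2\) with the grid minimum; the parameter values are arbitrary and the natural logarithm is used.

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log(x) - (1 - x) * np.log(1 - x)

def lhs(theta, lam, eps):
    """S(X|B) + S(Y|B) from Eq. (15)."""
    mu_X = lam * np.sin(theta) ** 2 + (1 - lam) * np.cos(theta) ** 2
    mu_Y = lam * np.sin(theta + eps) ** 2 + (1 - lam) * np.cos(theta + eps) ** 2
    return h(mu_X) + h(mu_Y) - 2 * h(lam)

lam, eps = 0.15, np.pi / 8                       # arbitrary illustrative values
c = np.cos(eps) if eps <= np.pi / 4 else np.sin(eps)

# Left-hand side of condition (16)
cond16 = -c * np.arctanh((1 - 2 * lam) * c) \
         + (2 * lam - 1) * (1 - c ** 2) / (c ** 2 * (1 - 2 * lam) ** 2 - 1)

thetas = np.linspace(0, np.pi / 2, 20001)
grid_min = lhs(thetas, lam, eps).min()
candidate = lhs(np.pi / 2 - eps / 2, lam, eps)

# When cond16 < 0, the point theta = pi/2 - eps/2 should be the global minimum.
print(cond16 < 0, np.isclose(candidate, grid_min))
```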

Fig. 1 Numerical solution of inequality (16) (\(q=1\)) as a function of \(\lambda \), along with the solutions of the corresponding inequalities for other chosen values of q

2.2 Bounding the conditional entropies

In order to study the case of general Tsallis entropies \(T_q\), we introduce the following proposition

Proposition 1

Let \(\alpha \in [0,1]\) and \(q \in [0, 2] \cup [3,\infty )\), then

$$\begin{aligned} \begin{aligned}&t_q\big (\alpha p+(1-\alpha )(1-p)\big )\\&\quad \ge 4 \frac{\left( \alpha ^q+(1-\alpha )^q-2^{1-q}\right) }{\alpha ^q+(1-\alpha )^q+q-2} p(1-p) \big (1-t_q(\alpha )\big ) + t_q(\alpha ). \end{aligned} \end{aligned}$$
(18)

In the cases \(q=2\) and \(q=3\) we have an equality.

Proof

We define

$$\begin{aligned} \begin{aligned} f(p)&= t_q\big (\alpha p+(1-\alpha )(1-p)\big )\\&\quad - 4 \frac{\left( \alpha ^q+(1-\alpha )^q-2^{1-q}\right) }{\alpha ^q+(1-\alpha )^q+q-2} p(1-p) \big (1-t_q(\alpha )\big ) - t_q(\alpha ). \end{aligned} \end{aligned}$$
(19)

Next we note that \(f(0) = f(\frac{1}{2})=0\). We will show that f has no other zeros on interval \((0, \frac{1}{2})\). We calculate

$$\begin{aligned} \begin{aligned} f{'''}(p)&=(2 \alpha -1)^3 (q-2) q \big ((\alpha -2 \alpha p+p)^{q-3}\\&\quad -(-\alpha +(2 \alpha -1) p+1)^{q-3}\big ), \end{aligned} \end{aligned}$$
(20)

which is positive for \(q\in [0,2)\cup (3,\infty )\) and \(p \in [0, \frac{1}{2}]\). Therefore we obtain that \(f'\) is strictly convex on \((0, \frac{1}{2})\) and \(f'(1/2)=0\).

Now let us assume that for some \(x_0 \in (0,1/2)\) we have \(f(x_0) = 0\). Then, by Rolle’s theorem, there exist points \(0< y_0< x_0< y_1 < \frac{1}{2}\) such that \(f'(y_0) = f'(y_1) = 0\). Together with the fact that \(f'(1/2)=0\), this gives \(f'\) three distinct zeros, which contradicts the strict convexity of \(f'\) on \((0,\frac{1}{2})\), since a strictly convex function has at most two zeros.

The last thing to show is that for some \(\varepsilon \in (0,\frac{1}{2})\) we have \(f(\varepsilon ) >0\). To show this we write

$$\begin{aligned} \begin{aligned} f'(0)&= \frac{\alpha ^{q-1} (2 \alpha (q-2)-q)}{q-1}\\&\quad +\frac{(4 \alpha -2 \alpha q+q-4) (1-\alpha )^{q-1}+2^{3-q}}{q-1} =: g(\alpha ). \end{aligned} \end{aligned}$$
(21)

Now we note that \(g(\alpha )\) is positive for \(\alpha \in (0,1) \setminus \left\{ \frac{1}{2} \right\} \) and \(q \in [0,2)\cup (3,\infty )\). This follows from the convexity of g on these sets and the fact that it attains its minimum \(g\left( \frac{1}{2}\right) = 0\). Since \(f(0)=0\) and \(f'(0)=g(\alpha )>0\), there exists \(\varepsilon >0\) such that \(f(\varepsilon )>0\).

The equalities in the cases \(q=2\) and \(q=3\) follow from direct inspection. \(\square \)
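
As a sanity check (ours, not part of the proof), inequality (18) can be tested by brute force on a grid of \(\alpha \) and p for a few admissible values of q.

```python
import numpy as np

def t_q(x, q):
    return (1.0 - x ** q - (1.0 - x) ** q) / (q - 1.0)   # Eq. (9), q != 1

def rhs(alpha, p, q):
    """Right-hand side of inequality (18)."""
    coeff = 4 * (alpha ** q + (1 - alpha) ** q - 2 ** (1 - q)) \
            / (alpha ** q + (1 - alpha) ** q + q - 2)
    return coeff * p * (1 - p) * (1 - t_q(alpha, q)) + t_q(alpha, q)

alphas = np.linspace(0.01, 0.99, 99)
ps = np.linspace(0.0, 1.0, 101)
for q in (0.5, 1.5, 3.5, 5.0):                           # values in [0, 2) and (3, inf)
    worst = min(t_q(a * p + (1 - a) * (1 - p), q) - rhs(a, p, q)
                for a in alphas for p in ps)
    print(q, worst >= -1e-9)                             # inequality (18) holds
```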

Now we are ready to state and prove the main result of this work

Theorem 1

Let \(\rho _{AB}=|\psi _{AB}\rangle \langle \psi _{AB}|\), where \(|\psi _{AB}\rangle =\sqrt{\lambda }|00\rangle +\sqrt{1-\lambda }|11\rangle \). Let us choose two observables X and Y with eigenvectors \(|\phi _i\rangle = O(\theta )|i\rangle \) and \(|\psi _i\rangle = O(\theta +\varepsilon )|i\rangle \), where \(O(\theta )\) is as in Eq. (11). Then the Tsallis entropic conditional uncertainty relation is

$$\begin{aligned} T_q(X|B)+T_q(Y|B) \ge 2\frac{\lambda ^q+(1-\lambda )^q-2^{1-q}}{\lambda ^q+(1-\lambda )^q+q-2} (1-t_q(\lambda )) (1-c^2). \end{aligned}$$
(22)

Proof

Applying Proposition 1 to Eq. (14), with \(\alpha =\lambda \) and \(p=\sin ^2\theta \) for the first term (and \(p=\sin ^2(\theta +\varepsilon )\) for the second), and using \(4p(1-p)=\sin ^2 2\theta \), we get

$$\begin{aligned} \begin{aligned}&T_q(X|B)+T_q(Y|B)\\&\quad \ge \frac{\lambda ^q+(1-\lambda )^q-2^{1-q}}{\lambda ^q+(1-\lambda )^q+q-2} (1-t_q(\lambda )) (\sin ^2(2\theta +2\epsilon )+\sin ^2 2\theta ) \end{aligned} \end{aligned}$$
(23)

The right-hand side achieves a unique minimum at \(\theta =\pi /2-\varepsilon /2\) for \(\varepsilon \le \pi /4\) and at \(\theta =\pi /4-\varepsilon /2\) for \(\varepsilon > \pi /4\). Inserting these values, we recover Eq. (22). \(\square \)
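
The bound (22) can be tested numerically as well. The sketch below (ours) evaluates both sides on a grid of \(\theta \), reusing the eigenvalues of Eq. (13); the parameter values are arbitrary.

```python
import numpy as np

def t_q(x, q):
    return (1.0 - x ** q - (1.0 - x) ** q) / (q - 1.0)   # Eq. (9), q != 1

def lhs(theta, lam, eps, q):
    """T_q(X|B) + T_q(Y|B) from Eq. (14), using the eigenvalues of Eq. (13)."""
    mu_X = lam * np.sin(theta) ** 2 + (1 - lam) * np.cos(theta) ** 2
    mu_Y = lam * np.sin(theta + eps) ** 2 + (1 - lam) * np.cos(theta + eps) ** 2
    return t_q(mu_X, q) + t_q(mu_Y, q) - 2 * t_q(lam, q)

def bound(lam, eps, q):
    """Right-hand side of Eq. (22)."""
    c = np.cos(eps) if eps <= np.pi / 4 else np.sin(eps)
    coeff = (lam ** q + (1 - lam) ** q - 2 ** (1 - q)) \
            / (lam ** q + (1 - lam) ** q + q - 2)
    return 2 * coeff * (1 - t_q(lam, q)) * (1 - c ** 2)

lam, eps, q = 0.25, np.pi / 6, 1.5                       # arbitrary illustrative values
thetas = np.linspace(0, np.pi / 2, 2001)
print(np.all(lhs(thetas, lam, eps, q) >= bound(lam, eps, q) - 1e-9))   # True
```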

Remark 1

In the limit \(q\rightarrow 1\) we obtain the following uncertainty relation in terms of conditional von Neumann entropies

$$\begin{aligned} S(X|B)+S(Y|B) \ge 2(\log 2 - h(\lambda )) (1-c^2) = B_{KPP}. \end{aligned}$$
(24)

Remark 2

Using the concavity of the conditional von Neumann entropy, we may generalize bound (24) to mixed states \(\rho _{AB}\). We get

$$\begin{aligned} S(X|B) + S(Y|B) \ge 2(\log 2 - S(B))(1-c^2). \end{aligned}$$
(25)

In order to see this, we consider a system in a mixed state \(\rho _{AB}\) and its decomposition into pure states \(\rho _{AB} = \sum _i p_i \rho ^{(i)}_{AB}\). We will use a lower index to indicate the state of the system. The post-measurement states \(\rho _{XB},\rho _{YB}\) are defined as in Eq. (5). Now we write

$$\begin{aligned} \begin{aligned}&S(X|B)_{\rho _{XB}} + S(Y|B)_{\rho _{YB}} \ge \sum _i p_i S(X|B)_{\rho ^{(i)}_{XB}} +\sum _i p_i S(Y|B)_{\rho ^{(i)}_{YB}}\\&\quad \ge 2\log 2 (1-c^2) - 2(1-c^2) \sum _{i}p_i S(B)_{\rho ^{(i)}_B}\\&\quad \ge 2\log 2 (1-c^2) - 2(1-c^2) S(B)_{\rho _B}. \end{aligned} \end{aligned}$$
(26)

The first inequality above follows from the concavity of the conditional von Neumann entropy. The second one is an application of (24) to each pure state in the decomposition, while the third one follows from the concavity of the von Neumann entropy.
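
A numerical sanity check (ours) of bound (25) on a randomly generated mixed two-qubit state is sketched below; the random seed, the measurement angles and the natural logarithm are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def S(rho):
    """Von Neumann entropy (natural logarithm)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def O(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def measure_A(rho, t):
    """Post-measurement state of Eq. (5) for a measurement of A in the basis O(t)|i>."""
    out = np.zeros_like(rho)
    for i in range(2):
        P = np.kron(np.outer(O(t)[:, i], O(t)[:, i]), np.eye(2))
        out += P @ rho @ P
    return out

# Random mixed two-qubit state from a Ginibre matrix.
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho_AB = G @ G.conj().T
rho_AB /= np.trace(rho_AB).real
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # trace out A

theta, eps = 0.4, np.pi / 5
c = np.cos(eps) if eps <= np.pi / 4 else np.sin(eps)
lhs = (S(measure_A(rho_AB, theta)) - S(rho_B)) \
      + (S(measure_A(rho_AB, theta + eps)) - S(rho_B))
rhs = 2 * (np.log(2) - S(rho_B)) * (1 - c ** 2)                  # bound (25)
print(lhs, rhs, lhs >= rhs - 1e-9)
```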

Remark 3

The state-dependent entropic uncertainty relation for \(q\rightarrow 1\) reads

$$\begin{aligned} \begin{aligned} S(X|B) + S(Y|B)&\ge 2(\log 2 - h(\lambda ))(\sin ^2(2\theta + 2\varepsilon )+\sin ^2 2\theta ) \\&= B(\theta ). \end{aligned} \end{aligned}$$
(27)

A comparison with the known entropic uncertainty relations for \(\lambda =0\) and \(q\rightarrow 1\) is shown in Fig. 2. As can be seen, our result gives a tighter bound than the one obtained by Maassen and Uffink for all values of \(\varepsilon \). The bound is also tighter than \(B_{Maj2}\) when \(\varepsilon \) is in the neighborhood of \(\pi /4\).

A comparison of the exact value (15), the state-dependent lower bound (27) and \(B_{BCCRR}\) for different parameters \(\lambda \), \(\theta \) and \(\varepsilon \) is presented in Figs. 3 and 4.
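
In the spirit of the comparison shown in Fig. 2, the following sketch (ours) evaluates \(B_{MU}\), \(B_{Maj2}\) and \(B_{KPP}\) for \(\lambda =0\) over a range of \(\varepsilon \); the comparison is base-independent, and the natural logarithm is used.

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log(x) - (1 - x) * np.log(1 - x)

eps = np.linspace(1e-3, np.pi / 2 - 1e-3, 500)
c = np.where(eps <= np.pi / 4, np.cos(eps), np.sin(eps))   # Eq. (12)

B_MU = -2 * np.log(c)                                      # Eq. (2)
B_Maj2 = h(c)                                              # Eq. (3)
B_KPP = 2 * np.log(2) * (1 - c ** 2)                       # Eq. (24) with lambda = 0, h(0) = 0

print(np.all(B_KPP >= B_MU))     # B_KPP is at least as strong as B_MU here
print(np.any(B_KPP > B_Maj2))    # and exceeds B_Maj2 around eps = pi/4
```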

Fig. 2 Comparison of our result with known bounds in the case \(\lambda =0\). Blue solid line is the (numerical) optimal solution, dashed green is the \(B_{MU}\) bound, black dashed-dotted is \(B_{Maj2}\), and red dotted is \(B_{KPP}\) (Color figure online)

Fig. 3 Comparison of our state-dependent result with \(B_{BCCRR}\) and the exact value of conditional entropies for different parameters \(\lambda \) and \(\epsilon \) as a function of \(\theta \). a \(\lambda =0.1\), \(\epsilon =\pi /4.2\). b \(\lambda =0.1\), \(\epsilon =\pi /6\)

Fig. 4 Comparison of \(B(\theta )\) with \(B_{BCCRR}\) and exact values of conditional entropies as a function of \(\lambda \). Here \(\varepsilon =\pi /8\), \(\theta =\pi /2-\varepsilon /2\)

3 Security of quantum key distribution protocols

One of the possible applications of the uncertainty relation is quantum cryptography, where the relation allows us to bound the amount of key that the parties are able to extract per state.

Assume that an eavesdropper creates a quantum system \(\rho _{ABE}\). Next, parts A and B are distributed to Alice and Bob. The generation of a secret key is based on measurements X, Y and \(X',Y'\) performed by Alice and Bob, respectively. Subsequently, Alice and Bob inform each other of their choices of measurements. The security of the key depends on the correlations between the measurement outcomes.

According to the investigations of Devetak and Winter [40], the amount of extractable key is bounded as \(K\ge H(X|E)-H(X|B)\). Using our bound, we are able to bound the amount of extractable key in terms of von Neumann entropies by

$$\begin{aligned} K \ge 2 (1-c^2)(\log 2 -S(B)) - S(A|B) - S(X|X') -S(Y|Y'). \end{aligned}$$
(28)

In the above, \(S(X|X')\) is the conditional entropy of the state shared by Alice and Bob when the two parties perform the measurements \(X,X'\), respectively. This relates our result to [41]. In our case, Alice and Bob need to upper-bound the entropies \(S(A|B)\), \(S(X|X')\) and \(S(Y|Y')\). These entropies can be bounded by quantities such as the frequency of agreement of the outcomes.
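
As a minimal sketch (ours), the right-hand side of (28) can be evaluated once the parties have estimated the required entropies from their data; the numerical values below are hypothetical placeholders.

```python
import numpy as np

def key_rate_lower_bound(c, S_B, S_A_given_B, S_X_given_Xp, S_Y_given_Yp):
    """Right-hand side of Eq. (28); all entropies in natural-log units."""
    return (2 * (1 - c ** 2) * (np.log(2) - S_B)
            - S_A_given_B - S_X_given_Xp - S_Y_given_Yp)

# Hypothetical placeholder estimates obtained from the parties' data:
c = np.cos(np.pi / 4)            # mutually unbiased qubit measurements
K_bound = key_rate_lower_bound(c, S_B=0.5, S_A_given_B=-0.3,
                               S_X_given_Xp=0.05, S_Y_given_Yp=0.05)
print(K_bound)                   # positive value: key extraction is possible
```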

4 Conclusion

In this paper, we have derived new entanglement-dependent uncertainty relations in terms of von Neumann and Tsallis entropies for qubits and binary observables with respect to quantum side information. Our bounds were compared with the well-known bounds derived by Maassen and Uffink [7], Rudnicki et al. [23] and Berta et al. [28]. This paper can also be treated as a generalization of the results included in [39].

The presented results are expected to find application in witnessing entanglement and in quantum cryptography as a measure of information in quantum key distribution protocols. Verification of our results in potential applications seems to be an interesting task.