Without loss of generality, let us assume that we start with an entangled state \(\rho _{AB}=|\psi _{AB}\rangle \langle \psi _{AB}|\), where \(|\psi _{AB}\rangle =\sqrt{\lambda }|00\rangle +\sqrt{1-\lambda }|11\rangle \). In this case, the parameter \(\lambda \) describes the entanglement between the parties A and B. We choose the eigenvectors of X and Y as \(|\phi _i\rangle = O(\theta )|i\rangle \) and \(|\psi _i\rangle = O(\theta +\varepsilon )|i\rangle \), where
$$\begin{aligned} O(\theta ) = \left( \begin{matrix} \cos \theta &{} -\sin \theta \\ \sin \theta &{} \cos \theta \end{matrix}\right) \in SO(2) \end{aligned}$$
(11)
is a real rotation matrix. Hence, instead of optimizing the uncertainty relation over all possible states \(\rho _{AB}\), we optimize over \(\theta \). Hereafter we assume \(\theta ,\varepsilon \in [0, \pi /2]\). In this case we have
$$\begin{aligned} c = {\left\{ \begin{array}{ll} |\cos \varepsilon |,&{} \varepsilon \le \pi /4 \\ |\sin \varepsilon |,&{} \varepsilon > \pi /4. \end{array}\right. } \end{aligned}$$
(12)
It is important to notice that we may restrict our attention to real rotation matrices. This follows from the fact that any unitary matrix is similar to a real rotation matrix. Matrices are similar, \(U \sim V\), if \(V=P_1D_1 U D_2 P_2\) for some permutation matrices \(P_1,P_2\) and diagonal unitary matrices \(D_1,D_2\) [21]. Next we note that the eigenvalues of the states \(\rho _{XB}\) are invariant under this equivalence relation.
We should also note here that the two-qubit scenario, simple as it is, may be easily generalized to an arbitrary dimension of system B.
As we are interested in binary measurements, the states \(\rho _{XB}\) and \(\rho _{YB}\) are rank-2 operators. The nonzero eigenvalues of \(\rho _{XB}\) can be easily obtained as
$$\begin{aligned} \begin{aligned} \mu ^{XB}_1 =&\lambda \sin ^2(\theta ) + (1-\lambda )\cos ^2(\theta ),\\ \mu ^{XB}_2 =&\lambda \cos ^2(\theta ) + (1-\lambda )\sin ^2(\theta ). \end{aligned} \end{aligned}$$
(13)
To obtain the eigenvalues of \(\rho _{YB}\) we need to replace \(\theta \) with \(\theta +\varepsilon \).
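As a sanity check, the eigenvalues in Eq. (13) can be verified numerically. The sketch below (Python with NumPy; the function name is ours) constructs \(\rho _{XB}=\sum _i (|\phi _i\rangle \langle \phi _i| \otimes \mathbb {1})\, \rho _{AB}\, (|\phi _i\rangle \langle \phi _i| \otimes \mathbb {1})\) for illustrative values of \(\lambda \) and \(\theta \) and compares its spectrum with the closed form:

```python
import numpy as np

def rho_XB_spectrum(lam, theta):
    """Eigenvalues of rho_XB = sum_i (|phi_i><phi_i| x I) rho_AB (|phi_i><phi_i| x I)."""
    psi = np.zeros(4)
    psi[0], psi[3] = np.sqrt(lam), np.sqrt(1 - lam)   # sqrt(lam)|00> + sqrt(1-lam)|11>
    rho_AB = np.outer(psi, psi)
    O = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    rho_XB = np.zeros((4, 4))
    for i in range(2):
        # projector onto |phi_i> = O(theta)|i> on A, identity on B
        P = np.kron(np.outer(O[:, i], O[:, i]), np.eye(2))
        rho_XB += P @ rho_AB @ P
    return np.sort(np.linalg.eigvalsh(rho_XB))[::-1]

lam, theta = 0.3, 0.7                                  # illustrative values
mu = rho_XB_spectrum(lam, theta)
mu1 = lam * np.sin(theta)**2 + (1 - lam) * np.cos(theta)**2   # Eq. (13)
mu2 = lam * np.cos(theta)**2 + (1 - lam) * np.sin(theta)**2
assert np.allclose(sorted([mu[0], mu[1]]), sorted([mu1, mu2]))
assert np.allclose(mu[2:], 0)                          # rank-2, as claimed
```

The same check with \(\theta \) replaced by \(\theta +\varepsilon \) covers \(\rho _{YB}\).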
Analytical minima
Using the eigenvalues of \(\rho _{XB}\) and \(\rho _{YB}\), we arrive at
$$\begin{aligned} T_q(X|B) + T_q(Y|B) = t_q(\mu _1^{XB}) + t_q(\mu _1^{YB}) -2 t_q(\lambda ). \end{aligned}$$
(14)
Let us perform a detailed analysis of the case \(q\rightarrow 1\), i.e., the von Neumann entropy case. We get
$$\begin{aligned} S(X|B)+S(Y|B) = h(\mu _1^{XB})+h(\mu _1^{YB}) - 2h(\lambda ) \end{aligned}$$
(15)
In order to obtain an uncertainty relation, we need to minimize this quantity over the parameter \(\theta \). This is a complicated task even in the case \(\lambda =0\) and has been studied earlier [39]. A natural guess is \(\theta = \pi /2-\varepsilon /2\), which is an extremal point of (14) by symmetry. Unfortunately, this point is the global minimum only when
$$\begin{aligned} -c \tanh ^{-1} ((1-2\lambda )c)+\frac{(2\lambda -1)(1-c^2)}{c^2(1-2\lambda )^2 -1} < 0. \end{aligned}$$
(16)
A numerical solution of this inequality is shown in Fig. 1. When this condition is satisfied, the uncertainty relation is
$$\begin{aligned} \begin{aligned}&S(X|B)+S(Y|B) \\&\quad \ge \log 4+\eta (1+c-2\lambda c)+\eta (1-c+2\lambda c) -2h(\lambda ). \end{aligned} \end{aligned}$$
(17)
When the condition in Eq. (16) is not satisfied, our guessed extremal point becomes a maximum and two minima emerge, placed symmetrically about \(\theta =\pi /2-\varepsilon /2\). The reasoning can be generalized to \(T_q\) in a straightforward, yet cumbersome way; the details are presented in the Appendix. The solutions of inequality (16), along with those of inequality (32), for various values of q are shown in Fig. 1.
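The behavior of the minimum can be probed numerically. The sketch below (function names ours; binary entropy in nats, parameter values chosen for illustration) evaluates Eq. (15) on a grid of \(\theta \), checks the left-hand side of condition (16), and confirms that \(\theta =\pi /2-\varepsilon /2\) is the global minimum when the condition holds:

```python
import numpy as np

def h(x):
    """Binary entropy in nats; h(0) = h(1) = 0."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log(x) - (1 - x) * np.log1p(-x)

def S_sum(lam, theta, eps):
    """Exact value of S(X|B) + S(Y|B), Eq. (15)."""
    mu_x = lam * np.sin(theta)**2 + (1 - lam) * np.cos(theta)**2
    mu_y = lam * np.sin(theta + eps)**2 + (1 - lam) * np.cos(theta + eps)**2
    return h(mu_x) + h(mu_y) - 2 * h(lam)

def cond16(lam, eps):
    """Left-hand side of condition (16), with c from Eq. (12)."""
    c = np.cos(eps) if eps <= np.pi / 4 else np.sin(eps)
    return (-c * np.arctanh((1 - 2 * lam) * c)
            + (2 * lam - 1) * (1 - c**2) / (c**2 * (1 - 2 * lam)**2 - 1))

lam, eps = 0.1, 0.5                      # illustrative values; cond16 < 0 here
thetas = np.linspace(0, np.pi / 2, 100001)
theta_star = thetas[np.argmin(S_sum(lam, thetas, eps))]
guess = np.pi / 2 - eps / 2

assert cond16(lam, eps) < 0              # condition (16) is satisfied
assert abs(theta_star - guess) < 1e-3    # the guess is the global minimum
```

Repeating the scan for parameters violating (16) exhibits the two symmetric minima instead.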
Bounding the conditional entropies
In order to study the case of general Tsallis entropies \(T_q\), we introduce the following proposition.
Proposition 1
Let \(\alpha \in [0,1]\) and \(q \in [0, 2] \cup [3,\infty )\). Then
$$\begin{aligned} \begin{aligned}&t_q\big (\alpha p+(1-\alpha )(1-p)\big )\\&\quad \ge 4 \frac{\left( \alpha ^q+(1-\alpha )^q-2^{1-q}\right) }{\alpha ^q+(1-\alpha )^q+q-2} p(1-p) \big (1-t_q(\alpha )\big ) + t_q(\alpha ). \end{aligned} \end{aligned}$$
(18)
In the cases \(q=2\) and \(q=3\) equality holds.
Proof
We define
$$\begin{aligned} \begin{aligned} f(p)&= t_q\big (\alpha p+(1-\alpha )(1-p)\big )\\&\quad - 4 \frac{\left( \alpha ^q+(1-\alpha )^q-2^{1-q}\right) }{\alpha ^q+(1-\alpha )^q+q-2} p(1-p) \big (1-t_q(\alpha )\big ) - t_q(\alpha ). \end{aligned} \end{aligned}$$
(19)
Next we note that \(f(0) = f(\frac{1}{2})=0\). We will show that f has no other zeros on the interval \((0, \frac{1}{2})\). We calculate
$$\begin{aligned} \begin{aligned} f{'''}(p)&=(2 \alpha -1)^3 (q-2) q \big ((\alpha -2 \alpha p+p)^{q-3}\\&\quad -(-\alpha +(2 \alpha -1) p+1)^{q-3}\big ), \end{aligned} \end{aligned}$$
(20)
which is positive for \(q\in [0,2)\cup (3,\infty )\) and \(p \in [0, \frac{1}{2}]\). Therefore \(f'\) is strictly convex on \((0, \frac{1}{2})\); moreover, \(f'(1/2)=0\).
Now let us assume that for some \(x_0 \in (0,1/2)\) we have \(f(x_0) = 0\). Then by Rolle’s theorem, there exist points \(0< y_0< x_0< y_1 < \frac{1}{2}\) such that \(f'(y_0) = f'(y_1) = 0\). Together with the fact that \(f'(1/2)=0\), this contradicts the strict convexity of \(f'\) on \((0,\frac{1}{2})\), since a strictly convex function has at most two zeros.
It remains to show that \(f(\varepsilon ) >0\) for some \(\varepsilon \in (0,\frac{1}{2})\). To this end we compute
$$\begin{aligned} \begin{aligned} f'(0)&= \frac{\alpha ^{q-1} (2 \alpha (q-2)-q)}{q-1}\\&\quad +\frac{(4 \alpha -2 \alpha q+q-4) (1-\alpha )^{q-1}+2^{3-q}}{q-1} =: g(\alpha ). \end{aligned} \end{aligned}$$
(21)
Now we note that \(g(\alpha )\) is positive for \(\alpha \in (0,1) \setminus \left\{ \frac{1}{2} \right\} \) and \(q \in [0,2)\cup (3,\infty )\). This follows from the convexity of g on these sets and the fact that it attains its minimum \(g\left( \frac{1}{2}\right) = 0\). Since \(f'(0)=g(\alpha )>0\) and \(f(0)=0\), there exists \(\varepsilon >0\) such that \(f(\varepsilon )>0\).
The equalities in the case \(q=2,3\) follow from a direct inspection. \(\square \)
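Proposition 1 can be spot-checked numerically. The sketch below assumes the binary Tsallis entropy \(t_q(x) = (1-x^q-(1-x)^q)/(q-1)\) consistent with Eq. (14); it samples random \(\alpha \), p, q (avoiding the removable singularity at \(q=1\)) and verifies both the inequality (18) and the equality at \(q=2,3\):

```python
import numpy as np

def t_q(x, q):
    """Binary Tsallis entropy t_q(x) = (1 - x^q - (1-x)^q)/(q - 1)."""
    return (1 - x**q - (1 - x)**q) / (q - 1)

def rhs18(alpha, p, q):
    """Right-hand side of inequality (18)."""
    num = alpha**q + (1 - alpha)**q - 2**(1 - q)
    den = alpha**q + (1 - alpha)**q + q - 2
    return 4 * (num / den) * p * (1 - p) * (1 - t_q(alpha, q)) + t_q(alpha, q)

rng = np.random.default_rng(0)
for _ in range(2000):
    alpha = rng.uniform(0.01, 0.99)
    p = rng.uniform(0, 1)
    # q in [0,2) u (3,inf), staying away from q = 1
    q = rng.choice([rng.uniform(0.1, 0.95), rng.uniform(1.05, 2.0), rng.uniform(3.0, 8.0)])
    lhs = t_q(alpha * p + (1 - alpha) * (1 - p), q)
    assert lhs >= rhs18(alpha, p, q) - 1e-9

# equality at q = 2 and q = 3
for q in (2.0, 3.0):
    alpha, p = 0.3, 0.8
    lhs = t_q(alpha * p + (1 - alpha) * (1 - p), q)
    assert abs(lhs - rhs18(alpha, p, q)) < 1e-12
```

The equality cases reflect the fact that \(t_2\) and \(t_3\) are polynomials in \(x(1-x)\), which is why a direct inspection suffices in the proof.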
Now we are ready to state and prove the main result of this work.
Theorem 1
Let \(\rho _{AB}=|\psi _{AB}\rangle \langle \psi _{AB}|\), where \(|\psi _{AB}\rangle =\sqrt{\lambda }|00\rangle +\sqrt{1-\lambda }|11\rangle \). Let us choose two observables X and Y with eigenvectors \(|\phi _i\rangle = O(\theta )|i\rangle \) and \(|\psi _i\rangle = O(\theta +\varepsilon )|i\rangle \), where \(O(\theta )\) is as in Eq. (11). Then, the Tsallis entropic conditional uncertainty relation is
$$\begin{aligned} T_q(X|B)+T_q(Y|B) \ge 2\frac{\lambda ^q+(1-\lambda )^q-2^{1-q}}{\lambda ^q+(1-\lambda )^q+q-2} (1-t_q(\lambda )) (1-c^2). \end{aligned}$$
(22)
Proof
Applying Proposition 1 to Eq. (14) we get
$$\begin{aligned} \begin{aligned}&T_q(X|B)+T_q(Y|B)\\&\quad \ge \frac{\lambda ^q+(1-\lambda )^q-2^{1-q}}{\lambda ^q+(1-\lambda )^q+q-2} (1-t_q(\lambda )) (\sin ^2(2\theta +2\varepsilon )+\sin ^2 2\theta ). \end{aligned} \end{aligned}$$
(23)
The right-hand side achieves its unique minimum at \(\theta =\pi /2-\varepsilon /2\) for \(\varepsilon \le \pi /4\) and at \(\theta =\pi /4-\varepsilon /2\) for \(\varepsilon > \pi /4\). Inserting these values we recover Eq. (22). \(\square \)
Remark 1
In the limit \(q\rightarrow 1\) we get the following uncertainty relation for Shannon entropies
$$\begin{aligned} S(X|B)+S(Y|B) \ge 2(\log 2 - h(\lambda )) (1-c^2) = B_{KPP}. \end{aligned}$$
(24)
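The bound (24) can be checked against the exact expression (15) for randomly sampled parameters. The sketch below (function names ours; entropies in nats, c taken from Eq. (12)) confirms the inequality over the whole parameter range:

```python
import numpy as np

def h(x):
    """Binary entropy in nats."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log(x) - (1 - x) * np.log1p(-x)

def S_sum(lam, theta, eps):
    """Exact S(X|B) + S(Y|B), Eq. (15)."""
    mu_x = lam * np.sin(theta)**2 + (1 - lam) * np.cos(theta)**2
    mu_y = lam * np.sin(theta + eps)**2 + (1 - lam) * np.cos(theta + eps)**2
    return h(mu_x) + h(mu_y) - 2 * h(lam)

def b_kpp(lam, eps):
    """Right-hand side of the bound (24), with c from Eq. (12)."""
    c = np.cos(eps) if eps <= np.pi / 4 else np.sin(eps)
    return 2 * (np.log(2) - h(lam)) * (1 - c**2)

rng = np.random.default_rng(1)
for _ in range(5000):
    lam = rng.uniform(0, 0.5)            # by symmetry lam and 1-lam are equivalent
    theta = rng.uniform(0, np.pi / 2)
    eps = rng.uniform(0, np.pi / 2)
    assert S_sum(lam, theta, eps) >= b_kpp(lam, eps) - 1e-9
```

Note that the bound is state-independent in \(\theta \): the right-hand side depends only on \(\lambda \) and the overlap c.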
Remark 2
Using the concavity of the conditional von Neumann entropy, we may generalize bound (24) to mixed states \(\rho _{AB}\). We get
$$\begin{aligned} S(X|B) + S(Y|B) \ge 2(\log 2 - S(B))(1-c^2). \end{aligned}$$
(25)
To see this, consider a system in a mixed state \(\rho _{AB}\) with a decomposition into pure states \(\rho _{AB} = \sum _i p_i \rho ^{(i)}_{AB}\). A subscript will indicate the state in which a given entropy is evaluated. The post-measurement states \(\rho _{XB},\rho _{YB}\) are defined as in Eq. (5). Now we write
$$\begin{aligned} \begin{aligned}&S(X|B)_{\rho _{XB}} + S(Y|B)_{\rho _{YB}} \ge \sum _i p_i S(X|B)_{\rho ^{(i)}_{XB}} +\sum _i p_i S(Y|B)_{\rho ^{(i)}_{YB}}\\&\quad \ge 2\log 2 (1-c^2) - 2(1-c^2) \sum _{i}p_i S(B)_{\rho ^{(i)}_B}\\&\quad \ge 2\log 2 (1-c^2) - 2(1-c^2) S(B)_{\rho _B}. \end{aligned} \end{aligned}$$
(26)
The first inequality above follows from the concavity of the conditional von Neumann entropy. The second applies the bound (24) to each pure state in the decomposition, while the third follows from the concavity of the von Neumann entropy.
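The mixed-state bound (25) can be spot-checked for a random two-qubit state. The sketch below (function names ours; entropies in nats) computes \(S(X|B) = S(\rho _{XB}) - S(\rho _B)\) with projective measurements in the rotated bases and compares with (25):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in nats."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def measure_A(rho, ang):
    """Pinching of subsystem A in the basis O(ang)|i>, as in Eq. (5)."""
    O = np.array([[np.cos(ang), -np.sin(ang)],
                  [np.sin(ang),  np.cos(ang)]])
    out = np.zeros_like(rho)
    for i in range(2):
        P = np.kron(np.outer(O[:, i], O[:, i]), np.eye(2))
        out = out + P @ rho @ P
    return out

def partial_trace_A(rho):
    """Trace out subsystem A of a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# random full-rank two-qubit state (Wishart-like construction)
rng = np.random.default_rng(2)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho_AB = G @ G.conj().T
rho_AB /= np.trace(rho_AB).real

theta, eps = 0.4, 0.6                    # illustrative measurement angles
S_B = vn_entropy(partial_trace_A(rho_AB))
S_XB = vn_entropy(measure_A(rho_AB, theta)) - S_B
S_YB = vn_entropy(measure_A(rho_AB, theta + eps)) - S_B
c = np.cos(eps) if eps <= np.pi / 4 else np.sin(eps)
bound = 2 * (np.log(2) - S_B) * (1 - c**2)
assert S_XB + S_YB >= bound - 1e-9
```

Since B is a qubit, \(S(B) \le \log 2\), so the right-hand side of (25) is always nonnegative.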
Remark 3
The state-dependent entropic uncertainty relation for \(q\rightarrow 1\) reads
$$\begin{aligned} \begin{aligned} S(X|B) + S(Y|B)&\ge 2(\log 2 - h(\lambda ))(\sin ^2(2\theta + 2\varepsilon )+\sin ^2 2\theta ) \\&= B(\theta ). \end{aligned} \end{aligned}$$
(27)
A comparison with the known entropic uncertainty relations for \(\lambda =0\) and \(q\rightarrow 1\) is shown in Fig. 2. As can be seen, our result gives a tighter bound than the one obtained by Maassen and Uffink for all values of \(\varepsilon \). The bound is also tighter than \(B_{Maj2}\) when \(\varepsilon \) is in a neighborhood of \(\pi /4\).
A comparison of the exact value (15), the state-dependent lower bound (27), and \(B_{BCCRR}\) for different parameters \(\lambda \), \(\theta \), and \(\varepsilon \) is presented in Figs. 3 and 4.