1 Introduction

Quantum mechanics admits the existence of physical systems which demonstrate correlations that cannot be explained on the basis of classical physics [1]. In this context, we usually consider compound quantum systems which feature nonlocal quantum correlations that can be verified experimentally by detecting multiparticle quantum interference [2]. In particular, quantum entanglement, which is a specific form of non-classical correlations, has been extensively studied with respect to creation, detection, and applications [3, 4]. Nowadays, we know that multipartite systems are not necessary to exhibit entanglement since one photon is sufficient to encode a Bell state [5].

Entangled states are considered a key resource in quantum information theory [6]. Particularly, entangled pairs of photons can be used for quantum key distribution (QKD) [7]. Other well-known applications of entangled states relate to: superdense coding [8, 9], quantum teleportation [10, 11], quantum computing [12], quantum interferometric optical lithography [13], etc. For these reasons, the ability to characterize entanglement based on measurements plays a crucial role in practical realizations of quantum protocols.

In the case of photons, quantum information can be encoded by exploiting different degrees of freedom, in particular: polarization, spectral, spatial, and temporal mode. Each approach requires distinct measurement schemes for state identification. In this work, we focus on two-photon polarization-entangled states.

State reconstruction of polarization-entangled photons, generated by a spontaneous parametric down-conversion photon source, was performed efficiently by polarization measurements [14]. Then, photonic state tomography was developed in terms of both theory and experiment [15, 16]. Because each measurement is inherently inaccurate, we need to apply methods that produce reliable estimates of actual quantum states, such as maximum likelihood estimation (MLE) [17, 18]. Different methods of quantum state estimation can be compared with respect to their accuracy [19].

For years, the problem of quantum state tomography (QST) has been intensively researched; see, e.g., Refs. [20,21,22,23]. Regardless of the particular QST method, we consider the total number of measurements as a resource. Therefore, economical frameworks, which aim at reducing the number of distinct measurement settings, are gaining in popularity. For example, dynamical maps can be utilized to generate an informationally complete set of measurement operators [24]. There have also been proposals involving continuous measurement defined in the time domain, for example, by performing a weak (non-projective) continuous measurement on an ensemble of systems [25,26,27], including numerical optimization algorithms to reduce the influence of experimental noise [28].

In this work, we consider time-continuous measurements generated by unitary dynamics from one operator. We gather data for state reconstruction by selecting a discrete set of time instants that correspond to moments of measurement. Since the precision of measurements is limited by the time resolution of the detector, we study the efficiency of the framework versus the amount of time uncertainty. Temporal uncertainty of the detector is considered a limiting factor in quantum state engineering. Therefore, theoretical models describing its formalism have been recently proposed [29, 30].

In Sect. 2, we define our measurements in the time domain. For a given unitary dynamics, we present possible trajectories which can be generated. Next, we discuss the impact of the detector jitter on the measurement operators. In Sect. 3, we introduce the QST framework with time-continuous measurements. Finally, in Sect. 4, we present the results of numerical simulations for qubit reconstruction and entangled photon pairs tomography. In particular, for different numbers of photons per measurement, we investigate the efficiency of the framework versus the amount of time uncertainty.

2 Time-continuous measurements

In the standard approach, one would need to perform a series of tomographically complete measurements to determine an unknown polarization state of a photon described by a \(2 \times 2\) density matrix \(\rho (0)\). An experimental setup would require: a polarizer, a quarter-wave plate (QWP), and a half-wave plate (HWP). The angles of the waveplates can be set arbitrarily, which allows the experimenter to measure the beam corresponding to selected polarizations. Traditionally, we utilize six polarization states: horizontal, vertical, diagonal, antidiagonal, right-circular, and left-circular, denoted by \(\{\mathinner {|{H}\rangle }, \mathinner {|{V}\rangle }, \mathinner {|{D}\rangle }, \mathinner {|{A}\rangle }, \mathinner {|{R}\rangle }, \mathinner {|{L}\rangle }\}\). If the pair \(\{\mathinner {|{V}\rangle }, \mathinner {|{H}\rangle }\}\) forms the standard basis, i.e.,

$$\begin{aligned} \mathinner {|{H}\rangle } = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathinner {|{V}\rangle } = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \end{aligned}$$
(1)

then the other states can be expressed as: \(\mathinner {|{D}\rangle } = (\mathinner {|{H}\rangle } + \mathinner {|{V}\rangle })/\sqrt{2}\), \(\mathinner {|{A}\rangle } = (\mathinner {|{H}\rangle } - \mathinner {|{V}\rangle })/\sqrt{2}\), \(\mathinner {|{R}\rangle } = (\mathinner {|{H}\rangle } + i \mathinner {|{V}\rangle })/\sqrt{2}\), and \(\mathinner {|{L}\rangle } = (\mathinner {|{H}\rangle } - i \mathinner {|{V}\rangle })/\sqrt{2}\). These six states can be divided into three pairs of orthogonal vectors, which correspond to mutually unbiased bases in the two-dimensional Hilbert space [31, 32]. From these vectors, we can also generate a positive operator-valued measure (POVM), which can be considered an overcomplete measurement scheme.
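These relations are easy to check directly. The following sketch (the state names are ours) verifies both the orthogonal pairs and the mutual unbiasedness, using the \(1/\sqrt{2}\) normalization of the superposition states:

```python
import numpy as np

# The six polarization states as vectors; note the 1/sqrt(2) normalization
# of the superposition states.
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
D = (H + V) / np.sqrt(2)
A = (H - V) / np.sqrt(2)
R = (H + 1j * V) / np.sqrt(2)
L = (H - 1j * V) / np.sqrt(2)

# Three pairs of orthogonal vectors.
for a, b in [(H, V), (D, A), (R, L)]:
    assert np.isclose(np.vdot(a, b), 0)

# Mutual unbiasedness: |<a|b>|^2 = 1/2 for states from different bases.
for a in (H, V):
    for b in (D, A, R, L):
        assert np.isclose(abs(np.vdot(a, b)) ** 2, 0.5)
```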

In our framework, we assume that the photon undergoes a unitary evolution, which implies that:

$$\begin{aligned} \rho (t) = U(t) \,\rho (0)\, U^{\dagger } (t), \end{aligned}$$
(2)

where U(t) stands for a time-continuous unitary operator, i.e., \(\forall \,t\ge 0\) we have \(U^{\dagger } (t) = U^{-1} (t)\) and the initial condition is satisfied: \(U(0) = {\mathbbm {1}}_{2}\).

A unitary evolution of a photonic state can be realized by exploiting stress-induced birefringence to change the polarization of light traveling through a fiber. In particular, in-line polarization controllers create stress-induced birefringence within a single-mode fiber by mechanically compressing it, which acts like a variable, rotatable waveplate. Both the angle and the retardance of this effective waveplate can be adjusted continuously and independently, which allows an arbitrary input polarization state to be converted into any desired output polarization state. In other words, this method enables polarization control over the entire Bloch sphere: the polarization state entering the fiber can be transformed unitarily into any other polarization state upon leaving it. Consequently, by varying the amount of mechanical stress and the rotation of the birefringence axis, one can implement an arbitrary continuous evolution of a polarization-encoded qubit.

Then, if initially we are able to perform one measurement characterized by an operator \(M_{\xi } \ge 0\), we obtain, based on the Born rule, a time-dependent formula for the probability associated with this measurement:

$$\begin{aligned} p_{\xi } (t) = \mathrm{tr}\left( M_{\xi } \,U(t) \,\rho (0)\, U^{\dagger } (t) \right) = \mathrm{tr}\left( U^{\dagger } (t) \,M_{\xi } \,U(t) \,\rho (0) \right) , \end{aligned}$$
(3)

where the latter expression is due to the cyclic property of the matrix trace. According to the Heisenberg representation, we can consider Eq. 3 as a measurement result for an evolving operator. Since \(M_{\xi }\) is positive semi-definite, \(\forall \,t\ge 0\) we get \(M_{\xi } (t) \equiv U^{\dagger } (t) \,M_{\xi } \,U(t) \ge 0\), which means that \(M_{\xi } (t)\) fits the concept of generalized measurements for any \(t \ge 0\).

In general, an arbitrary \(2 \times 2\) unitary matrix U can be represented as [6]:

$$\begin{aligned} U = e^{i \alpha } \begin{pmatrix} e^{-i \frac{\beta }{2}} & 0 \\ 0 & e^{i \frac{\beta }{2}} \end{pmatrix} \begin{pmatrix} \cos \frac{\gamma }{2} & - \sin \frac{\gamma }{2} \\ \sin \frac{\gamma }{2} & \cos \frac{\gamma }{2} \end{pmatrix} \begin{pmatrix} e^{-i \frac{\delta }{2}} & 0 \\ 0 & e^{i \frac{\delta }{2}} \end{pmatrix}, \end{aligned}$$
(4)

where \(\alpha , \beta , \gamma \) and \(\delta \) denote real parameters. The decomposition Eq. 4 provides an exact description of single-qubit operations by means of three rotation operators and a global phase shift. Since the global phase does not affect the probabilities, this part shall be omitted. In our model, we assume that the angles \(\beta , \gamma \), and \(\delta \) are linear functions of time, which allows us to define unitary evolution governed by an operator:

$$\begin{aligned} U(t) := \begin{pmatrix} e^{-i \frac{\omega _{1} t}{2}} & 0 \\ 0 & e^{i \frac{\omega _{1} t}{2}} \end{pmatrix} \begin{pmatrix} \cos \frac{\omega _{2} t}{2} & - \sin \frac{\omega _{2} t}{2} \\ \sin \frac{\omega _{2} t}{2} & \cos \frac{\omega _{2} t}{2} \end{pmatrix} \begin{pmatrix} e^{-i \frac{\omega _{3} t}{2}} & 0 \\ 0 & e^{i \frac{\omega _{3} t}{2}} \end{pmatrix}, \end{aligned}$$
(5)

where \(\omega _{i} = 2 \pi / T_i\).
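For illustration, Eq. 5 translates directly into code. In this sketch, the function and variable names as well as the particular periods \(T_1, T_2, T_3\) are our choices:

```python
import numpy as np

# Sketch of the unitary in Eq. (5); the periods are assumptions chosen
# for illustration (T1 = 4, T2 = 1, T3 = 2 in arbitrary units).
T1, T2, T3 = 4.0, 1.0, 2.0
w1, w2, w3 = 2*np.pi/T1, 2*np.pi/T2, 2*np.pi/T3

def U(t):
    Rz1 = np.diag([np.exp(-1j*w1*t/2), np.exp(1j*w1*t/2)])
    Ry  = np.array([[np.cos(w2*t/2), -np.sin(w2*t/2)],
                    [np.sin(w2*t/2),  np.cos(w2*t/2)]])
    Rz3 = np.diag([np.exp(-1j*w3*t/2), np.exp(1j*w3*t/2)])
    return Rz1 @ Ry @ Rz3

# U(t) is unitary for every t and satisfies the initial condition U(0) = 1.
assert np.allclose(U(0), np.eye(2))
assert np.allclose(U(0.37) @ U(0.37).conj().T, np.eye(2))
```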

With Eq. 5 defining the time-evolution of a measurement operator \(M_{\xi }\), we can follow the trajectory of \(M_{\xi } (t)\) on the Bloch sphere for specific examples. Let us assume that \(T_1 = 4 T,\) \(T_2 = T\) and \(T_3 = 2 T\), where T denotes a period characterizing the dynamics. In Fig. 1, one can observe the trajectories of \(M_{\xi } (t)\) for three different initial operators \(M_{\xi }\) in the time interval: \(t \in [0, 2 T]\).

Fig. 1

Trajectories of time-continuous measurement operator \(M_{\xi } (t)\) that can be generated from a given initial operator \(M_{\xi }\) and a time-dependent unitary dynamics of the form Eq. 5. The illustration was inspired by the presentation of POVMs in Ref. [33]

In particular, if \(M_{H} = \mathinner {|{H}\rangle }\!\mathinner {\langle {H}|}\), we have:

$$\begin{aligned} M_{H} (t) = \begin{pmatrix} \cos ^2 \frac{\pi t}{T} & -\frac{1}{2} e^{i \pi t/T} \sin \frac{2 \pi t}{T} \\ -\frac{1}{2} e^{-i \pi t/T} \sin \frac{2 \pi t}{T} & \sin ^2 \frac{\pi t}{T} \end{pmatrix}. \end{aligned}$$
(6)

One can observe that the operators \(M_{H} (t)\) span the space of Hermitian \(2 \times 2\) matrices. Furthermore, we can verify numerically that:

$$\begin{aligned} \frac{1}{T} \int _{0}^{2T} M_{H} (t) \, d t = {\mathbbm {1}}_{2}, \end{aligned}$$
(7)

which is sufficient to conclude that \(M_{H} (t)\) can be considered a time-continuous informationally complete POVM. From an experimental point of view, it implies that the apparatus should be in one setting corresponding to the horizontal polarization while the photon counting should be performed with respect to the arrival time of photons. In particular, if we select a discrete subset of six specific measurement operators, we can notice that:

$$\begin{aligned} \frac{1}{3} \left[ M_{H} (0) + M_{H} (0.25 \,T) + M_{H} (0.5 \,T) + M_{H} (0.75 \,T) + M_{H} (1.25\, T) + M_{H} (1.75\, T) \right] = \mathbbm {1}_2, \end{aligned}$$
(8)

which means that these six operators constitute an informationally complete POVM. The set of time instants selected in Eq. 8, arranged in increasing order, shall be denoted as \(\mathcal {T} = \{t_1, t_2, \dots , t_6\}\). One can notice that the operators included in Eq. 8 form three pairs of orthogonal projectors, which additionally implies that they represent a scheme that is equivalent to the measurement of the six polarization states: \(\{ \mathinner {|{H}\rangle }, \mathinner {|{V}\rangle }, \mathinner {|{D}\rangle }, \mathinner {|{A}\rangle }, \mathinner {|{R}\rangle }, \mathinner {|{L}\rangle } \}\).
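Both properties of the six operators, the resolution of the identity in Eq. 8 and the three orthogonal pairs, can be confirmed numerically from Eq. 6 (a sketch with the period \(T\) set to 1):

```python
import numpy as np

# M_H(t) of Eq. (6) with the period T set to 1 for convenience.
T = 1.0

def M_H(t):
    off = -0.5 * np.exp(1j*np.pi*t/T) * np.sin(2*np.pi*t/T)
    return np.array([[np.cos(np.pi*t/T)**2, off],
                     [np.conj(off), np.sin(np.pi*t/T)**2]])

times = [0, 0.25*T, 0.5*T, 0.75*T, 1.25*T, 1.75*T]

# The six operators, each weighted by 1/3, resolve the identity (Eq. 8)...
assert np.allclose(sum(M_H(t) for t in times) / 3, np.eye(2))

# ...and they split into three pairs of orthogonal projectors.
for t1, t2 in [(0, 0.5*T), (0.25*T, 1.25*T), (0.75*T, 1.75*T)]:
    assert np.isclose(np.trace(M_H(t1) @ M_H(t2)), 0)
```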

Fig. 2

Trajectories of realistic measurement operators \(\widetilde{M}_{H} (t) \) for selected values of \(\sigma _j\). The form was inspired by the presentation of POVMs in Ref. [33]

In a realistic scenario, we have to take into account errors connected with measurements. In the case of measurements that are performed in the time domain, one needs to consider the time uncertainty associated with the detector. Every detector features a timing jitter that has a detrimental impact on the time-resolved photon counting. This process can be modeled by a Gaussian distribution given as [33]:

$$\begin{aligned} q_j (t) := \frac{\exp \left( - \frac{t^2}{2 \sigma _j ^2} \right) }{\sqrt{ 2 \pi \sigma _j ^2}}, \end{aligned}$$
(9)

where \(\sigma _j\) stands for the timing jitter. Then, the measurement operator distorted by the timing uncertainty, denoted by \(\widetilde{M}_{\xi } (t)\), can be expressed by a convolution [33]:

$$\begin{aligned} \widetilde{M}_{\xi } (t) = \int _{- \infty }^{\infty } M_{\xi } (\tau )\, q_j (t- \tau ) \,d \tau . \end{aligned}$$
(10)

The impact of the timing uncertainty depends on the value of \(\sigma _j\): state-of-the-art detectors feature a small jitter and thus allow for reliable and precise measurements. To demonstrate the distortion of the measurement operators due to the jitter, we consider \(M_{H} (t)\) since this operator represents an informationally complete POVM. In Fig. 2, one can observe how the quality of the measurement degrades as we increase the value of \(\sigma _j\). The trajectory of \(\widetilde{M}_{H} (t)\) gets squeezed, and the purity of the operators decreases as we add more time uncertainty. For \(\sigma _j = 0.75 \,T\), the trajectory resembles only a small ball located in the center, and it shrinks to a single point if we further increase \(\sigma _j\).
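The convolution in Eq. 10 is straightforward to approximate on a discrete grid. The sketch below (with \(T = 1\), \(\sigma_j = 0.1\,T\), and all names our own) illustrates the loss of purity of the blurred operator:

```python
import numpy as np

# Discretized version of the convolution in Eq. (10); T = 1 and a tau-grid
# wide enough for the Gaussian kernel of Eq. (9) to be fully captured.
T, sigma_j = 1.0, 0.1
tau = np.linspace(-5*T, 5*T, 20001)
dtau = tau[1] - tau[0]
q = np.exp(-tau**2 / (2*sigma_j**2)) / np.sqrt(2*np.pi*sigma_j**2)

def M_tilde(t):
    arg = t - tau  # shifted argument of M_H under the integral
    m11 = np.sum(np.cos(np.pi*arg/T)**2 * q) * dtau
    m12 = np.sum(-0.5*np.exp(1j*np.pi*arg/T)*np.sin(2*np.pi*arg/T) * q) * dtau
    return np.array([[m11, m12], [np.conj(m12), 1 - m11]])

# Without jitter, M_H(0) is a rank-1 projector (purity 1); the blurred
# operator is pulled toward the maximally mixed point, so its purity drops.
purity = np.real(np.trace(M_tilde(0.0) @ M_tilde(0.0)))
assert 0.5 < purity < 1.0
```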

In addition, the temporal shape of photons is another quantity that could influence the process of photon counting in this scenario. The temporal probability density of a single photon can be modeled by a Gaussian distribution, cf. Ref. [33]. Thus, the impact of this element on the measurement scheme could be treated analogously to Eqs. 9 and 10. However, in the present work, we neglect the consequences of the temporal width of photons because, for a femtosecond laser, the ratio between the timing jitter and the standard deviation of the photon’s temporal distribution can reach \(10^3\). Therefore, the detector’s timing uncertainty is considered the key limiting factor that affects the framework.

3 Framework for quantum state tomography

3.1 Methods: qubits

In our QST framework, we assume that the experimental setup is adjusted so that we measure the photon count corresponding to the horizontal polarization, i.e., the apparatus performs the projective measurement defined by the operator \(M_{H} = \mathinner {|{H}\rangle }\!\mathinner {\langle {H}|}\). The setting is fixed, and we neglect any errors due to angular uncertainties. However, due to the fiber polarization controllers prior to the measurement, the polarization state of the photon changes unitarily according to Eq. 2. Thus, the photon counting has to be performed in the time domain, which implies that we utilize a time-resolved detector for photons with different arrival times. Each act of measurement is performed for a beam consisting of, on average, \(\mathcal {N}\) photons generated in the same polarization state. In other words, the source produces identical photons, but as they travel through the fiber, they undergo a unitary evolution according to Eq. 5.

Based on the notation introduced in Sect. 2, we can write a formula for the expected photon count:

$$\begin{aligned} n_E (t) = \mathcal {N} \, \mathrm{tr}\left( M_{H} (t) \, \rho \right) , \end{aligned}$$
(11)

where \(\rho \) stands for an unknown density matrix that represents the quantum state of photons. Since we have no a priori knowledge about the state, we follow the Cholesky decomposition, which allows us to write [15, 16]:

$$\begin{aligned} \rho = \frac{W^{\dagger } W}{\mathrm{tr}\, (W^{\dagger } W)},\quad \text {where}\quad W=\begin{pmatrix}w_1 & 0 \\ w_3 + i\,w_4 & w_2 \end{pmatrix}. \end{aligned}$$
(12)

The Cholesky factorization guarantees that the result of the QST framework is physical, i.e., \(\rho \) is positive semidefinite, Hermitian, and of unit trace. To reconstruct the density matrix \(\rho \), one needs to determine the real parameters \(\mathcal {W} \equiv \{w_1, w_2, w_3, w_4\}\).
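A minimal sketch of the parametrization in Eq. 12 (the helper name is ours) confirms that any real quadruple yields a physical state:

```python
import numpy as np

# Cholesky parametrization of Eq. (12): any real (w1, w2, w3, w4)
# yields a valid density matrix.
def rho_from_w(w1, w2, w3, w4):
    W = np.array([[w1, 0],
                  [w3 + 1j*w4, w2]], dtype=complex)
    rho = W.conj().T @ W
    return rho / np.trace(rho)

rho = rho_from_w(0.8, 0.3, -0.2, 0.5)
assert np.isclose(np.trace(rho), 1)                # unit trace
assert np.allclose(rho, rho.conj().T)              # Hermitian
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)   # positive semidefinite
```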

On the other hand, the measured photon counts are computed by implementing the operator \(\widetilde{M}_{H} (t) \) distorted by the time uncertainty. Additionally, since the framework is founded on photon-counting, we impose the Poisson noise [34], which is typically taken into account in such a scenario, see, e.g., Ref. [33, 35]. Thus, we obtain a formula for the measured photon counts:

$$\begin{aligned} n_M (t) = \mathcal {N}' \, \mathrm{tr}\left( \widetilde{M}_{H} (t) \, \rho _{in} \right) , \end{aligned}$$
(13)

where \(\mathcal {N}' \) for each act of measurement is drawn randomly from the Poisson distribution with mean value \(\mathcal {N}\), i.e., \(\mathcal {N}' \sim \mathrm {Pois} (\mathcal {N})\). The input states, \(\rho _{in}\), are constructed from a general representation of the density matrix of a qubit:

$$\begin{aligned} \rho _{in} = \frac{1}{2} \begin{pmatrix} 1 + r \,\cos \theta & r \,\Delta \\ r \,\overline{\Delta } & 1 - r \,\cos \theta \end{pmatrix}, \end{aligned}$$
(14)

where \(\Delta = \sin \theta \cos \phi - i \sin \theta \sin \phi \) and \(\overline{\Delta }\) denotes the complex conjugate of \(\Delta \). In our framework, we can numerically generate experimental data for any input state. We shall consider a sample of input states such that the full range of all parameters is covered, i.e., \(0\le r \le 1\), \(0 \le \phi < 2 \pi \) and \(0\le \theta \le \pi \). Having a sample of input states is necessary to evaluate the average performance of the method.
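Eq. 14 translates into a short routine (names ours); the purity \(\mathrm{tr}(\rho^2) = (1+r^2)/2\) distinguishes pure inputs (\(r = 1\)) from mixed ones:

```python
import numpy as np

# Input states of Eq. (14), parametrized by Bloch coordinates (r, theta, phi).
def rho_in(r, theta, phi):
    delta = np.sin(theta)*np.cos(phi) - 1j*np.sin(theta)*np.sin(phi)
    return 0.5 * np.array([[1 + r*np.cos(theta), r*delta],
                           [r*np.conj(delta),    1 - r*np.cos(theta)]])

# r = 1 gives a pure state (purity 1); r < 1 gives a mixed state.
pure  = rho_in(1.0, 0.4, 1.1)
mixed = rho_in(0.5, 0.4, 1.1)
assert np.isclose(np.real(np.trace(pure @ pure)), 1)
assert np.real(np.trace(mixed @ mixed)) < 1
```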

Each input state goes through the tomographic scheme, and its density matrix is reconstructed by maximum likelihood estimation (MLE) [17, 18]. In practice, we minimize a cost function derived from the likelihood [36]:

$$\begin{aligned} \begin{aligned} \mathcal {L}(\mathcal {W}) = \sum _{k=1}^{6} \left[ \frac{\left( n_M (t_k) - n_E (t_k) \right) ^2}{ n_E (t_k) } + \ln n_E (t_k) \right] , \end{aligned} \end{aligned}$$
(15)

where we applied the discrete set of time instants \(\mathcal {T}\), cf. Eq. 8.
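To make the procedure concrete, a self-contained simulation of a single reconstruction might look as follows. This is a sketch, not the authors' code: we set \(T = 1\), neglect the jitter, pick \(\mathinner{|{D}\rangle}\) as a hypothetical test state, and use `scipy.optimize.minimize` as a generic minimizer:

```python
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 1000
times = [0, 0.25*T, 0.5*T, 0.75*T, 1.25*T, 1.75*T]

def M_H(t):  # Eq. (6)
    off = -0.5*np.exp(1j*np.pi*t/T)*np.sin(2*np.pi*t/T)
    return np.array([[np.cos(np.pi*t/T)**2, off],
                     [np.conj(off), np.sin(np.pi*t/T)**2]])

def rho_from_w(w):  # Eq. (12)
    W = np.array([[w[0], 0], [w[2] + 1j*w[3], w[1]]], dtype=complex)
    rho = W.conj().T @ W
    return rho / np.trace(rho)

# Simulated data, Eq. (13): Poisson-fluctuated photon number per setting.
rng = np.random.default_rng(0)
rho_true = np.array([[0.5, 0.5], [0.5, 0.5]])  # |D><D| as a test state
n_M = np.array([rng.poisson(N) * np.real(np.trace(M_H(t) @ rho_true))
                for t in times])

def cost(w):  # cost function of Eq. (15)
    rho = rho_from_w(w)
    n_E = np.array([N * np.real(np.trace(M_H(t) @ rho)) for t in times])
    n_E = np.maximum(n_E, 1e-9)  # guard against division by zero / log(0)
    return np.sum((n_M - n_E)**2 / n_E + np.log(n_E))

res = minimize(cost, x0=[0.7, 0.7, 0.1, 0.1], method='Nelder-Mead')
rho_est = rho_from_w(res.x)
fid = np.real(np.trace(rho_est @ rho_true))  # fidelity to a pure true state
assert fid > 0.9
```

With \(\mathcal{N} = 1000\), the Poisson fluctuations are on the percent level, so the reconstructed state is close to the input.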

3.2 Methods: entangled qubits

We assume that our source can produce polarization-entangled photons in a spontaneous parametric down-conversion (SPDC) process, which converts one photon into a pair of photons [37, 38]. Then, by \(\mathcal {N}\) we denote the average number of photon pairs produced by the source. Each photon travels in one arm of the experimental setup and can be measured by the detector at a different arrival time. Thus, for an entangled photon pair, we apply two-qubit measurement operators that are tensor products of single-qubit operators. We obtain:

$$\begin{aligned} M^{2q}_{H} (t_i, t_j) := M_{H} (t_i) \otimes M_{H} (t_j), \end{aligned}$$
(16)

where \(t_i, t_j \in \mathcal {T}\). Analogously, we define a two-qubit measurement operator burdened with time uncertainty, denoted by \(\widetilde{M}^{2q}_{H} (t_i, t_j)\). Since we consider all combinations of time instants from \(\mathcal {T}\), we get 36 two-qubit measurement operators, which allows us to generate coincidence counts burdened with time uncertainty. An overcomplete set with 36 measurement operators is typically used for practical implementations of QST protocols, see, e.g., Ref. [39].
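The construction of the 36 settings, and the fact that they inherit a resolution of the identity from Eq. 8, can be sketched as follows (with \(T = 1\)):

```python
import numpy as np
from itertools import product

# M_H(t) of Eq. (6) with T = 1.
T = 1.0
times = [0, 0.25*T, 0.5*T, 0.75*T, 1.25*T, 1.75*T]

def M_H(t):
    off = -0.5*np.exp(1j*np.pi*t/T)*np.sin(2*np.pi*t/T)
    return np.array([[np.cos(np.pi*t/T)**2, off],
                     [np.conj(off), np.sin(np.pi*t/T)**2]])

# Eq. (16): all pairs (t_i, t_j) give 36 product operators.
ops = [np.kron(M_H(ti), M_H(tj)) for ti, tj in product(times, repeat=2)]
assert len(ops) == 36

# Weighted by 1/9, they resolve the two-qubit identity (inherited from Eq. (8)).
assert np.allclose(sum(ops) / 9, np.eye(4))
```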

The formula for the expected photon counts for entangled qubits is defined in the same vein as for qubits, cf. Eq. 11. However, the lower triangular matrix W, which appears in the Cholesky decomposition, now comprises 16 real parameters:

$$\begin{aligned} W = \begin{pmatrix} w_1 & 0 & 0 & 0 \\ w_5 + i\, w_6 & w_2 & 0 & 0 \\ w_{11} + i \,w_{12} & w_7 + i\, w_8 & w_3 & 0 \\ w_{15} + i\, w_{16} & w_{13} + i\, w_{14} & w_9 + i\, w_{10} & w_4 \end{pmatrix}. \end{aligned}$$
(17)

On the other hand, measured photon counts are calculated analogously to Eq. 13 with \(\rho _{in}\) standing for a maximally entangled Bell-type state, i.e., \(\rho _{in} = \mathinner {|{ \Phi (\alpha )}\rangle }\! \mathinner {\langle {\Phi (\alpha )}|}\), where:

$$\begin{aligned} \mathinner {|{\Phi (\alpha )}\rangle } = \frac{1}{\sqrt{2}} \left( \mathinner {|{00}\rangle } + e^{i \alpha } \mathinner {|{11}\rangle } \right) , \end{aligned}$$
(18)

where \(\{\mathinner {|{00}\rangle }, \mathinner {|{01}\rangle }, \mathinner {|{10}\rangle }, \mathinner {|{11}\rangle }\}\) denotes the standard basis in the four-dimensional Hilbert space and \(\alpha \in [0, 2 \pi )\) stands for the relative phase. In our model, we select a sample of input states of the form Eq. 18 with the relative phase covering the full range. Then, for each input state, we generate measured photon counts from the 36 measurement operators \(\widetilde{M}^{2q}_{H} (t_i, t_j)\) with the Poisson noise. Finally, each state is reconstructed: by minimizing the likelihood function, we obtain the values of the parameters \(w_1, w_2, \dots , w_{16}\) that optimally fit the simulated experimental data.

3.3 Performance analysis

The goal of the research is to quantify the efficiency of the measurement scheme introduced in Sect. 2. First, we test it on qubits and then on entangled qubits. Each time, we select a sample of quantum states. For every input state \(\rho _{in}\), we generate noisy measurements and perform tomographic reconstruction by MLE. Then, the original state is compared with its estimate, \(\rho _{out}\), by computing the value of quantum fidelity [40,41,42]:

$$\begin{aligned} \mathcal {F} := \left( \mathrm{tr}\sqrt{\sqrt{\rho _{out}} \, \rho _{in} \, \sqrt{\rho _{out}}} \right) ^2. \end{aligned}$$
(19)

Finally, we calculate the average fidelity over the sample, denoted by \(\mathcal {F}_{av}\), which is the figure of merit quantifying the accuracy of the measurement scheme, cf. Ref. [33, 35, 43]. In addition, the sample standard deviation (SD) is provided to quantify the amount of statistical dispersion.
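The fidelity of Eq. 19 can be implemented with an eigendecomposition-based matrix square root (the helper names are ours):

```python
import numpy as np

# Matrix square root of a positive semidefinite matrix via eigendecomposition.
def psd_sqrt(m):
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0, None)  # clip tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

# Quantum fidelity, Eq. (19).
def fidelity(rho_out, rho_in):
    s = psd_sqrt(rho_out)
    return np.real(np.trace(psd_sqrt(s @ rho_in @ s)))**2

rho_D  = np.array([[0.5, 0.5], [0.5, 0.5]])  # |D><D|
mixed  = np.eye(2) / 2                        # maximally mixed state
assert np.isclose(fidelity(rho_D, rho_D), 1)
assert np.isclose(fidelity(mixed, rho_D), 0.5)
```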

To better characterize the efficiency of the framework in the case of qubits, we consider a sample of pure pairs of orthogonal states: \(\{\rho _{in}, \rho _{in}^{\perp }\}\). Each state goes individually through the framework, and we obtain a pair of corresponding estimates: \(\{\rho _{out}, \rho _{out}^{\perp }\}\). For the outcomes of the QST framework, we compute the trace distance [6, 42]:

$$\begin{aligned} D (\rho _{out}, \rho _{out}^{\perp }) = \frac{1}{2} \mathrm{tr}\Vert \rho _{out} - \rho _{out}^{\perp }\Vert , \end{aligned}$$
(20)

where \(\Vert X\Vert \equiv \sqrt{X X^{\dagger }}\). Since we consider a sample of pure orthogonal states, we know that \(D (\rho _{in}, \rho _{in}^{\perp })=1\). Thus, the value of Eq. 20 quantifies the distinguishability of the quantum states delivered by the QST framework. Finally, for the sample of pairs, we calculate the mean trace distance, \(D_{av}\), which indicates how well, on average, orthogonality is preserved by the measurement scheme.
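Since the trace norm equals the sum of singular values, Eq. 20 reduces to a one-liner (the helper name is ours):

```python
import numpy as np

# Trace distance of Eq. (20) via singular values of the difference.
def trace_distance(a, b):
    return 0.5 * np.sum(np.linalg.svd(a - b, compute_uv=False))

H = np.array([[1, 0], [0, 0]], dtype=complex)  # |H><H|
V = np.array([[0, 0], [0, 1]], dtype=complex)  # |V><V|
assert np.isclose(trace_distance(H, V), 1)            # orthogonal pure states
assert np.isclose(trace_distance(H, np.eye(2)/2), 0.5)
```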

For entangled states, we compute the concurrence, \(C[\rho ]\), which quantifies the amount of entanglement [44, 45]. This figure of merit is related to another entanglement measure, the entanglement of formation [46]. For any quantum state \(\rho \), the concurrence satisfies \(C[\rho ] \in [0, 1]\), where \(C[\rho ] =0\) for separable states and \(C[\rho ] = 1\) for maximally entangled states. The fact that the concurrence is an entanglement monotone implies that it can be applied to quantify the amount of entanglement detected by a measurement scheme, see, e.g., Ref. [47,48,49,50]. In our framework, we consider a sample of maximally entangled states of the form Eq. 18. Then, for each density matrix resulting from the tomographic scheme, \(\rho _{out}\), we calculate the concurrence \(C[\rho _{out}]\). Next, the average concurrence, denoted as \(C_{av}\), is computed over the sample to evaluate the overall performance of the framework in entanglement detection (each result is accompanied by the corresponding SD to measure the variance in the sample).
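The concurrence itself is not spelled out in the text; for completeness, a sketch of the standard Wootters construction [44, 45] reads:

```python
import numpy as np

# Wootters concurrence: C = max(0, l1 - l2 - l3 - l4), where l_i are the
# decreasingly ordered square roots of the eigenvalues of
# R = rho (sy x sy) rho* (sy x sy).
def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |Phi(0)> of Eq. (18) is maximally entangled; the maximally mixed state is not.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi.conj())
assert np.isclose(concurrence(bell), 1)
assert np.isclose(concurrence(np.eye(4) / 4), 0)
```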

4 Results and analysis

4.1 Qubit tomography

First, we select a sample of 8820 qubits according to Eq. 14 with parameters \(r, \theta \), and \(\phi \) covering the whole Bloch ball. To each state from the sample, we apply the QST framework. Then, for the sample, we compute the average fidelity and the corresponding SD. In Table 1, in the columns with the heading “Mixed,” one finds the results for different values of the jitter (\(\sigma _j\)) and numbers of photons (\(\mathcal {N}\)).

We observe three tendencies that were anticipated. First, if we analyze the columns, we notice that the average fidelity declines as we increase the jitter, whereas the corresponding SD grows. This feature is attributed to the fact that the measurement operators are squeezed toward the center of the Bloch ball for a greater time uncertainty, see Fig. 2. Second, by comparing the figures across the rows, we see that the results improve if we increase the number of photons per measurement. This was also anticipated since the number of photons is negatively correlated with the influence of the Poisson noise. Third, we observe that the results obtained from low-photon statistics exhibit a large variance, which implies that the fidelities are more scattered.

One may argue that the decline in the accuracy of state estimation is not so significant when compared with the collapse of the measurement operators, as presented in Fig. 2. This comes from the fact that, with the squeezed measurement operators, we can still properly estimate the states which lie inside the Bloch ball. However, in many applications, we do not implement high-entropy states but utilize only pure states, e.g., to encode information or in QKD protocols. Therefore, it appears justifiable to measure the accuracy of the framework for a sample of pure states.

Table 1 Average fidelity with SD in QST of qubits in different measurement scenarios

In Table 1, in the columns with the heading “Pure,” one finds the results for a sample of 420 pure states, i.e., we substitute \(r=1\) into Eq. 14, and the other parameters (\(\theta \) and \(\phi \)) take a set of values that covers the Bloch sphere. Apart from the same tendencies as already described, we can formulate two other conclusions. If we analyze \(\sigma _j = 0\) (ideal measurements), then we see that the QST framework is more accurate in pure state reconstruction for the lowest number of photons (i.e., \(\mathcal {N} =10\)) than it is for mixed states. However, if we consider a nonzero value of the detector jitter, we notice that the quality of pure state estimation is substantially lower than in the case of a general sample of qubits. These results confirm the hypothesis that as the measurement operators shrink, they become less efficient at pure state estimation.

Fig. 3

Plots of \(D_{av} (\sigma _j)\) for a sample of 210 pairs of orthogonal qubits. Three different numbers of photons per measurement were considered. Error bars correspond to one SD

Next, the sample of 420 pure states is divided into 210 pairs of orthogonal states. For each pair of estimates resulting from the tomographic technique, we compute the trace distance. In Fig. 3, one can observe the average trace distance for the sample, \(D_{av} (\sigma _j)\), versus the amount of time uncertainty quantified by \(\sigma _j\). The plots were generated for three different numbers of photons produced by the source per measurement (ensemble size). The most reliable plot corresponds to the greatest number of photons since, in such a case, the errors related to photon-counting are the least severe. For \(\mathcal {N} = 1\,000\), we can observe how the average trace distance between the estimated states decreases as we add more time uncertainty into the measurements. In particular, if \(\sigma _j=0\), we observe that the orthogonality is perfectly preserved. On the other hand, for \(\sigma _j = 0.75 \,T\) the average trace distance is very close to zero. This means that the framework cannot differentiate between orthogonal input states for higher values of the detector jitter. These numerical results are in agreement with Fig. 2, where it was shown that for \(\sigma _j = 0.75 \,T\) the trajectory of \(\widetilde{M}_{H} (t) \) shrinks into a point in the center of the Bloch sphere. Thus, such a measurement scheme produces estimates which lie in the region of the maximally mixed state and, for this reason, the distance between the estimates of orthogonal states is close to zero.

Finally, a separate remark should be made concerning the performance of the measurements when \(\mathcal {N}=10\). We observe that the plot for this case lies above the other two. One might think that the scenario with the lowest number of photons has an advantage over the other approaches when it comes to the distinguishability between quantum states. However, this effect is deceptive. There is no rational reason to assume that if the measurement operators are shrunk into a point, we can perform better by utilizing fewer photons. In such a case, the plot for \(\mathcal {N}=10\) should converge to zero like the other two. In addition, we observe that the results corresponding to \(\mathcal {N}=10\) feature a great deal of variance, which is indicated in Fig. 3 by error bars representing one SD. Therefore, this effect should be attributed to the Poisson noise, which distorts the measured counts significantly and leads to dysfunctional state estimation.

4.2 Entangled qubits tomography

We select a sample of 200 entangled qubits in the form Eq. 18 that differ in the value of the relative phase. Each input state is reconstructed based on the measurement scheme described in Sect. 3.2. Then, in order to quantify the amount of entanglement detected by the framework, we compute the average concurrence for the output states. In Fig. 4, we can observe the plots of \(C_{av} (\sigma _j)\) with error bars (one SD) presented as a function of \(\sigma _j\) for three numbers of photon pairs generated by the source (per measurement).

Fig. 4

Plots of \(C_{av} (\sigma _j)\) for a sample of 200 two-qubit entangled states. Three different numbers of photon pairs per measurement were considered

First, we can observe the detrimental impact of the detector jitter on entanglement detection. We notice that \(C_{av} (0.25\, T) \approx 0\), regardless of the number of photon pairs involved in each measurement. All three plots start at one point, and they converge since for \(\sigma _j \ge 0.25\,T\), we obtain zero concurrence. Along the whole interval of \(\sigma _j\), the plots corresponding to \(\mathcal {N}=100\) and \(\mathcal {N}=1\,000\) coincide. However, for \(\mathcal {N}=10\), the average concurrence exceeds the other two when \(0.025\,T< \sigma _j < 0.225\,T\). In particular, the discrepancy between the plots is most substantial for the middle values of the analyzed interval, i.e., for \(0.1\,T \le \sigma _j \le 0.175 \,T\) the difference in the average concurrence between \(\mathcal {N}=10\) and the other two scenarios is more than 0.15. The simulations have been repeated several times, and very similar results were obtained. It should be stressed that the concurrence for the states obtained from the low-photon statistics features a great amount of variance, as shown by the error bars. Thus, the results are scattered over a wide range. This implies that with \(\mathcal {N}=10\), we cannot guarantee high-quality entanglement detection since, for a given state, the particular outcome may vary significantly.

Fig. 5

Plots of the average fidelity, \(\mathcal {F}_{av} (\sigma _j)\), in QST of entangled qubits for a sample of 200 input states of the form given in Eq. 18

For practical implementations, we are usually interested in detecting an amount of entanglement sufficient to announce the violation of the Bell-CHSH inequality [2, 51]. As far as the concurrence is concerned, the detection of non-classical correlations in a system described by a density matrix \(\rho \) is guaranteed if \(C[\rho ] > 1/\sqrt{2}\) [52, 53]. Based on this condition, we can evaluate the efficiency of the framework in entanglement detection. Since \(99.74\%\) of normally distributed values lie within three SDs of the mean, we extend the error bars to three SDs to guarantee that almost all states satisfy the condition for the Bell-CHSH inequality violation. Then, for \(\sigma _j/T = 0.065\), we obtain the following three-sigma intervals: \(C[\rho _{10}] = 0.85\pm 0.14\), \(C[\rho _{100}] = 0.77\pm 0.06\), and \(C[\rho _{1000}] = 0.74\pm 0.02\), where \(\rho _{\mathcal {N}}\) denotes a density matrix obtained from \(\mathcal {N}\) photon pairs per measurement. Since every lower bound exceeds \(1/\sqrt{2}\), for all \(0 \le \sigma _j/T \le 0.065\) we can guarantee entanglement detection with any number of photon pairs considered in this model. As one can notice, the three-sigma interval for \(\mathcal {N}=1\,000\) is relatively narrow, which allows one to predict the concurrence accurately, whereas the low-photon scenario leads to more significant statistical dispersion. For \(\sigma _j/T = 0.07\), it was verified that the three-sigma interval of the average concurrence drops below the threshold, irrespective of the number of photon pairs involved in a single measurement. Therefore, \([0, 0.065\,T]\) can be considered an admissible noise interval of the detector jitter, \(\sigma _j\), for entanglement detection.

To further investigate the performance of the QST framework, we computed the average fidelity, which is presented with error bars in Fig. 5. The plots indicate a modest advantage in the average fidelity for the low-photon statistics. However, as \(\sigma _j\) increases, we observe a significant growth of the SD corresponding to \(\mathcal {N}=10\). This implies that state estimation with a low number of photon pairs involves more variability.
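For completeness, the fidelity between a reconstructed state and the input state can be computed as follows. This is a minimal NumPy-only sketch assuming the standard Uhlmann definition, \(\mathcal {F}(\rho ,\sigma ) = \big (\mathrm {Tr}\sqrt{\sqrt{\rho }\,\sigma \sqrt{\rho }}\big )^2\); the helper for the matrix square root is hypothetical and not part of the article's code:

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0, None)            # clip tiny negative eigenvalues
    return v @ np.diag(np.sqrt(w)) @ v.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
    s = _sqrtm_psd(rho)
    return float(np.real(np.trace(_sqrtm_psd(s @ sigma @ s))) ** 2)

# sanity check: a pure state has unit fidelity with itself
psi = np.array([1, 0, 0, 0], dtype=complex)
rho = np.outer(psi, psi.conj())
print(fidelity(rho, rho))   # -> 1.0 (up to numerical precision)
```

Averaging this quantity over the sample of reconstructed states yields \(\mathcal {F}_{av} (\sigma _j)\) as plotted in Fig. 5.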

5 Discussion and summary

In the article, we introduced a QST framework based on time-continuous measurements generated by unitary dynamics. We considered a realistic scenario with measurement results distorted by both time uncertainty and the Poisson noise. The framework was tested numerically on qubits and entangled qubits.

As for qubits, it was demonstrated that the framework performs efficiently for a sample of states belonging to the Bloch ball. However, the tomographic scheme is less accurate if we consider only pure states. Since the measurement points collapse toward the interior of the Bloch ball, the average fidelity of pure-state tomography declines rapidly. In addition, by comparing the results for different numbers of photons per measurement, we discovered that if we utilize 10 photons, the fidelity in a sample of quantum states features a large variance.

Furthermore, we studied how well two orthogonal states can be distinguished with the framework. Here, we identified a deceptive effect associated with a low number of photons per measurement. Due to the Poisson noise and time uncertainty, the plot of the average fidelity corresponding to the single-photon scenario does not converge, which might be incorrectly interpreted as better performance.

In the case of entangled qubits, we studied the precision of state reconstruction quantified by the average fidelity. Furthermore, the amount of entanglement detected by the framework was expressed by the average concurrence. With respect to both aspects, it was discovered that the single-photon scenario yields higher figures of merit, but at the same time, the results feature more variance. Numerical simulations were re-evaluated several times, and the results turned out to be repeatable.

For entangled photon pairs, it was evident that the figures of merit decrease rapidly as we add more time uncertainty. Nonetheless, we identified a noise interval that is admissible for the violation of the Bell-CHSH inequality. If the boundary value of the detector jitter is not exceeded, we can guarantee, within the three-sigma accuracy, the detection of nonlocal correlations in spite of the Poisson noise and time uncertainty.

Last but not least, the framework should be discussed with reference to current technological capabilities. The accuracy of measurements in the time domain is strictly connected with the quality of single-photon detectors. In our approach, we quantified the temporal uncertainty of the detector by the timing jitter, expressed as a ratio relative to the period characterizing the dynamics. In practice, one can apply superconducting nanowire single-photon detectors (SNSPDs), for which the jitter is about 25 ps [54, 55]. Such off-the-shelf instruments may not be sufficient, but the temporal resolution of photon detectors has constantly been improving, and for state-of-the-art equipment, the jitter can drop below 3 ps [56]. Thus, when thinking of practical applications of the framework, it should be tested how an available detector will influence state reconstruction for a given polarization evolution that can be generated and controlled in a laboratory.
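Assuming the admissible ratio \(\sigma _j/T \le 0.065\) obtained in the simulations carries over to a laboratory setting, one can translate the quoted detector jitters into a minimal period of the polarization dynamics. A back-of-the-envelope sketch:

```python
# minimal dynamics period T such that the detector jitter stays within
# the admissible ratio sigma_j / T <= 0.065 found in the simulations
# (assumption: the simulated threshold applies directly to the experiment)
ADMISSIBLE_RATIO = 0.065

for jitter_ps in (25.0, 3.0):   # typical SNSPD vs state-of-the-art jitter
    t_min = jitter_ps / ADMISSIBLE_RATIO
    print(f"jitter {jitter_ps:4.1f} ps -> dynamics period T >= {t_min:.0f} ps")
```

With a 25 ps jitter, the polarization evolution would need a period of roughly 385 ps or longer, whereas a 3 ps detector relaxes this to about 46 ps, which illustrates why the improving temporal resolution of detectors matters for the framework.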

To conclude, we can state that the framework presented in the article is in accordance with current technological capabilities because both continuous polarization controllers and time-resolved photon detectors are available on the market. In the future, the model can be extended by incorporating the effects caused by dispersion in the fiber. Furthermore, different quantum state estimation techniques can be applied and compared in terms of their efficiency.