1 Introduction

Fluorescence microscopy forms the foundation of modern optical microscopy in live-cell imaging [1]. The advent of fluorescence super-resolution (SR) microscopy [2,3,4,5] even breaks the diffraction limit of conventional optical microscopy. Although in principle the spatial resolution of SR can reach the single-molecule level, in practice it is limited by the detectable fluorescent photon flux. To overcome this problem, on the one hand, effort is devoted to developing dyes with higher efficiency and photostability, along with more sensitive and faster detection techniques [6,7]. On the other hand, computational image enhancement techniques [8,9,10,11,12,13,14,15,16,17] are highly welcome, as they carry the potential to extend imaging duration and improve spatiotemporal resolution by recovering the fluorescence signal from abundant noise under a low photon budget.

Deconvolution is one of the most effective schemes to recover fluorescence signals from optical blurring and noise. Classical inverse-problem solvers such as Wiener filtering and Richardson-Lucy (RL) iteration work at moderate noise levels but perform poorly on low-SNR images. Regularization-based deconvolution methods are more noise-robust and perform better. Total-variation (TV) [8] and Hessian [9] regularized deconvolutions, which characterize the continuity feature, were developed to compensate for low-SNR images in structured illumination microscopy (SIM), enabling imaging with faster speed and longer duration. Sparse deconvolution [10], based on image sparsity and Hessian continuity, was further proposed to enhance the standard RL iteration and substantially improves the resolution of SIM. However, because conventional algorithms rest on naïve assumptions about fluorescence images, losses in spatiotemporal details are usually inevitable. Moreover, because statistical maximum likelihood estimation (MLE) methods such as RL iteration are prone to artifacts [18,19], their fidelity remains in doubt. The alternative physics-rooted constrained inverse filtering, such as the fast iterative shrinkage-thresholding algorithm (FISTA) [20], has been less effective for fluorescence images owing to its fragility to noise and its discordance with variance-based regularization.

As an improvement over conventional data-processing methods, the wavelet transform was proposed by mathematicians in the early twentieth century [21] and was first applied in engineering, for seismic-wave data analysis, in 1984 [22]. Later, the theory of multi-resolution analysis (MRA) [23,24,25], which decomposes a signal into a series of resolution levels, was proposed and offered a novel idea for signal-space segmentation. The orthogonal wavelet [26,27] was the first MRA tool to be developed and achieved excellent results in a variety of image-processing tasks [28,29,30,31,32,33]. As the importance of sparsity in signal processing was clarified [34], redundant multiscale bases [35,36,37,38,39] that provide a sparser representation of signals were further developed and show superior performance [40,41,42,43]. Because fluorescence images differ distinctly from macro-scale natural images, the MRA approach for fluorescence images remains an underexplored domain so far.

Here, to elevate the fidelity of the algorithmic approach, we propose an MRA framework built on two major characteristics of fluorescence images: high contrast across edges and high continuity along edges. MRA outperforms the state-of-the-art variance-based regularization with conspicuously improved signal–noise discrimination. More importantly, it seamlessly aligns with the tenets of physics-rooted iterative inverse filtering, thereby achieving computational-SR results that exhibit notable similarity and comparable error rates to those obtained through physical means. We further devise sectioning MRA (SecMRA) to deal with heavy background conditions, which outperforms conventional schemes. We demonstrate that our methods can accomplish more challenging imaging tasks with ensured fidelity in various fluorescence microscopies, such as discerning ~ 60-nm features with SIM, capturing the dynamics of the endoplasmic reticulum (ER), and supporting the long-term observation of organelle interactions.

2 Results

2.1 MRA deconvolution algorithm

Different from traditional variance-based regularizations that encapsulate the continuity attributes of fluorescence images (Note S1), we propose an MRA framework for noise control in the deconvolution process. The major drawback of variance-based regularizations is that they entangle the continuity within the biological structure with its border against the background (Fig. S1). Mathematically, they simply designate components with large spatial derivatives as noise. We instead focus on two cardinal attributes of fluorescence images to differentiate noise: (1) high contrast across the edge, and (2) high continuity along the edge. The heightened specificity of most fluorophores to particular biological structures, as opposed to the surrounding background, engenders the sharp edges within fluorescence images. High continuity is also an important feature because of the connectivity of the biological specimen and the spatial-sampling nature of fluorescence microscopes. However, this feature only exists within the biological structures, not globally. Consequently, fluorescence images carry abundant anisotropic information, namely continuous structures along edges in different directions. Based on the above analysis, we employ the framelet [37] and curvelet [35,36] transforms (Note S2) for noise control in fluorescence images. The framelet and curvelet transforms can effectively detect and encode the across-edge contrast and along-edge continuity information into high-value coefficients, respectively. By thresholding the framelet and curvelet coefficients, the noise can be removed while preserving the two kinds of information outlined above.

To verify the above analysis, we examined the framelet and curvelet sparsity of various organelle images at different noise levels. Noiseless fluorescence images exhibit high sparsity in both the framelet and curvelet domains, which gradually decreases as the noise level increases (Fig. S2). By increasing the sparsity of the framelet and curvelet coefficients through hard thresholding, the sharp edges and along-edge continuity of the fluorescence image can be extracted from the noise contamination (Note S3 and Fig. S3). Based on the above assumptions and verifications, we propose the co-sparsity of the framelet and curvelet coefficients as the regularization for the fluorescence image deconvolution model, termed MRA deconvolution:

$$\mathop {\arg \min }\limits_{x} \left\| {f - Ax} \right\|_{2}^{2} + \lambda_{1} \left\| {Wx} \right\|_{1} + \lambda_{2} \left\| {Cx} \right\|_{1}$$
(1)

where f is the input degraded image, A denotes the blur kernel in matrix form, x is the recovered image, W denotes the framelet transform, C denotes the curvelet transform, λ1 and λ2 are two regularization parameters, and ||·||1 and ||·||2 denote the l1 and l2 norms, respectively. Reducing the value of the first term, also called the fidelity term, recovers high-frequency information. The second and third terms are the sparsity in the framelet and curvelet domains, respectively, which mainly control the noise. FISTA is employed to minimize this optimization problem (Note S4.1). In contrast to mainstream statistical MLE deconvolutions such as Sparse [10], which solve the statistical likelihood \(\arg\min_x -\log p(f\,|\,x)\), we seek the model solution of the fidelity penalty through gradient descent.
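
As a concrete illustration (not the released MATLAB implementation), the following Python sketch minimizes Eq. (1) with FISTA under the assumptions that the blur kernel A is Gaussian and that an ordinary redundant (stationary) wavelet transform from PyWavelets stands in for the framelet and curvelet transforms; the helper soft_threshold_wavelet and all parameter values are illustrative.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def soft_threshold_wavelet(img, lam, wavelet="db4", level=2):
    """Proximal step: soft-threshold the detail coefficients of a redundant
    (stationary) wavelet transform; a stand-in for the joint framelet/curvelet
    thresholding of the real pipeline. Image sides must be divisible by 2**level."""
    coeffs = pywt.swt2(img, wavelet, level=level)
    shrunk = [(cA, tuple(pywt.threshold(d, lam, mode="soft") for d in details))
              for cA, details in coeffs]
    return pywt.iswt2(shrunk, wavelet)

def mra_deconv_fista(f, psf_sigma, lam=0.01, n_iter=100, step=0.5):
    """Sketch of MRA-style deconvolution: accelerated gradient descent on the
    fidelity term ||f - Ax||^2 with a wavelet-sparsity proximal step."""
    A = lambda x: gaussian_filter(x, psf_sigma)    # blur operator (symmetric, so A^T = A)
    x = f.astype(np.float64).copy()
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A(A(y) - f)                         # gradient of the fidelity term
        x_new = soft_threshold_wavelet(y - step * grad, lam * step)
        x_new = np.clip(x_new, 0, None)            # fluorescence signal is non-negative
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2  # FISTA momentum update
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```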

To enhance the ability of MRA to deconvolve time-lapse or 3D images with good continuity, we develop a spatiotemporal continuity denoising scheme based on soft-thresholding of 3D framelet and 3D dual-tree complex wavelet (DTCW) [44,45] coefficients (Note S5), which contribute to extracting the sharp-edge and along-edge continuity features in the third dimension, respectively. The principle and workflow of the MRA deconvolution algorithm are shown in Fig. 1a; as a post-processing method it can be applied to all kinds of fluorescence imaging techniques.
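
As a rough, hedged sketch of the spatiotemporal idea (not the exact 3D framelet/DTCW pipeline of Note S5), the snippet below soft-thresholds the detail coefficients of a separable 3D wavelet decomposition of an x-y-t stack, so that continuity along the temporal axis is exploited together with the spatial axes; the wavelet choice and threshold are assumptions.

```python
import numpy as np
import pywt

def spatiotemporal_soft_threshold(stack, lam=0.02, wavelet="db2", level=2):
    """Soft-threshold the detail coefficients of a 3D (x-y-t) wavelet
    decomposition; a stand-in for the 3D framelet / dual-tree complex
    wavelet thresholding used in the actual MRA pipeline (Note S5)."""
    coeffs = pywt.wavedecn(stack, wavelet, level=level)
    out = [coeffs[0]]                               # keep the coarse approximation
    for level_dict in coeffs[1:]:                   # shrink every detail sub-band
        out.append({k: pywt.threshold(v, lam, mode="soft")
                    for k, v in level_dict.items()})
    denoised = pywt.waverecn(out, wavelet)
    # waverecn may pad odd-sized axes; crop back to the input shape
    return denoised[tuple(slice(s) for s in stack.shape)]
```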

Fig. 1
figure 1

MRA regularization demonstrates excellent noise-control performance. a The diagram of the MRA deconvolution algorithm. b The SIM images of actin in a U2OS cell under different illumination intensities are shown in the left column, and the corresponding MRA deconvolution results are shown on the right. c The magnified blue boxed region in b (Left), and the intensity profiles of the raw SIM images and MRA deconvolved images along the dotted lines in the left column (Right). d The PSNR (Top) and SBR (Bottom) of the original SIM and MRA deconvolved images. The red-boxed region in b was used to estimate noise and background for the PSNR calculation. The blue-boxed region in b was used to estimate signal and background for the SBR calculation. e, h The GT data are the first frame of the lysosome and microtubule time-lapse images (n = 20). Noisy data are raw images contaminated with 50% Gaussian + 50% Poisson mixed noise. The corresponding MRA and Hessian deconvolution results are shown at the bottom. The comparison at different noise levels is shown in Fig. S7. f, i The SNR and FWHM of the structures indicated by the two white arrows in the images displayed in d and g. The SNR calculation includes the whole image stack. For the FWHM measurement, the parameters of Hessian and MRA were chosen so that the generated images have an equivalent resolution level. The results obtained with different parameters in Hessian are shown in Fig. S6. g, j Pearson correlation coefficient between MRA deconvolved data and raw data under multiple noise levels (n = 20). ****p < 0.0001. Scale bars: 5 μm (b), 0.5 μm (c), 1 μm (e), 2 μm (h)

2.2 MRA reveals superior noise-control performance

We first validate the efficacy of MRA for noise control on synthesized geometric structures with known ground truth (GT) (Note S6, Fig. S4). The results show that MRA deconvolution can faithfully recover these geometrical structures under severe noise. To validate its performance on real fluorescence images, we captured raw SIM images of actin filaments with gradient SNR by tuning the illumination power (Fig. 1b). Two parallel actin filaments can be resolved by MRA deconvolution even with 3% illumination power, whereas conventional SIM reconstruction can barely resolve them until the illumination power reaches 20% (Fig. 1c, d). The high-SNR reference clearly confirms the fidelity of MRA and its robustness to noise. Conversely, when resolving this sample, conventional regularization faces the over-smoothing problem even as the SNR improves (Fig. S1). The unique design of across-edge and along-edge information extraction constitutes the advantage of MRA (Fig. S5).

Subsequently, we compared the performance of MRA with the state-of-the-art variance-based penalty on time-lapse images with low SNR (Fig. 1e–h). The results show that, at an equivalent resolution level, the SNR improvement provided by MRA is ~ 10 dB and ~ 5 dB higher than Hessian deconvolution on the lysosome and microtubule time-lapse images, respectively (Fig. 1f, i). Notably, at heightened noise levels, Hessian regularization fails to effectively discriminate between noise and signal, as it only emphasizes continuous information (Fig. S6). MRA excels because important image edge information is well preserved in high-value coefficients. Across various SNR conditions, MRA consistently outperforms Hessian regularization (Fig. S7) and shows robust behavior against noise (Fig. 1g, j and Videos S1, 2).

With its remarkable noise-control proficiency, MRA can assist live-cell imaging at faster speeds and over longer durations by compensating for photon-limited fluorescence images. As an example, we demonstrate that MRA can help capture the rapid dynamics of the ER [46,47], a large organelle that plays a crucial role in various life activities. Employing a widefield microscope, we acquired ER tubule time-lapse images (Fig. S8a and Video S3) with a temporal resolution of ~ 417 Hz and a duration spanning ~ 10 min. Due to the ultralow SNR, the ER network in the original images is almost invisible and cannot be segmented even by a well-trained TWS machine-learning model [48]; after noise reduction through MRA deconvolution, it can be finely resolved. Consequently, the rapid movement of vesicles and the variation of the ER network can be clearly visualized (Fig. S8b).

2.3 MRA improves fluorescence imaging resolution with ensured fidelity

In MRA, we adopt the model solution to the inverse problem for resolution improvement, which ensures the reduction of the fidelity penalty. The alternative statistical solution, such as RL iteration, is more commonly employed in fluorescence imaging because its MLE nature makes it more robust to noise. However, this also makes it an inherently imprecise solution to the fidelity penalty, which hampers fidelity and induces artifacts [18,19]. We demonstrate in simulation that RL does not recover the blurred image to the GT even with zero noise, generating considerable artifacts and false high-frequency details (Fig. S9). Conversely, iterative gradient descent on the fidelity penalty converges to the GT, as guaranteed by the convexity of the fidelity term. When processing practical fluorescence images contaminated by noise, our MRA regularization aids the noise control. MRA aligns well with the tenets of the physics-rooted model solution since it does not constrain global continuity (Fig. S10). Owing to the inherent non-smoothness of high-resolution images, the conventional variance-based penalty conflicts with the iterative inverse filtering of the model solution. Additionally, we introduce a modification to the FISTA iteration to accelerate the reduction of the fidelity cost in high-SNR scenarios, which benefits the inference of high-frequency information (Note S4.3, Fig. S11).

With the aforementioned design, we contend that MRA prioritizes fidelity in computational resolution extension, which means truly improving resolution without producing notable artifacts. To assess the fidelity of MRA and conventional mainstream MLE methods, we employ metrics focusing on two aspects: the reversibility to the low-resolution input and the resemblance to the high-resolution GT. On a confocal mitochondria image, all methods exhibit resolution improvement (Fig. 2a, b). Notably, noticeable artificial structures appear in the MLE methods upon re-blurring, which are not seen in MRA. Sparse and the commercial Huygens deconvolution (Scientific Volume Imaging) did not resolve the subtle cristae cluster structure, which was finely unveiled after the ~ 1.5-fold resolution improvement provided by MRA. Sparse and Huygens deconvolved images possess higher resolution as measured by decorrelation analysis [49], which is largely due to simple amplification of the high-frequency part in the MLE process and lacks fidelity. This is also reflected in the observation that MLE deconvolved images show several-fold higher resolution-scaled error [50] than MRA, which authentically reduces the fidelity penalty (Fig. 2c, d). Another deficit of the MLE method is that it can arbitrarily increase the sharpness: at typical iteration numbers, MLE methods push structures' full-width-at-half-maxima (FWHM) far below the theoretical unblurred value (Fig. S12). In contrast, with increasing iterations, MRA's inference approaches the theoretical value with respect to the PSF. A sketch of the re-blurring check used for the reversibility assessment is given below.
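
A minimal sketch of this re-blurring (reversibility) check, assuming the PSF can be approximated by a Gaussian and using scikit-image's SSIM, could look as follows; the affine intensity fit accounts for the normalization of deconvolved outputs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def reblur_fidelity(deconvolved, original, psf_sigma):
    """Convolve the deconvolved image back with the (assumed Gaussian) PSF and
    compare it with the original acquisition; a low SSIM or a large residual
    flags artificial structures introduced by the deconvolution."""
    original = original.astype(np.float64)
    reblurred = gaussian_filter(deconvolved.astype(np.float64), psf_sigma)
    # allow an affine intensity rescaling, since deconvolved outputs are normalized
    a, b = np.polyfit(reblurred.ravel(), original.ravel(), 1)
    reblurred = a * reblurred + b
    ssim = structural_similarity(original, reblurred,
                                 data_range=original.max() - original.min())
    rmse = np.sqrt(np.mean((original - reblurred) ** 2))
    return ssim, rmse
```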

Fig. 2
figure 2

Verification of MRA's fidelity-ensured computational resolution extension. a The confocal image of mitochondria in a COS-7 cell. b Magnified blue boxed region in a. The convolved-back image was obtained by convolving the deconvolved image with the PSF. White arrows denote the artifacts found in the convolved-back image, using the original confocal image as reference. The bar plot shows the SSIM value of the convolved-back image with the original confocal image. c The resolution-scaled error map of the MRA, Huygens, and Sparse deconvolution results obtained by the NanoJ-SQUIRREL software [50]. The error value is calculated using normalized 8-bit images. d The fidelity metrics, which include the resolution-scaled error value (Left) and the fidelity penalty (Right) of the three deconvolved images. e The widefield and SIM images of actin in a U2OS cell. Huygens, Sparse, and MRA deconvolution are used to deconvolve the widefield image. f Intensity profiles along the lines indicated by the two white arrowheads in e. g The FWHM value of the intensity profiles along the white dotted lines in e. The resolution of the SIM image exceeds the upper resolution limit (130 nm) allowed by the widefield pixel size. Therefore, the SIM image can serve as a non-diffraction GT for FWHM measurement at the 65-nm pixel-size level. h The fidelity metrics, which include the resolution-scaled error value of the four SR images (Left) and the SSIM value of the three deconvolved images with the SIM image (Right). i The linear-nonlinear SIM image pair obtained from the open-source BioSR dataset [15]. The linear-SIM image was deconvolved by Huygens, Sparse, and MRA. j The intensity profiles along the lines indicated by the two white arrowheads in i. Decorrelation analysis [49] is used to estimate the resolution values displayed in this figure. Scale bars: 10 μm (a), 0.5 μm (b), 1 μm (e, i)

Subsequent evaluations employ low–high resolution reference pairs. The MRA deconvolved widefield actin image achieves ~ 1.4-fold resolution improvement, yielding high similarity (SSIM: 0.85) with the SR-SIM result (Fig. 2e). In contrast, MLE deconvolution produces spuriously sharpened edges, failing to compute genuine high-frequency information (Fig. 2f, g). Notably, the fidelity metrics of the SIM and MRA images are comparable, significantly surpassing those of the MLE methods (Fig. 2h). Widefield–SIM pair assessments on other samples further confirm MLE's tendency to introduce illusory high-frequency information, while MRA's fidelity closely resembles physical SR results (Fig. S13). On the open-source BioSR dataset [15], we further show that MRA can resolve a ~ 60-nm feature in a linear-SIM image, which is confirmed by the nonlinear-SIM result (Fig. 2i). Conversely, the two MLE counterparts failed to resolve such subtle structures and produced false high-frequency information that contradicts the nonlinear-SIM result (Fig. 2j). We also examined the performance of MLE deconvolution with different iteration numbers. The results show that reducing the iteration number can alleviate the over-sharpening in MLE deconvolution, but it still fails to infer real high-resolution structures as MRA does (Fig. S14).

Incorporating MRA regularization into the MLE-based deconvolution framework also improves its performance, yet the result is still affected by the deficits of the MLE nature (Fig. S15). The presence of noise necessitates weighting the regularization term in MRA, which leaves some gap between the MRA result and the ideal non-diffraction situation. In this case, executing a few RL iterations (1–5) after MRA convergence may, to some extent, mitigate the discrepancy in image contrast and enhance visual perception (Fig. S16). The high-frequency information framework inferred by MRA ensures fidelity, but excessive RL iterations can still damage it.

The resolving power of MRA can be readily applied to capture subtle structures in various imaging modalities and samples. MRA shows impressive resolving ability on the Argo-SIM slide, accomplishing the separation of parallel fluorescent lines with a marginal distance of 30 nm (Fig. 3a). On the nanoruler sample, the resolving capability of MRA is also substantiated, assisting in the separation of spot pairs 70 nm apart under SIM imaging (Fig. S17). An equivalent ~ 70-nm resolution was achieved in actin structures under the SIM imaging modality (Fig. 3b–d). MRA can also be utilized to directly compute SR information on widefield images (Fig. 3e). Some ring structures in the ER network blurred by the widefield PSF can be finely resolved after MRA deconvolution, allowing a finer observation of the ER dynamics (Fig. 3f and Video S4). We also demonstrate the utility of MRA in facilitating commercial LiveSR imaging to capture mitochondrial fine-structure dynamics (Fig. 3g and Video S5). MRA deconvolution discerns previously obscured mitochondrial cristae cluster structures within the original LiveSR images, unveiling their detailed dynamic features (Fig. 3h).

Fig. 3
figure 3

Application of MRA to improve the resolution of fluorescence images with high fidelity. a Separation of Argo-SIM parallel fluorescence lines by MRA deconvolution. The bottom Avg (10) denotes averaging ten SIM images to improve SNR and then processing with MRA deconvolution. b The SIM image of actin in a U2OS cell, and the MRA deconvolution result. c Magnified blue boxed region in a. The right column shows the intensity profile along the two white arrowheads in the left column, which shows a structure characterizing 65-nm resolution. d Magnified orange boxed region in a, which shows the resolving of a complex actin structure. e The first frame of the time-lapse images of ER tubules in a COS-7 cell captured by widefield microscopy, and the MRA deconvolution result. The series contains 50 frames with a frame rate of ~ 417 fps. f Magnified blue boxed region in d at different frames. The intensity profiles between the two white arrowheads in frames 16 and 45 are displayed on their right side. The single arrow marks the elongation process of an ER tubule. g LiveSR time-lapse images of mitochondria in U2OS cells, and the MRA deconvolution result. h The magnified blue boxed region in g. Scale bars: 0.1 μm (a), 5 μm (b, e), 0.2 μm (c, d), 2 μm (f), 10 μm (g), 0.5 μm (h)

2.4 SecMRA deconvolution algorithm

Although MRA shows superior deconvolution performance, it is inherently limited in dealing with fluorescence background, since only planar information is known. Traditional solutions include ||x||1 regularization [10,51] and preliminary background subtraction [49]. However, loss of detail and weakness against strong background are inevitable because of the naïve global thresholding and the SNR loss in background subtraction (Fig. S18). To address this issue, we provide a novel scheme by introducing a bias-thresholding mechanism into the MRA iteration, termed SecMRA deconvolution (Note S4.2). This method selectively penalizes the background information and controls the noise in each deconvolution iteration, enabling background mitigation with better preserved details. In the deconvolution iteration, a trade-off between background attenuation and the MRA contribution is still inherent. To alleviate an ultra-strong fluorescence background, we add an optimized framelet-based preliminary background subtraction step (Note S4.2 and Fig. S19); the whole procedure is shown in Fig. 4a, and an illustrative sketch of the background-subtraction idea is given below.
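
The exact bias-thresholding rule is given in Note S4.2 and is not reproduced here; purely as an illustrative sketch of a framelet-style preliminary background subtraction, the snippet below estimates a smooth background as the coarse approximation of a multi-level wavelet decomposition and subtracts a fraction of it, leaving the detail sub-bands untouched so that fine structures are preserved. The wavelet choice, decomposition level, and frac parameter are assumptions.

```python
import numpy as np
import pywt

def wavelet_background_subtraction(img, wavelet="sym8", level=5, frac=1.0):
    """Estimate a smooth background as the coarse approximation of a wavelet
    decomposition and subtract (a fraction of) it; an illustrative stand-in
    for the framelet-based preliminary background subtraction in SecMRA."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    background_only = [coeffs[0]] + [tuple(np.zeros_like(d) for d in details)
                                     for details in coeffs[1:]]
    background = pywt.waverec2(background_only, wavelet)
    background = background[:img.shape[0], :img.shape[1]]   # crop possible padding
    return np.clip(img - frac * background, 0, None)
```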

Fig. 4
figure 4

SecMRA enhances fluorescence images with background in various microscopies. a The diagram of the SecMRA deconvolution algorithm. b Widefield and SD-confocal images of nuclei in mouse kidney cells, and the corresponding SecMRA deconvolution results (Left). Magnified orange boxed region of the left column (Right). c SoRa image of actin in mouse kidney cells, and the SecMRA deconvolution results (Left). Magnified orange boxed region of the left column (Middle). 3D view of the SoRa and SecMRA deconvolution results (Right). d ER tubules in a COS-7 cell captured by widefield microscopy, and the MRA and SecMRA deconvolution results; the blue graph at the bottom left shows the intensity histogram. The magnified orange boxed region is shown at the bottom right. e BPAEC cell mitochondria images captured by SD-confocal microscopy and SoRa, and the corresponding SecMRA deconvolution results (Left). Magnified orange boxed region of the left column (Right). f The LiveSR-captured Nile Red-labeled U2OS cell image, and the SecMRA deconvolution result (Left). Magnified orange boxed region of the left column (Right). g The SIM image of actin in a U2OS cell, and the corresponding SecMRA deconvolution result. Magnified orange boxed region of the left column (Right). h The STED image of mitochondria in a COS-7 cell, and the SecMRA deconvolution result (Left). Magnified orange boxed region of the left column (Right). Scale bars: 50 μm (b Left), 10 μm (b Right, c Left), 1 μm (c Right), 5 μm (d, e Left, f Left, g Left, h Left), 2 μm (d subgraph, f Right, g Right), 0.5 μm (e Right, h Right)

With its remarkable background-inhibition ability, SecMRA deconvolution extends MRA to a wider range of imaging conditions. We acquired images of mouse kidney cell nuclei using widefield and spinning-disk confocal (SD-confocal) microscopy, which exhibited significant background interference (Fig. 4b). After SecMRA deconvolution, the images show a substantial background reduction with finely preserved details. The SecMRA deconvolved widefield image even yields better quality than the original SD-confocal image. The SoRa imaging of mouse kidney cell actin is also affected by noise and out-of-focus signals, which were effectively removed by SecMRA (Fig. 4c and Video S6). Background significantly limits MRA in extracting high-frequency information, a limitation that SecMRA finely overcomes (Fig. 4d). SecMRA also inherits the high-fidelity feature in resolving high-frequency structures, providing 1.6-fold and 2.2-fold fidelity-ensured resolution improvements to the SD-confocal and SoRa mitochondria images, respectively (Fig. 4e). The SecMRA deconvolved SD-confocal image exhibits high similarity with SoRa. We further verify the fidelity of SecMRA resolution enhancement on other low–high resolution imaging pairs (Fig. S20). Additionally, we demonstrate the effectiveness of SecMRA in other common setups, including LiveSR (Fig. 4f), SIM (Fig. 4g), STED (Fig. 4h), and light-sheet microscopy [52,53] (Fig. S21).

2.5 SecMRA enables low-toxicity, long-term organelle interaction imaging

As a general algorithm for enhancing fluorescence images, SecMRA has great potential to support a variety of life-science research. Here we demonstrate the application of SecMRA to the observation of microtubule-associated and mitochondria-associated organelle interactions, which are crucial for cellular function [47,54]. Due to the photosensitivity of these structures, low-toxicity imaging is necessary, which can be greatly supported by SecMRA.

We first focused on microtubule-related interactions and performed two-color widefield imaging of microtubules and lysosomes in COS-7 cells (Fig. 5a and Video S7). SecMRA effectively reduced background, noise, and blurring, enabling SR visualization of microtubules and lysosomes (Fig. 5b; the resolution of lysosomes and microtubules is improved by ~ 1.5-fold and ~ 2.2-fold, respectively). With the assistance of SecMRA, we can finely observe the interactions of lysosomes and microtubules over a long term. The lysosomes in the region shown in Fig. 5c move within the microtubule framework, while those in another region shown in Fig. 5d rapidly move along a microtubule. In another dataset, we also observed lysosomes that stayed almost static during the ~ 12-min recording period and were closely bound to microtubules (Fig. S22 and Video S8). We also demonstrate the efficacy of SecMRA in SIM imaging (Fig. 5e and Video S9). The raw SIM image suffered from excessive background and noise that hindered observation. SecMRA effectively reduced the noise and compensated for signal loss due to photobleaching (Fig. 5f, g). With the assistance of SecMRA, we observed that some lysosomes gradually shifted away from the microtubule accumulation region where depolymerization occurred (Fig. 5h, i). Notably, conventional deconvolution algorithms typically sacrifice low-intensity and high-frequency details when enhancing the densely meshed microtubule image (Fig. S18).

Fig. 5
figure 5

SecMRA deconvolution assists long-term fluorescence imaging of microtubule-lysosome interaction. a Dual-color widefield time-lapse images of microtubules (red) and lysosomes (green) in a COS-7 cell, and the SecMRA deconvolution result. The imaging lasts 13 min and 20 s with 80 frames. b Resolution of the original widefield and SecMRA deconvolved lysosome and microtubule image stack. c Magnified blue boxed region in a at different time points. d Magnified orange boxed region in a at different time points. e Dual-color SIM time-lapse images of microtubules (red) and lysosomes (green) in a COS-7 cell, and the SecMRA deconvolution result. The imaging lasts 16 min and 40 s with 100 frames. f Magnified orange boxed region in e at 0 and 1,000 s time point. g The SNR at different time points is estimated by decorrelation analysis. h Magnified blue boxed region in e. i Temporal projection of the blue boxed region in d with the microtubule channel (Top), and the yellow boxed region in d with the lysosome channel (Bottom). Scale bars: 10 μm (a), 1 μm (c, f), 2 μm (d, h, i Top), 5 μm (e), 0.5 μm (i Bottom)

Subsequently, we employed a LiveSR microscope to image the interaction between mitochondria and microtubules. Due to the photosensitivity of both structures, we utilized low-intensity illumination to ensure their viability (Fig. 6a and Video S10). Nevertheless, the tubulin signal was plagued by noise in the presence of undesired ultrabright regions. With the intensity-correction mode in SecMRA, we were able to recover the tubulin signal and alleviate the undesired intensity distribution by selectively attenuating the ultrabright regions, enabling visualization of a mitochondrial fission process around a microtubule (Fig. 6b). Compared with traditional algorithms, this function allows SecMRA to enhance images more flexibly according to specific requirements (Fig. S18). Our method also facilitated the observation of two consecutive mitochondrial fission and fusion events around microtubules in the LiveSR system (Fig. S23 and Video S11).

Fig. 6
figure 6

SecMRA deconvolution assists long-term fluorescence imaging of mitochondria interaction with microtubules and ER tubules. a Dual-color LiveSR time-lapse images of mitochondria (yellow) and microtubules (magenta) in a U2OS cell, and the corresponding SecMRA deconvolution result. The imaging lasts 2 min with 120 frames. b Magnified blue boxed region in a at different time points. The white arrow denotes the mitochondrion fission site. c Dual-color widefield time-lapse images of mitochondria (orange) and ER tubules (green) in a COS-7 cell, and the SecMRA deconvolution result (Left). The imaging lasts 33 min and 20 s with 400 frames. The Fourier spectrum of widefield and SecMRA deconvolved images (Right). d Magnified blue boxed region in g at 5 s and 33 min 20 s time point (Right). e Magnified red boxed region in c, which shows a mitochondrial fission process. f Magnified purple boxed region in c. Segmentation was performed using the TWS method. g Position of the distal end of the mitochondrion displayed in f. The reference coordinate is shown in f. h MOC value between the ER tubules and mitochondrial distal end at different time points. The meaning of MOC values for specific scenarios is shown in Fig. S24. Scale bars: 2 μm (a, d), 1 μm (b, e, f), 5 μm (c)

Fluorescence imaging near the nucleus may suffer from severe background due to the dense cellular structure. Microscopes with better optical sectioning are required to accomplish such imaging, which conversely induces additional cost and toxicity. Here, we demonstrate that, by utilizing the computational sectioning and denoising abilities of SecMRA, interactions between mitochondria and ER tubules near the nucleus can be observed with great precision even using the simplest widefield microscopy. We employed low illumination intensity to support imaging for more than 30 min without severe photobleaching and phototoxicity, resulting in excessive noise and background that submerged the signal (Fig. 6c, d). SecMRA can finely recover the structural information, whereas conventional algorithms fail (Fig. S18). With the assistance of SecMRA, we observed a mitochondrial division event near the ER network (Fig. 6e and Video S12). We also observed a mitochondrion extend by ~ 5.2 μm, then gradually shrink and attach to a surrounding ER network (Fig. 6f, g). The MOC value of the mitochondrial distal end with the ER tubules finely describes the mitochondrial movement (Fig. 6h and Fig. S24). During the extension and shrinkage process, peaks and valleys of the MOC curve formed as the mitochondrion shuttled through the ER network. Then the mitochondrial distal end became bound to the ER network, resulting in a continuously high MOC value. Some deformations of the ER network briefly caused a decrease in MOC values. The mitochondrion then moved to rebind with the ER, causing the MOC value to increase again.

3 Discussion

Over the past decades, many computational technologies have emerged to improve the quality of fluorescence images and enhance imaging capabilities. However, the presence of artifacts has made achieving computational SR contentious. In this work, we demonstrate that, by reasonably constraining the across-edge contrast and along-edge continuity of fluorescence images, deconvolution via the model-solution framework attains assured computational SR across diverse modalities. In contrast, existing statistical MLE deconvolutions frequently yield artifacts rather than authentic SR information. Our assertion is substantiated by MRA's resemblance to physical-SR results across several metrics: (1) comparable resolution of intricate structures, (2) high global structural similarity, and (3) equivalent resolution-scaled errors. MRA proves highly effective for subcellular structures featuring pronounced sharp edges and along-edge continuity, allowing sparse representation within the framelet and curvelet domains. These constraints remain feasible even for images with less distinct features, such as electron microscopy images (Fig. S25), as noise generally produces a detectable response in the framelet and curvelet domain sparsity. We further designed SecMRA with a bias-thresholding mechanism to address the situation of strong multilayer emitters in various imaging modalities. Both MRA and SecMRA show good linearity in the signal portion (Fig. S26 and Table S1).

We elucidate the influence of parameters through simulations and diverse fluorescence data (Note S7, Figs. S27-32), showing that the MRA parameters mainly adjust the balance between noise control and deblurring. The relative objectivity of the parameters in our pipeline also benefits the fidelity of the deconvolution process (Fig. S33). To facilitate the dissemination of our techniques, we offer MATLAB source code for developers, along with interactive software for users. The imaging conditions and algorithm parameters are provided to ensure reproducibility (Tables S2-6). Our software automates parameter selection based on noise levels estimated through curvelet sparsity (Fig. S34), and we also provide manual parameter-tuning guidance in the user manual. We analyzed potential MRA failures and artifacts stemming from improper parameter choices or extreme conditions to facilitate the assessment of deconvolution outcomes (Note S8).

We hope that, as a fidelity-ensured deconvolution technique, MRA and SecMRA can help advance biological research. There is also ample room to combine MRA with other techniques. The impressive resolution enhancement and background inhibition of SecMRA are extraordinarily beneficial for widefield microscopy and can hopefully be extended to event-triggered microscopy setups [55,56]. Moreover, combining the algorithm with additional physical models of the fluorophores as constraints may further boost its performance [57,58,59]. While deep-learning techniques are emerging for image restoration, their generalization capability and uncertainty remain challenging, especially in the context of life-science research demanding high fidelity. For fluorescence imaging, where precision and fidelity are paramount, we believe our MRA analytical model holds unique advantages. Moreover, we believe that introducing our MRA-based prior knowledge into deep-learning models may further improve their performance, as recent works indicate the effectiveness of incorporating analytical models into deep-learning networks [14,16].

4 Methods

4.1 Fluorescence microscopes

We employed various fluorescence microscopes to test the effectiveness of the MRA and SecMRA deconvolution algorithms. The commercial Airy Polar-SIM super-resolution microscopy system (Airy Technologies Co., Ltd, China) was used to capture the widefield and SIM images displayed in Figs. 1–6. The Zeiss confocal laser scanning microscope LSM 980 with Airyscan 2 (Zeiss, Germany) was used for imaging the mitochondria displayed in Fig. 2a. The Nikon CSU-W1 SoRa spinning-disk microscopy system (Nikon, Japan) was used to capture the widefield, SD-confocal, and SoRa images displayed in Fig. 4b, c and e. A Yokogawa spinning disk equipped with a LiveSR super-resolution module (Gataca Systems, France) was employed to capture the LiveSR images displayed in Figs. 3 and 4. A Leica SP8 STED 3X microscope (Leica, Germany) was employed to capture the STED image displayed in Fig. 4. The detailed imaging parameters of each image are listed in Table S2.

4.2 Cell maintenance and preparation

COS-7 cells and U2OS cells were cultured in high-glucose DMEM (Gibco, 11995-040) supplemented with 10% fetal bovine serum (FBS, Gibco, 10099) and 1% penicillin–streptomycin antibiotics (10,000 U/mL, Gibco, 15140148), in an incubator at 37 °C with 5% CO2. For live-cell imaging experiments, cells were seeded in μ-Slide 8 Wells (ibidi, 80827). For fixed-cell imaging experiments, cells were seeded on coverslips (Thorlabs, CG15CH2). Imaging samples were prepared once the cells reached a confluency of 75%.

4.3 Fixed sample

4.3.1 Mouse kidney section sample

The specimens of phalloidin-AF568-labeled actin in mouse kidney sections are commercially available (FluoCells Prepared Slide #3, Invitrogen, F24630).

4.3.2 BPAEC cell sample

We used the commercial FluoCells Prepared Slide #1 (ThermoFisher, F36924) to test the performance of our algorithms. It contains BPAEC cells stained with MitoTracker™ Red CMXRos, Alexa Fluor™ 488 phalloidin, and DAPI.

4.3.3 Argo-SIM standard slide

To verify the resolving power and fidelity of the algorithm, commercial gradually spaced fluorescent lines (Argo-POWER SIM Slide V2, Argolight, France) were employed for SIM imaging. The sample consists of pairs of fluorescent lines (spacing from 0 to 390 nm, calibrated in the marginal-distance manner). The excitation wavelength is 488 nm.

4.3.4 GATTAquant nanorulers

To verify the resolving power and fidelity of the algorithm, a commercial nanoruler sample (GATTA-STED 70R, GATTAquant, Germany) was employed for SIM imaging. It contains calibrated fluorescent spot pairs separated by 70 nm. The excitation wavelength is 637 nm.

4.3.5 Labeling actin in fixed U2OS cells

The cells were fixed with 4% formaldehyde (R37814, Invitrogen) for 15 min at room temperature. After washing the sample with PBS, we permeabilized the cells with 0.1% Triton X-100 (Invitrogen, HFH10) for 15 min. After washing with PBS, we used Alexa Fluor 568 Phalloidin (Invitrogen, A12380) / Alexa Fluor™ 488 Phalloidin (Invitrogen, A12379) to stain the actin filaments for 1 h at room temperature. We placed coverslips in a covered container to prevent evaporation during incubation. We then washed the samples two or more times with PBS and placed the stained coverslips in a dark place to dry naturally. The coverslip was sealed with 30 µL of ProLong (Invitrogen, P36984) mounting medium and placed at 4 °C overnight to cure before observation.

4.3.6 Labeling membrane structures in fixed U2OS cell

To label all lipid membrane structures in the cell, 1 µg/mL Nile Red (Invitrogen, N1142) was added to the culture medium 30 min before imaging and remained present during imaging.

4.3.7 Centrosome sample for expansion microscopy

The centrosome sample for expansion microscopic imaging was a gift from Jingyan Fu’s laboratory. The associated sample preparation procedure and imaging method  were described previously [60].

4.4 Live-cell sample

4.4.1 Transfection of GFP-KDEL plasmid to mark endoplasmic reticulum

To label the dynamic structure of the ER in living cells, we transfected GFP-KDEL to COS-7 cells with Lipofectamine 3000 (Invitrogen, L3000) transfection reagent. The imaging was performed 36–48 h after transfection.

4.4.2 Labeling mitochondria in living cells

For widefield and SIM imaging: The COS-7 cells were labeled with PKmito RED (Cytoskeleton, CY-SC052)/PKmito DEEP RED (Cytoskeleton, CY-SC055) for 15 min in DMEM. After labeling, we washed the cells 2–3 times with fresh pre-warmed DMEM before imaging.

For LiveSR imaging: The U2OS cells were labeled with 250 nM MitoTracker Green FM (Invitrogen, M7514) 30 min before imaging.

For STED imaging: The COS-7 cells were labeled with HBmito Crimson (MCE, HY-D2346) [61] at 37°C for 10 min before imaging.

4.4.3 Labeling tubulin in living cells

For widefield and SIM imaging: We used the SiR Tubulin Kit (Cytoskeleton, CY-SC002) to label tubulin in live COS-7 cells at a concentration of 1 μM. The cells were then incubated with 5% CO2 at 37 °C for 1 h before imaging.

For LiveSR imaging: The tubulin-GFP plasmid was transfected into U2OS cells with Lipofectamine 3000 (Invitrogen, L3000) under the standard protocol.

4.4.4 Labeling lysosome in living cells

We used LysoView 488 (Biotium, 70067) to stain the lysosome in COS-7 cells for 15–30 min without washing.

4.5 Pearson correlation coefficient

Pearson correlation coefficient measures the similarity between two images:

$$R = \frac{{{\text{E}}\Big[ {\left( {f - \mu_{f} } \right)\left( {g - \mu_{g} } \right)} \Big]}}{{\sigma_{f} {\sigma_{g}} }}$$

where f and g are the two images to be compared, μf and μg are their average values, and σf and σg are their standard deviations.

In this work, we used Pearson correlation to measure the correlation of the raw image with the noisy image and MRA deconvolved image in the simulation presented in Fig. 1. Moreover, we also used the Pearson correlation to quantify the correlation between two organelles.
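
A minimal implementation of this measure (equivalent to np.corrcoef on the flattened images) is:

```python
import numpy as np

def pearson_correlation(f, g):
    """Pearson correlation coefficient between two images."""
    f = f.astype(np.float64).ravel()
    g = g.astype(np.float64).ravel()
    return np.mean((f - f.mean()) * (g - g.mean())) / (f.std() * g.std())
```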

4.6 Mander’s overlap coefficient

Mander’s overlap coefficient (MOC) is an index that measures the overlap of two organelles. Compared with Pearson correlation, MOC has better interpretability and focuses on absolute colocalization. MOC is calculated as follows:

$$MOC\, = \,\frac{{\sum\limits_{i} {g_{i} \cdot Mask_{i} } }}{{\sum\limits_{i} {g_{i} } }}$$

where g is the gray value of one organelle, and Mask is the binarized image of the other organelle.

For the calculation of the MOC between the mitochondrial distal end and the ER tubules shown in Fig. 6h, we used the TWS machine-learning tool to segment the ER tubules, which generates Mask. The distal-end region (15 pixels) was then used to calculate the MOC value.
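
Assuming the ER mask has already been produced (e.g., by the TWS segmentation mentioned above) and the distal-end pixels selected, the MOC of one channel against a binary mask can be computed as:

```python
import numpy as np

def manders_overlap(g, mask):
    """Mander's overlap coefficient: fraction of channel-g intensity that
    falls inside the binary mask of the other structure."""
    g = g.astype(np.float64)
    mask = mask.astype(bool)
    return g[mask].sum() / g.sum()
```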

4.7 Image decorrelation analysis

We used image decorrelation analysis [49] to estimate the image resolution and to evaluate the image SNR from the frequency-domain perspective. After standard edge apodization that mitigates high-frequency artifacts, it calculates the cross-correlation between the image spectrum and its normalized spectrum. This process is then repeated with the normalized spectrum additionally filtered by a binary mask. The decorrelation curve is expressed as follows:

$$d\left( r \right) = \frac{{\displaystyle\int {{\text{Re}} \left\{ {I\left( {\vec{k}} \right)I_{n}^{*} \left( {\vec{k}} \right)M\left( {\vec{k};r} \right)} \right\}dk_{x} dk_{y} } }}{{\sqrt {\displaystyle\int {\left| {I\left( {\vec{k}} \right)} \right|^{2} dk_{x} dk_{y} \int {\left| {I_{n} \left( {\vec{k}} \right)M\left( {\vec{k};r} \right)} \right|^{2} dk_{x} dk_{y} } } } }}$$

where \(\vec{k} = [k_x, k_y]\) is the frequency-domain coordinate, \(I(\vec{k})\) and \(I_n(\vec{k})\) are the image Fourier spectrum and its normalized spectrum, respectively, and \(M(\vec{k};r)\) is the binary mask with a radius of r.

The normalized Fourier spectrum balances the contributions of signal and noise, which is crucial for distinguishing them. Because most information is recorded in the low-frequency region due to the band-limited nature of the optical system, the decorrelation curve first increases with the radius r until most of the signal is included, and then decreases with increasing radius because the noise contribution becomes larger. The peak of the decorrelation curve, A0 (0–1), reflects the SNR and is used to evaluate the image SNR from the Fourier-domain perspective.
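
The full decorrelation analysis (with high-pass pre-filtering and local-maximum tracking) is described in [49]; the bare-bones sketch below computes only the decorrelation curve d(r) defined above, from which A0 can be read off as the curve maximum.

```python
import numpy as np

def decorrelation_curve(img, n_radii=50):
    """Bare-bones decorrelation curve d(r): cross-correlation between the image
    spectrum and its normalized spectrum inside a binary mask of radius r."""
    img = img.astype(np.float64) - img.mean()
    I = np.fft.fftshift(np.fft.fft2(img))
    In = I / (np.abs(I) + 1e-12)                  # magnitude-normalized spectrum
    ky, kx = np.indices(I.shape)
    ky = ky - I.shape[0] // 2
    kx = kx - I.shape[1] // 2
    rho = np.sqrt((ky / (I.shape[0] / 2)) ** 2 + (kx / (I.shape[1] / 2)) ** 2)

    denom_I = np.sqrt(np.sum(np.abs(I) ** 2))
    radii = np.linspace(0.02, 1.0, n_radii)
    d = np.empty(n_radii)
    for i, r in enumerate(radii):
        M = rho <= r                              # binary low-pass mask of radius r
        num = np.sum(np.real(I * np.conj(In) * M))
        denom = denom_I * np.sqrt(np.sum(np.abs(In * M) ** 2))
        d[i] = num / (denom + 1e-12)
    return radii, d                               # A0 is approximately max(d)
```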

4.8 NanoJ-SQUIRREL resolution scaled error

The SQUIRREL algorithm [50] was employed to evaluate the resolution-scaled error of a resolution-enhanced image f based on a low-resolution image g and a resolution-scaling function (RSF). It starts by correcting the lateral mismatch between the two images, then finds the optimal parameters to convolve the resolution-enhanced image f back to g:

$$\mathop {\arg \min }\limits_{\alpha ,\beta ,\sigma } \left\| {g - f_{\gamma } \left( {\alpha ,\beta } \right) \otimes I_{RSF} \left( \sigma \right)} \right\|_{2}$$
$$RSF = \frac{1}{{2\pi \sigma^{2} }}\exp \left( {\frac{{ - \left( {x^{2} + y^{2} } \right)}}{{2\sigma^{2} }}} \right)$$
$$I_{RSF} \left( {x,y} \right) = \Delta E_{x} \left( {x,y} \right)\Delta E_{y} \left( {x,y} \right)$$
$$\Delta E_{x} \left( {x,y} \right) \equiv \frac{1}{2}erf\left( {\frac{x + 0.5}{{\sqrt 2 \sigma }}} \right) - \frac{1}{2}erf\left( {\frac{x - 0.5}{{\sqrt 2 \sigma }}} \right)$$
$$\Delta E_{y} \left( {x,y} \right) \equiv \frac{1}{2}erf\left( {\frac{y + 0.5}{{\sqrt 2 \sigma }}} \right) - \frac{1}{2}erf\left( {\frac{y - 0.5}{{\sqrt 2 \sigma }}} \right)$$
$$f_{\gamma } \left( {\alpha ,\beta } \right) = \beta + \alpha f$$

Then f is rescaled and convolved back using the optimal parameters, \(f_{RS} = (\alpha f + \beta) \otimes I_{RSF}(\sigma)\), which is used to calculate the resolution-scaled error (RSE) and the error map:

$$RSE = \sqrt {\frac{{\sum {\left( {g - f_{RS} } \right)^{2} } }}{n}}$$
$$M = \left| {g - f_{RS} } \right|$$
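
Assuming the lateral registration has already been handled, the joint optimization over α, β, and σ and the resulting RSE and error map can be sketched as follows (a plain Gaussian RSF is used here in place of the integrated error-function kernel above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def resolution_scaled_error(g, f):
    """Fit alpha, beta, sigma so that (alpha*f + beta) blurred by a Gaussian RSF
    best matches the low-resolution reference g, then return the RSE and error map."""
    g = g.astype(np.float64)
    f = f.astype(np.float64)

    def cost(params):
        alpha, beta, sigma = params
        f_rs = gaussian_filter(alpha * f + beta, abs(sigma))
        return np.sum((g - f_rs) ** 2)

    x0 = np.array([g.std() / (f.std() + 1e-12), g.mean() - f.mean(), 2.0])
    res = minimize(cost, x0, method="Nelder-Mead")
    alpha, beta, sigma = res.x
    f_rs = gaussian_filter(alpha * f + beta, abs(sigma))
    rse = np.sqrt(np.mean((g - f_rs) ** 2))
    error_map = np.abs(g - f_rs)
    return rse, error_map
```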

4.9 SSIM

We used SSIM to measure the similarity between the convolved back image and the original image to evaluate the fidelity of the deconvolution algorithm as a supplement to the NanoJ-SQUIRREL analysis. The SSIM between two images is calculated as follows:

$$SSIM = \frac{{\left( {2\mu_{f} \mu_{g} + c_{1} } \right)\left( {2\sigma_{fg} + c_{2} } \right)}}{{\left( {\mu_{f}^{2} + \mu_{g}^{2} + c_{1} } \right)\left( {\sigma_{f}^{2} + \sigma_{g}^{2} + c_{2} } \right)}}$$

where f and g are the two images to be compared, μf and μg are their average values, σf and σg are their standard deviations, σfg is the covariance of f and g, and c1 and c2 are two constants to stabilize the result (c1 = (k1L)2, c2 = (k2L)2, where L is the dynamic range of the pixel values, k1 = 0.01, k2 = 0.03).
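
The global form of this index, as written above, can be computed directly (in practice the windowed implementation in skimage.metrics.structural_similarity may be preferred):

```python
import numpy as np

def global_ssim(f, g, k1=0.01, k2=0.03):
    """Global SSIM between two images, following the formula above."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    L = max(f.max(), g.max()) - min(f.min(), g.min())   # dynamic range of pixel values
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_f, mu_g = f.mean(), g.mean()
    var_f, var_g = f.var(), g.var()
    cov_fg = np.mean((f - mu_f) * (g - mu_g))
    return ((2 * mu_f * mu_g + c1) * (2 * cov_fg + c2)) / \
           ((mu_f ** 2 + mu_g ** 2 + c1) * (var_f + var_g + c2))
```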

4.10 Calculation of fidelity penalty

The fidelity penalty is computed simply by summing the squared differences between the convolved-back image and the original image. When comparing the penalty values of images obtained by different algorithms, a linear intensity transformation should be considered, since the output results are normalized. Therefore, we searched for the minimal fidelity penalty while allowing a scaling factor to multiply the deconvolved images, which yields the results shown in Fig. 2d.
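
Because the scaling factor enters linearly, its optimum has a closed-form least-squares solution; a small sketch, assuming a Gaussian blur kernel, is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fidelity_penalty(f, x, psf_sigma):
    """Minimal fidelity penalty ||f - s*A(x)||^2 over the scaling factor s,
    accounting for the normalization of the deconvolved image x."""
    Ax = gaussian_filter(x.astype(np.float64), psf_sigma)
    s = np.sum(Ax * f) / np.sum(Ax * Ax)    # closed-form least-squares scaling
    return np.sum((f - s * Ax) ** 2)
```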

4.11 SNR, PSNR, and SBR estimation

In simulation, we calculate the SNR and PSNR metrics using GT as reference:

$$SNR = 10\lg \frac{{\sum\nolimits_{i} {\sum\nolimits_{j} {f\left( {i,j} \right)^{2} } } }}{{\sum\nolimits_{i} {\sum\nolimits_{j} {\Big[ {f\left( {i,j} \right) - g\left( {i,j} \right)} \Big]^{2} } } }}$$
$$PSNR = 10\lg \frac{{\left( {2^{i} - 1} \right)^{2} }}{{\sum\nolimits_{i} {\sum\nolimits_{j} {\Big[ {f\left( {i,j} \right) - g\left( {i,j} \right)} \Big]^{2} } } }}$$

where f denotes the GT image, g denotes the degraded image, and i is the bit depth of the image.

In the simulation shown in Fig. 1d–h, to give a practical indication of the noise level that MRA can handle, we segmented the signal portion of the image and used its average value as the numerator of the SNR fraction.
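
With a GT reference, these two metrics reduce to a few lines; the sketch below follows the formulas above (note the sum, rather than the mean, in the denominators), with the bit depth passed explicitly:

```python
import numpy as np

def snr_psnr_with_gt(gt, degraded, bit_depth=16):
    """SNR and PSNR (dB) of a degraded image against its ground truth,
    following the formulas above."""
    gt = gt.astype(np.float64)
    degraded = degraded.astype(np.float64)
    err_sum = np.sum((gt - degraded) ** 2)
    snr = 10 * np.log10(np.sum(gt ** 2) / err_sum)
    psnr = 10 * np.log10((2 ** bit_depth - 1) ** 2 / err_sum)
    return snr, psnr
```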

Considering that there is no GT image in practical imaging, we calculate the image SNR (dB) using the following commonly used formula:

$$SNR = 10 \cdot \log_{10} \left( {\frac{{I_{signal} - b}}{{I_{noise} }}} \right)$$

where Isignal denotes the intensity of the signal, Inoise denotes the intensity of noise, and b denotes the background intensity.

We estimate Isignal by calculating the average intensity in a selected region which is taken as the signal, and estimate Inoise and b by calculating the intensity standard deviation and average value in a selected non-signal region.

The PSNR is calculated as follows:

$$PSNR = 10 \cdot \log_{10} \left( {\frac{{2^{i} - b - 1}}{{I_{noise} }}} \right)$$

where i is the bit depth of the image.

A similar approach is used to calculate SBR:

$$SBR = 10 \cdot \log_{10} \left( {\frac{{I_{signal} }}{b}} \right)$$
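
Given user-selected signal and non-signal regions (here assumed to be provided as boolean masks), the reference-free SNR, PSNR, and SBR defined above can be estimated as:

```python
import numpy as np

def snr_psnr_sbr(img, signal_mask, background_mask, bit_depth=16):
    """Reference-free SNR, PSNR, and SBR (dB) from user-selected regions."""
    img = img.astype(np.float64)
    i_signal = img[signal_mask].mean()       # mean intensity of the signal region
    b = img[background_mask].mean()          # background level
    i_noise = img[background_mask].std()     # noise level from the non-signal region
    snr = 10 * np.log10((i_signal - b) / i_noise)
    psnr = 10 * np.log10((2 ** bit_depth - b - 1) / i_noise)
    sbr = 10 * np.log10(i_signal / b)
    return snr, psnr, sbr
```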