
Robustness analysis of denoising neural networks for bone scintigraphy


Akos Kovacs et al., Nuclear Inst. and Methods in Physics Research, A, 2022


A comparison of neural network (NN) based noise filters developed for planar bone scintigraphy.

Images taken with a gamma camera typically have a low signal-to-noise ratio and are subject to significant Poisson noise. In our work, we designed a neural-network-based noise filter that can be applied to planar bone scintigraphy recordings at multiple noise levels, instead of developing a separate network for each noise level.
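As an illustrative sketch (not the paper's code), the Poisson counting noise described above can be simulated with NumPy; the flat 20-counts-per-pixel activity map is a hypothetical example value:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical "clean" activity map (expected counts per pixel); in a real
# scan each detected pixel value is a Poisson-distributed count.
clean = np.full((64, 64), 20.0)  # mean of 20 counts per pixel

# Simulate a planar measurement: pixel counts follow Poisson(lambda = clean)
noisy = rng.poisson(clean)

# For Poisson statistics the relative noise grows as counts drop:
# per-pixel SNR = mean / std = lambda / sqrt(lambda) = sqrt(lambda)
snr = clean.mean() / np.sqrt(clean.mean())
print(f"theoretical per-pixel SNR at 20 counts: {snr:.2f}")
```

This is why low-count (short or low-activity) scans are the noisiest: halving the counts divides the per-pixel SNR by sqrt(2).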

The proposed denoising solution is a convolutional neural network (CNN) inspired by the U-Net architecture. A total of 1215 pairs of anterior and posterior patient images were available for training and evaluation. The noise-filtering network was trained on bone scintigraphy recordings with real statistics, acquired according to the standard protocol, without any noise-free recordings. The resulting solution proved to be robust to the noise level of the images within the examined limits.

During the evaluation, we compared the performance of the networks to Gaussian and median filters and to the block-matching and 3D filtering (BM3D) filter. The evaluation method presented in this article does not require noiseless images, and we measured the performance and robustness of our solution on specialized validation sets.
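The two classical baselines are readily available in SciPy; the sketch below (with an invented smooth "lesion" phantom, not the paper's data, and omitting BM3D, which needs an external package) shows how such a comparison against a known clean image can be set up:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(seed=1)

# Toy phantom: a smooth bright "lesion" bump on a flat background
y, x = np.mgrid[0:64, 0:64]
clean = 10.0 + 50.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))
noisy = rng.poisson(clean).astype(float)

# The two classical baseline filters used in the comparison
smoothed_gauss = gaussian_filter(noisy, sigma=1.5)
smoothed_median = median_filter(noisy, size=3)

def mse(a, b):
    """Mean squared error against the known clean phantom."""
    return float(np.mean((a - b) ** 2))

print("MSE noisy :", mse(noisy, clean))
print("MSE gauss :", mse(smoothed_gauss, clean))
print("MSE median:", mse(smoothed_median, clean))
```

With real measurements no such clean reference exists, which is exactly the gap the evaluation pipeline below is designed to close.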

We showed that particularly high signal-to-noise ratios can be achieved with noise-filtering neural networks (NNs), which are more robust than the traditional methods and can aid diagnosis, especially for images with high noise content.


The evaluation method described in this section provides a way to qualify the trained networks.

Fig. 5. Evaluation pipeline: We start from the real measurements acquired by the scanner (1). The second step is to create a noise-free image (2) with a reference enhancement solution (a), which in our case was a neural-network-based denoiser. This ideal image is then examined by physicians for any unusual structure, accumulation or artifact. From the noiseless ideal image we generate a synthetic measurement (3) by adding Poisson noise (b), which is verified (c) by statistical tests. The next step is to construct recordings with worse statistics (4) using Poisson thinning (d). Finally, these images are fed to the various filtering tools (e), whose results (5) are compared (f) to the ideal images (2).
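The Poisson thinning step (d) rests on a standard property of Poisson counts: keeping each detected event independently with probability p turns Poisson(λ) counts into exact Poisson(p·λ) counts, so lower-statistics images can be derived without re-scanning the patient. A minimal sketch with hypothetical count levels:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Synthetic measurement with "normal" statistics (hypothetical 50 counts/pixel)
lam = 50.0
counts = rng.poisson(lam, size=(128, 128))

# Poisson thinning via binomial sampling: each of the `counts` events in a
# pixel survives independently with probability p, giving Poisson(p * lam)
p = 0.25
thinned = rng.binomial(counts, p)

print("mean before thinning:", counts.mean())
print("mean after  thinning:", thinned.mean())
```

Because the thinned image has genuine Poisson statistics at the lower count level, it stands in for a real short or low-dose acquisition.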

The process of choosing the best model is the following:

  • Training of multiple neural networks with our training strategy based on binomial sampling
  • Best model selection based on validation loss
  • Creation of the evaluation framework
  • Evaluation of the models with the framework

Once the evaluation framework is established, we recommend using the scores of our evaluation method instead of the validation loss when training new neural networks.


At a late stage of development, we selected a neural network that we judged to perform sufficiently well on measurements with low noise content. With this network, we created noise-filtered images from the evaluation dataset (544 measurements), which were then examined by physicians for any unusual structure, accumulation or artifact compared to the original unfiltered images. In our evaluation process, we treated these images as noise-free, expected ideal images, from which we generated images with normal statistics by adding Poisson noise. These normal-statistics images served as input to our solutions, and we also used them to produce lower-quality images by binomial sampling. The whole pipeline and examples of the images it produces are shown in Figs. 5 and 6. Using this process, we can also measure the peak signal-to-noise ratio (PSNR) of the filtered images, which is shown in the figure examples.
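Once the ideal image is available, PSNR can be computed directly against it. A self-contained sketch of the standard definition (the flat 100-count reference below is an invented toy example, not from the dataset):

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB of `test` against the noise-free
    `reference` (the ideal image in the pipeline of Fig. 5)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    if peak is None:
        peak = reference.max()  # use the reference dynamic range as peak
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a Poisson-noisy copy of a flat reference image
rng = np.random.default_rng(seed=3)
ideal = np.full((64, 64), 100.0)
noisy = rng.poisson(ideal).astype(float)
print(f"PSNR of noisy vs. ideal: {psnr(ideal, noisy):.1f} dB")
```

A better filter leaves a smaller mean squared error against the ideal image and therefore a higher PSNR, which is how the filter outputs in Fig. 6 can be ranked.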

Fig. 6. Evaluation pipeline: We start from the measurement (a), from which we create a noise-free image with a reference filter (b). This image is then reviewed by physicians and taken as a benchmark. From it we generate an artificially degraded noisy image (c). Images (d), (e), (f), (g) and (h) show the results of the different filters. Since we have the noise-free reference image, we can correctly compute the error of each method using the metrics.


We have demonstrated that it is possible to train a neural network that performs well over a wide range of noise levels and outperforms previous non-neural-network-based tools such as the Gaussian filter, the median filter and BM3D. Noise-filtered images may allow a reduction in the injected activity and the measurement time, and may also improve the accuracy, speed and reliability of diagnosis, but this must be supported by clinical trials. Such a noise-filtering solution can also be used to improve the image quality of fast, localization preview scans. The evaluation method presented here can be applied and generalized in all cases where noise-free measurements are not available.

Full article in Nuclear Inst. and Methods in Physics Research, A
