Journal of Supercomputing, vol. 81, no. 13, 2025 (SCI-Expanded)
Image denoising is a fundamental challenge in image restoration and is crucial for medical imaging, photography, and remote sensing applications. However, current deep learning methods often struggle with multi-scale feature extraction, generalization to diverse noise types, and computational efficiency. To address these issues, we propose SERDNet, a dual-branch architecture that integrates Residual Dense Blocks, Attention-Based Feature Fusion modules, and a Multi-Scale U-Net Encoder–Decoder with Squeeze-and-Excitation attention. The SERD branch enhances local features via hierarchical dense connectivity, while the Encoder–Decoder branch captures global context with adaptive feature fusion and skip connections. Extensive experiments on standard benchmarks, including BSD68, Set12, CBSD68, Kodak24, McMaster, CC, and SIDD, confirm that SERDNet consistently outperforms recent deep models, particularly in real-noise scenarios. It achieves a PSNR of up to 33.18 dB on BSD68 at σ=15, 39.32 dB on SIDD, and 35.60 dB on the real-noise CC dataset. Moreover, the model exhibits strong generalization to challenging non-Gaussian noise, including Poisson and salt-and-pepper noise, where it achieves visually and quantitatively competitive results. With a competitive inference time of 0.2126 s per image, SERDNet offers a robust, efficient solution for both blind and non-blind denoising across a wide range of real-world and synthetic noise conditions. In addition, SERDNet's dual-path design supports efficient parallelization on GPU and multi-core HPC platforms, making it well-suited for high-throughput and real-time denoising in compute-intensive environments.
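As a rough illustration of the dual-branch idea described above, the PyTorch sketch below pairs a residual-dense local branch with a downsample/upsample global branch gated by squeeze-and-excitation attention, and fuses the two streams by concatenation before predicting the noise residual. All layer widths, depths, branch structures, and the fusion rule here are illustrative assumptions, not the published SERDNet configuration.

```python
# Minimal sketch of a dual-branch denoiser in the spirit of SERDNet.
# Widths, depths, and the fusion rule are illustrative assumptions only.
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel attention: global average pool -> bottleneck MLP -> sigmoid gate."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # reweight channels adaptively


class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions followed by local feature fusion and a residual add."""
    def __init__(self, channels, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1) for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))


class DualBranchDenoiser(nn.Module):
    """Local residual-dense branch + global encoder-decoder-style branch, fused by a 1x1 conv."""
    def __init__(self, in_ch=3, width=64):
        super().__init__()
        self.head = nn.Conv2d(in_ch, width, 3, padding=1)
        # Local-detail branch: hierarchical dense connectivity.
        self.local_branch = nn.Sequential(
            ResidualDenseBlock(width), ResidualDenseBlock(width)
        )
        # Global-context branch: downsample, SE-gated processing, upsample.
        self.global_branch = nn.Sequential(
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), SqueezeExcitation(width), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )
        self.fusion = nn.Conv2d(2 * width, width, 1)  # simple concatenation-based fusion
        self.tail = nn.Conv2d(width, in_ch, 3, padding=1)

    def forward(self, x):
        f = self.head(x)
        fused = self.fusion(torch.cat([self.local_branch(f), self.global_branch(f)], dim=1))
        return x - self.tail(fused)  # predict and subtract the noise residual


if __name__ == "__main__":
    # Quick shape check on a random noisy image (even spatial size assumed).
    model = DualBranchDenoiser()
    noisy = torch.randn(1, 3, 64, 64)
    print(model(noisy).shape)  # torch.Size([1, 3, 64, 64])
```

In a design of this kind, the residual-dense branch preserves fine local texture while the strided branch supplies the wider receptive field that the full model obtains from its multi-scale U-Net encoder-decoder, and the two independent paths can be executed in parallel on GPU or multi-core hardware.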