Applied Sciences (Switzerland), vol. 16, no. 8, 2026 (SCI-Expanded, Scopus)
Obtaining multiple MRI contrasts for each patient prolongs scan acquisition time, increases healthcare costs, and may not always be feasible due to patient-specific constraints. Deep learning-based MRI contrast synthesis offers a potential solution, yet most existing approaches are evaluated on preprocessed public benchmarks that do not reflect real-world clinical variability. In this study, we propose a fusion U-Net transformer framework for bidirectional T1-weighted ↔ T2-weighted brain MRI synthesis, trained and evaluated exclusively on retrospectively acquired clinical data. The proposed architecture integrates multiscale convolutional feature extraction with axial attention mechanisms and a transformer bottleneck for efficient global context modeling. A fusion refinement block is incorporated to mitigate skip-connection artifacts. An adversarial training strategy with the least squares GAN (LSGAN) objective and a hybrid loss combining L1 reconstruction and structural similarity (SSIM) terms is employed to promote both pixel-level accuracy and perceptual fidelity. The model is evaluated using SSIM and PSNR metrics alongside qualitative expert assessment by two board-certified radiologists. In both synthesis directions, the framework achieves competitive quantitative performance against baseline models under the challenging conditions of clinical data. Expert evaluation confirms high anatomical fidelity and clinically acceptable image quality in both directions. These results indicate that the proposed framework is a promising approach for multi-contrast MRI synthesis in clinically heterogeneous data environments.
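The hybrid reconstruction loss described above (L1 plus an SSIM term) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it uses a simplified global SSIM computed over the whole image rather than the usual sliding-window formulation, and the weighting factor `alpha` is a hypothetical parameter introduced here for illustration only.

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error between the synthesized and reference images.
    return np.mean(np.abs(pred - target))

def ssim_global(pred, target, data_range=1.0):
    # Simplified SSIM over a single global window (an illustrative
    # approximation of the windowed SSIM typically used in practice).
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov_xy = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x**2 + mu_y**2 + C1) * (var_x + var_y + C2))

def hybrid_loss(pred, target, alpha=0.84):
    # Weighted sum of an SSIM dissimilarity term and the L1 term;
    # alpha is a hypothetical weight, not taken from the paper.
    return alpha * (1.0 - ssim_global(pred, target)) + (1.0 - alpha) * l1_loss(pred, target)
```

For identical inputs the SSIM term equals one and the L1 term vanishes, so the hybrid loss is zero; any pixel-level or structural discrepancy increases it, which is the property the combined objective exploits to balance pixel accuracy and perceptual fidelity.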