Inter-vendor variability in strain measurements depends on software rather than image characteristics


ÜNLÜ S., Mirea O., Bezy S., Duchenne J., Pagourelias E. D., Bogaert J., et al.

INTERNATIONAL JOURNAL OF CARDIOVASCULAR IMAGING, 2021 (Journal Indexed in SCI)

  • Volume:
  • Publication Date: 2021
  • DOI: 10.1007/s10554-020-02155-2
  • Journal Name: INTERNATIONAL JOURNAL OF CARDIOVASCULAR IMAGING

Abstract

Despite standardization efforts, vendors still use proprietary software algorithms for echocardiographic strain measurements, which results in high inter-vendor variability. Using vendor-independent software could be one solution. Little is known, however, about how vendor-specific image characteristics influence the tracking results of such software. We therefore investigated the reproducibility, accuracy, and scar detection ability of strain measurements performed on images from different vendors with a vendor-independent software package. The vendor-independent software (TomTec Image Arena) was used to analyse datasets of 63 patients obtained on machines from four ultrasound vendors (GE, Philips, Siemens, Toshiba). We measured tracking feasibility, inter-vendor bias, relative test-retest variability, and the scar discrimination ability of strain measurements. Cardiac magnetic resonance delayed-enhancement images served as the reference standard for scar definition. Tracking feasibility differed significantly among vendor datasets (p < 0.001). Variability of global longitudinal strain (GLS) measurements was similar among vendors, whereas variability of segmental longitudinal strain (SLS) showed a modest difference. Relative test-retest variability of GLS and SLS showed no relevant differences. No significant difference in scar detection capability was observed. Average GLS and SLS values were similar among vendors. Reproducibility of GLS measurements showed no difference among vendors and was within an acceptable range. SLS variability was high but similar for all vendors. No relevant difference was found for identifying regional dysfunction. Tracking feasibility showed a substantial difference among images from different vendors. Our findings demonstrate that tracking results depend mainly on the software used and are influenced little by vendor-specific image characteristics.