South Korea-based medical AI software developer Lunit has announced positive study results for its AI-powered mammography analysis solution, dubbed Lunit INSIGHT MMG.
The company’s announcement is based on data from a collaborative study with Yan Chen, professor of digital screening at the University of Nottingham, UK.
The study compared Lunit’s solution with assessments by 552 human readers and showed that Lunit INSIGHT MMG matched the diagnostic performance of the human readers.
Lunit CEO Brandon Suh said: “This first study to apply the Personal Performance in Mammographic Screening (PERFORMS) scheme to AI algorithms marks a remarkable achievement, showcasing our AI’s ability to match human performance in detecting breast cancer.
“It offers hope to patients worldwide and underscores AI’s potential to enhance cancer detection and treatment outcomes.
“This follows our recent study in The Lancet Digital Health, validating Lunit INSIGHT MMG as a game-changing alternative in breast cancer screening. We’re committed to leveraging AI to transform healthcare and save lives.”
The three-year retrospective study evaluated two PERFORMS test sets, each consisting of 60 challenging cases from the National Health Service Breast Screening Programme (NHSBSP).
Human readers assessed the cases between May 2018 and March 2021, while Lunit’s AI-powered mammography analysis solution evaluated them in 2022.
Lunit INSIGHT MMG evaluated each breast individually and assigned a suspicion of malignancy score to the detected features.
In the study, Lunit’s AI-powered mammography analysis solution showed no significant difference in performance compared with the human readers.
The medical software company added that Lunit INSIGHT MMG’s sensitivity and specificity were either no different from, or superior to, those of the human readers.
Lunit concluded that its AI-powered mammography analysis solution performed at a level equivalent to that of an experienced radiologist in evaluating cases from two enriched test sets.
Professor Chen said: “No other studies to date have compared the performance of such a large number of human readers on routine quality assurance test sets with AI, so this study may provide a model for assessing AI performance in a real-world setting.
“The results of this study provide strong supporting evidence that AI for breast cancer screening can perform as well as human readers.”