
The Role of Artificial Intelligence in Lung Cancer Diagnosis and Screening

Lung cancer

Lung cancer is the leading cause of cancer-related death worldwide and the second most common cancer after breast cancer (Sung et al., 2021). Smoking remains the main risk factor for the disease, with roughly three quarters of patients being current or former smokers (Siegel et al., 2021). Other risk factors include exposure to second-hand smoke, environmental pollutants, radon, and occupational hazards such as asbestos (Malhotra et al., 2016).

[Figure: Estimated cancer deaths in the United States in 2022; panel title: Women]

There are two main subtypes of lung cancer: non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC) (Thai et al., 2021). NSCLC is the more common, accounting for around 85% of all cases, and includes adenocarcinoma, squamous cell carcinoma, and large cell carcinoma (Thai et al., 2021). SCLC is the more aggressive form, often associated with rapid tumour growth and early metastasis (Thai et al., 2021).

The clinical features of lung cancer are often non-specific in the early stages, which contributes to delays in diagnosis (Hamilton et al., 2005). Common symptoms include a persistent cough, chest pain, shortness of breath, and coughing up blood (Hamilton et al., 2005). The diagnosis of lung cancer involves imaging studies and tissue biopsies (open, bronchoscopic, or image-guided), which are essential to confirm the diagnosis and histological subtype and to guide treatment decisions (Detterbeck et al., 2013).

Treatment of lung cancer depends on the stage at diagnosis and on the histological subtype (Detterbeck et al., 2013). Surgery, radiotherapy, and chemotherapy are the main treatment modalities (Detterbeck et al., 2013; Thai et al., 2021). Early-stage NSCLC can be treated by surgical resection of the tumour, whereas advanced cases often require a combination of chemotherapy and radiotherapy (Detterbeck et al., 2013; Thai et al., 2021). Because of its aggressive nature, SCLC is usually treated with chemotherapy, sometimes combined with radiotherapy (Detterbeck et al., 2013; Thai et al., 2021).

Immunotherapy has emerged as a promising option for the treatment of lung cancer, particularly NSCLC (Thai et al., 2021; C. Wang et al., 2021). Drugs targeting immune checkpoints have proven effective in improving survival for some patients (Thai et al., 2021; C. Wang et al., 2021).

Targeted therapies directed at specific genetic mutations or alterations have also been developed for subsets of lung cancer patients, offering more personalised and more effective treatment options (Thai et al., 2021). A multidisciplinary approach involving oncologists, surgeons, radiologists, and other healthcare professionals is essential to tailor treatment plans to the needs of individual patients (Detterbeck et al., 2013).

Screening strategies

Early clinical trials of lung cancer screening used sputum analysis and conventional chest radiography and found no association between screening and lower mortality (Marcus et al., 2000). Later trials showed that, owing to its superior ability to detect non-calcified nodules that potentially represent early-stage cancer (Henschke et al., 1999), screening with low-dose computed tomography (CT) reduces lung cancer mortality by 20% (National Lung Screening Trial Research Team et al., 2011) to 24% (de Koning et al., 2020). Although low-dose CT is more expensive and involves a higher radiation dose than conventional radiography, the risks associated with radiation exposure are very low (Sands et al., 2021), and large studies have shown such screening strategies to be cost-effective (Black et al., 2014; Toumazis et al., 2021).

Lung cancer screening is most effective when it targets people at high risk of developing the disease. Current national screening guidelines in many countries largely identify these individuals using criteria derived from the early lung cancer screening trials, notably age and pack-year smoking history. Researchers in other countries have developed mathematical models that estimate lung cancer risk to determine screening eligibility, incorporating additional variables such as ethnicity, personal and family history of lung cancer, and body mass index (Cassidy et al., 2008; Field et al., 2016; Tammemägi et al., 2013).
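
Risk models of this kind are typically logistic regressions over a small set of clinical variables. Purely as an illustration (the coefficients below are invented and do not correspond to any published model such as the LLP or PLCOm2012), a minimal sketch in Python might look like this:

```python
import math

# Hypothetical coefficients, for illustration only -- NOT the parameters of
# any published lung cancer risk model.
COEFFS = {
    "intercept": -7.0,
    "age_years": 0.06,
    "pack_years": 0.02,
    "family_history": 0.8,   # 1 if a first-degree relative had lung cancer, else 0
    "body_mass_index": -0.03,
}

def predicted_risk(age_years, pack_years, family_history, body_mass_index):
    """Return a probability of developing lung cancer over a fixed horizon."""
    z = (COEFFS["intercept"]
         + COEFFS["age_years"] * age_years
         + COEFFS["pack_years"] * pack_years
         + COEFFS["family_history"] * family_history
         + COEFFS["body_mass_index"] * body_mass_index)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# Screening eligibility could then be granted above a risk threshold,
# e.g. the 1.3% six-year risk mentioned later in this article.
risk = predicted_risk(age_years=65, pack_years=40, family_history=1, body_mass_index=27)
print(f"predicted risk: {risk:.1%}, eligible: {risk >= 0.013}")
```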

 


 

Challenges of screening

Despite strong evidence for the effectiveness of lung cancer screening, its practical implementation faces several challenges. Engagement with and adherence to lung cancer screening programmes are low, with participation rates of only 7-14% in the United States (J. Li et al., 2018; Zahnd & Eberth, 2019) and low rates in the United Kingdom, particularly among those at highest risk of lung cancer (Ali et al., 2015). As more countries establish national lung cancer screening programmes, the already substantial global shortage of radiologists (AAMC Report Reinforces Mounting Physician Shortage, 2021; Smieliauskas et al., 2014; The Royal College of Radiologists, 2022) risks complicating lung cancer screening further, because fewer trained radiologists will be available to interpret a growing number of CT examinations.

This is particularly problematic because reporting lung cancer screening examinations requires specific expertise and training (LCS Project, n.d.). In addition, higher workloads are generally associated with more errors by radiologists (Hanna et al., 2018), which can lead to poorer screening outcomes. There is also no consensus on how to manage incidental findings identified during lung cancer screening. Such findings lead to further diagnostic work-up in up to 15% of screened patients (Morgan et al., 2017) and are associated with increased patient anxiety and costs to the healthcare system (Adams et al., 2016).

The role of artificial intelligence

Identifying high-risk individuals

In the United States, the lung cancer screening eligibility criteria issued by the Centers for Medicare & Medicaid Services (CMS) miss more than half of lung cancer cases (Y. Wang et al., 2015). These criteria, which include only smoking history and age, provide suboptimal risk prediction because they ignore other important risk factors (Burzic et al., 2022) and are often based on inaccurate or unavailable data (Kinsinger et al., 2017). This is particularly important given that roughly a quarter of all lung cancer cases are not attributable to smoking (Sun et al., 2007).

For this reason, studies have investigated the use of artificial intelligence (AI) to improve lung cancer risk prediction and include more people in screening programmes. Combining information from electronic medical records and conventional chest radiographs using a convolutional neural network (CNN) proved more effective than existing screening criteria at predicting lung cancer over a 12-year period and was associated with a 31% reduction in missed lung cancer cases (Lu et al., 2020). Another study using a similar approach found that patients classified as being at high risk of lung cancer using both conventional chest radiographs and the CMS criteria, but not using the CMS criteria alone, had a 6-year lung cancer incidence of 3.3% (Raghu et al., 2022), well above the 1.3% 6-year risk threshold for lung cancer screening, similar to the threshold used by the United States Preventive Services Task Force (USPSTF) (Wood et al., 2018).

Reducing radiation dose and improving image quality

Deep learning techniques can be used for image denoising, and some solutions for chest CT are commercially available (Nam et al., 2021). Deep learning-based image reconstruction of ultra-low-dose CT increased the nodule detection rate and improved the accuracy of nodule measurement compared with conventional reconstruction algorithms (Jiang et al., 2022). Deep learning-based denoising of ultra-low-dose CT has also shown better subjective image quality than ultra-low-dose CT without denoising (Hata et al., 2020).

Detection of pulmonary nodules

Pulmonary nodules are defined as small (usually less than 3 centimetres), rounded or irregular masses in the lung tissue that can be infectious, inflammatory, congenital, or neoplastic (Wyker & Henderson, 2022). Detection of pulmonary nodules by radiologists is time-consuming and prone to errors such as failing to identify, or misidentifying, potentially malignant nodules (Al Mohammad et al., 2019; Armato et al., 2009; Gierada et al., 2017; Leader et al., 2005).


Small pulmonary nodules are often invisible on conventional chest radiographs, and even nodules of substantial volume can go unnoticed on radiographs (Austin et al., 1992). Despite this, the potential of AI to detect pulmonary nodules and cancers on chest radiography is an area of active research (Cha et al., 2019; Homayounieh et al., 2021; Jones et al., 2021; X. Li et al., 2020; Mendoza & Pedrini, 2020; Nam et al., 2019; Yoo et al., 2021), owing to its role as a first-line imaging examination in respiratory and cardiac disease, its wide availability, and its low cost (Ravin & Chotas, 1997).

A meta-analysis of nine studies found an AUC of 0.884 for the detection of pulmonary nodules on conventional chest radiographs using AI (Aggarwal et al., 2021).

A meta-analysis of 41 studies found sensitivities of 55-99% for traditional machine learning algorithms and 80-97% for deep learning algorithms for identifying pulmonary nodules on low-dose CT (Pehrson et al., 2019). One of the first studies to use deep learning to detect pulmonary nodules on low-dose CT reported a sensitivity of 98.3% with one false positive per examination, using a combination of different algorithms (Setio et al., 2017). False positives tend to be blood vessels, scar tissue, or sections of the chest wall, vertebrae, or mediastinal tissue (Cui et al., 2022; L. Li et al., 2019; Setio et al., 2017).

A meta-analysis including 56 studies found an AUC of 0.94 for the detection of pulmonary nodules on CT using deep learning (Aggarwal et al., 2021). However, very few of these studies used prospectively collected data or validated the algorithms on an independent external dataset (Aggarwal et al., 2021). A deep learning algorithm trained on more than 10,000 low-dose chest CTs achieved an AUC of 0.86 to 0.94 for 1-year lung cancer prediction, using histopathology as the reference standard, when tested on three external validation datasets (Mikhael et al., 2023).

A study of 346 participants in a lung cancer screening programme found that a deep learning algorithm was more sensitive for pulmonary nodules than double reading by two radiologists highly specialised in thoracic imaging (86% vs. 79%), but had a much higher false detection rate (1.53 vs. 0.13 per imaging examination) (L. Li et al., 2019). A similar study of 360 participants found that a deep learning algorithm detected pulmonary nodules on low-dose CT with a sensitivity of 90% and a false detection rate of 1 per examination, compared with a sensitivity of 76% and a false detection rate of 0.04 per examination when the examinations were double-read by pairs consisting of a junior and a senior radiologist (Cui et al., 2022). The difference in sensitivity between the algorithm and the radiologists was particularly marked for nodules 4-6 mm in diameter (86% vs. 59%) (Cui et al., 2022).

Segmentation of pulmonary nodules

Accurate measurement of pulmonary nodules is important for monitoring nodule growth over time and for guiding the management of these lesions (Bankier et al., 2017). However, the estimated error in manual measurement of pulmonary nodule diameter is about 1.5 mm, which represents a substantial error when estimating the size of small nodules (Bankier et al., 2017; Revel et al., 2004). Moreover, relying on nodule diameter may not accurately reflect nodule growth, because it assumes that all nodules are perfect spheres (Devaraj et al., 2017). For this reason, the latest 2022 version of Lung-RADS includes the option of volumetric assessment (mm³).
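
To see why a 1.5 mm measurement error matters for small nodules, note that under a spherical model the volume scales with the cube of the diameter, so a modest diameter error becomes a large volume error. A minimal illustration (assuming an idealised spherical nodule):

```python
import math

def sphere_volume_mm3(diameter_mm):
    """Volume of an idealised spherical nodule, V = (pi / 6) * d**3."""
    return math.pi / 6 * diameter_mm ** 3

true_diameter = 6.0       # true diameter in mm
measured_diameter = 7.5   # the same nodule measured 1.5 mm too large

v_true = sphere_volume_mm3(true_diameter)
v_measured = sphere_volume_mm3(measured_diameter)

# A 25% diameter error becomes a roughly 95% volume overestimate for a 6 mm nodule.
print(f"true volume: {v_true:.0f} mm^3, measured: {v_measured:.0f} mm^3, "
      f"overestimate: {v_measured / v_true - 1:.0%}")
```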


Segmentation of pulmonary nodules allows their volume to be estimated precisely. It is a multi-step process that includes nodule detection, a "region growing" step in which the nodule boundaries are identified by exploiting the differences in tissue attenuation between the nodule and the surrounding lung parenchyma, and the removal of surrounding structures with similar attenuation, such as blood vessels (Devaraj et al., 2017).
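
As a rough illustration of the region-growing step only (not of any specific commercial implementation), a sketch using connected-component labelling on an attenuation-thresholded CT volume might look like the following; the Hounsfield-unit window is an assumption chosen for illustration:

```python
import numpy as np
from scipy import ndimage

def grow_nodule_region(ct_hu, seed_voxel, low_hu=-300, high_hu=200):
    """Very simplified region growing around a detected nodule candidate.

    ct_hu      : 3D NumPy array of CT attenuation values (Hounsfield units)
    seed_voxel : (z, y, x) index of a voxel inside the candidate nodule
    low_hu, high_hu : attenuation window assumed to contain nodule tissue
    """
    # 1. Keep only voxels whose attenuation falls inside the nodule window.
    candidate_mask = (ct_hu >= low_hu) & (ct_hu <= high_hu)

    # 2. Label connected components and keep the one containing the seed.
    labels, _ = ndimage.label(candidate_mask)
    nodule_mask = labels == labels[seed_voxel]

    # 3. Morphological opening removes thin attachments such as vessels
    #    (a crude stand-in for the vessel-removal step described above).
    return ndimage.binary_opening(nodule_mask, iterations=1)

# The nodule volume then follows from the voxel count and the voxel size:
# volume_mm3 = nodule_mask.sum() * spacing_z * spacing_y * spacing_x
```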

Simple segmentation approaches that rely heavily on attenuation differences between nodules and the surrounding lung parenchyma perform poorly for juxta-vascular and sub-solid nodules (Devaraj et al., 2017). In contrast, CNNs, in particular encoder-decoder algorithms, achieve much better segmentation performance, with Dice similarity coefficients (a measure of spatial overlap) of 0.79 to 0.93 against ground-truth segmentations produced by radiologists (Dong et al., 2020; Gu et al., 2021). Several AI-based pulmonary nodule segmentation algorithms that perform automated nodule volumetry and longitudinal volume tracking are commercially available (Hwang et al., 2021; Jacobs et al., 2021; Murchison et al., 2022; Park et al., 2019; Röhrich et al., 2023; Singh et al., 2021).

Classification of pulmonary nodules

Factors considered when determining the probability that a pulmonary nodule is cancerous include its size, shape, composition, location, and whether and how it changes over time (Callister et al., 2015; Lung Rads, n.d.). Studies have shown only moderate inter- and intra-observer agreement among radiologists for the features used to estimate the probability that a pulmonary nodule is malignant, with discordant classification in more than a third of nodules (van Riel et al., 2015). Several commercially available AI-based solutions for pulmonary nodule classification, many of which include an assessment of malignancy risk, are currently on the market (Adams et al., 2023; Hwang et al., 2021; Murchison et al., 2022; Park et al., 2019; Röhrich et al., 2023).

A deep learning algorithm trained on data from 943 patients and validated on an independent dataset of 468 patients showed an overall accuracy of 78-80% for classifying nodules into one of six categories (solid, calcified, part-solid, non-solid, perifissural, or spiculated) (Ciompi et al., 2017). Accuracy was lowest for part-solid, spiculated, and perifissural nodules (Ciompi et al., 2017).

A deep learning algorithm tested on 6,716 low-dose CTs and validated on an independent dataset of 1,139 low-dose CTs showed an AUC of 0.94 for predicting lung cancer risk, using histopathology as the reference standard (Ardila et al., 2019). When available, the algorithm incorporates information from the same patient's prior CTs, and in those cases its performance was similar to that of six radiologists (Ardila et al., 2019). When prior imaging was not available, the algorithm had an 11% lower false positive rate and a 5% lower false negative rate than the radiologists (Ardila et al., 2019).

Another study used a multi-task three-dimensional CNN to extract nodule characteristics such as calcification, lobulation, sphericity, spiculation, margins, and texture, achieving an accuracy of 91.3% (Hussein et al., 2017).

That study used transfer learning from an algorithm trained on one million videos and tested it on a dataset of more than 1,000 chest CTs. The reference standard consisted of malignancy scores and nodule characteristic scores assessed by at least three radiologists (Hussein et al., 2017). Using a multi-task interpretable 3D CNN tested on the same dataset without transfer learning, another study simultaneously segmented pulmonary nodules, predicted the probability of malignancy, and generated descriptive nodule attributes, achieving a Dice similarity coefficient of 0.74 and an accuracy of 97.6% (Wu et al., 2018).

Challenges and future directions

On the basis of all the available evidence, some countries have begun to integrate AI into their national screening programmes. In this context, it is worth mentioning the recent publication of a federal regulation by the German Federal Ministry of Justice, which includes the use of computer-aided detection software for pulmonary nodules in the new lung cancer screening programme, covering the detection and volumetry of pulmonary nodules, the determination of volume doubling time, and the storage of the assessment for structured reporting (https://www.recht.bund.de/bgbl/1/2024/162/VO.html).

Despite the very encouraging results achieved in recent years in improving lung cancer screening with artificial intelligence, several important methodological challenges remain.

The sheer heterogeneity of the studies conducted so far makes it difficult to synthesise the evidence by meta-analysis (Aggarwal et al., 2021). Moreover, it is unclear how well these algorithms generalise, because the majority of studies lack robust external validation (Aggarwal et al., 2021). There is also a need to study the added value of some of these algorithms beyond improvements in diagnostic performance, for example in terms of greater efficiency or reduced costs (National Institute for Health and Care Excellence [NICE], n.d.).

Sub-solid nodules, including pure ground-glass nodules, are more likely to be malignant than solid nodules (Henschke et al., 2002). However, because the attenuation differences between these nodules and the surrounding lung parenchyma are very subtle, they are particularly difficult to detect (de Margerie-Mellon & Chassagnon, 2023; L. Li et al., 2019; Setio et al., 2017). Some automated algorithms have shown promise for the detection of sub-solid nodules but still require extensive validation (Qi et al., 2020, 2021).

Conclusion

The use of artificial intelligence could help identify more people at high risk of lung cancer and improve the quality of screening examinations. AI has proved particularly useful for identifying and segmenting potentially malignant pulmonary nodules, often exceeding the sensitivity of radiologists (Cui et al., 2022; L. Li et al., 2019). It has also shown promise in estimating the probability that a pulmonary nodule is malignant, particularly when prior imaging is available.

Future research should aim to address the shortcomings of earlier studies, notably the lack of robust external validation and the neglect of outcomes related to efficiency and cost, paving the way for earlier diagnosis and better treatment of lung cancer.


References

AAMC Report Reinforces Mounting Physician Shortage. (2021). AAMC. https://www.aamc.org/news-insights/press-releases/aamc-report-reinforces-mounting-physician-shortage, accessed on 26.09.2024

Adams, S. J., Babyn, P. S., & Danilkewich, A. (2016). Toward a comprehensive management strategy for incidental findings in imaging. Canadian Family Physician Medecin de Famille Canadien, 62(7), 541–543.

Adams, S. J., Madtes, D. K., Burbridge, B., Johnston, J., Goldberg, I. G., Siegel, E. L., Babyn, P., Nair, V. S., & Calhoun, M. E. (2023). Clinical Impact and Generalizability of a Computer-Assisted Diagnostic Tool to Risk-Stratify Lung Nodules With CT. Journal of the American College of Radiology: JACR, 20(2), 232–242. https://doi.org/10.1016/j.jacr.2022.08.006

Aggarwal, R., Farag, S., Martin, G., Ashrafian, H., & Darzi, A. (2021). Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. Journal of Medical Internet Research, 23(8), e26162. https://doi.org/10.2196/26162

Aggarwal, R., Sounderajah, V., Martin, G., Ting, D. S. W., Karthikesalingam, A., King, D., Ashrafian, H., & Darzi, A. (2021). Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digital Medicine, 4(1), 65. https://doi.org/10.1038/s41746-021-00438-z

Ali, N., Lifford, K. J., Carter, B., McRonald, F., Yadegarfar, G., Baldwin, D. R., Weller, D., Hansell, D. M., Duffy, S. W., Field, J. K., & Brain, K. (2015). Barriers to uptake among high-risk individuals declining participation in lung cancer screening: a mixed methods analysis of the UK Lung Cancer Screening (UKLS) trial. BMJ Open, 5(7), e008254. https://doi.org/10.1136/ bmjopen-2015-008254

Al Mohammad, B., Hillis, S. L., Reed, W., Alakhras, M., & Brennan, P. C. (2019). Radiologist performance in the detection of lung cancer using CT. Clinical Radiology, 74(1), 67–75. https://doi.org/10.1016/j.crad.2018.10.008

Ardila, D., Kiraly, A. P., Bharadwaj, S., Choi, B., Reicher, J. J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., Naidich, D. P., & Shetty, S. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954–961. https://doi.org/10.1038/s41591-019-0447-x

Armato, S. G., 3rd, Roberts, R. Y., Kocherginsky, M., Aberle, D. R., Kazerooni, E. A., Macmahon, H., van Beek, E. J. R., Yankelevitz, D., McLennan, G., McNitt-Gray, M. F., Meyer, C. R., Reeves, A. P., Caligiuri, P., Quint, L. E., Sundaram, B., Croft, B. Y., & Clarke, L. P. (2009). Assessment of radiologist performance in the detection of lung nodules: dependence on the definition of “truth.” Academic Radiology, 16(1), 28–38. https://doi.org/10.1016/j.acra.2008.05.022

Austin, J. H., Romney, B. M., & Goldsmith, L. S. (1992). Missed bronchogenic carcinoma: radiographic findings in 27 patients with a potentially resectable lesion evident in retrospect. Radiology, 182(1), 115–122. https://doi.org/10.1148/radiology.182.1.1727272

Bankier, A. A., MacMahon, H., Goo, J. M., Rubin, G. D., Schaefer-Prokop, C. M., & Naidich, D. P. (2017). Recommendations for Measuring Pulmonary Nodules at CT: A Statement from the Fleischner Society. Radiology, 285(2), 584–600. https://doi.org/10.1148/radiol.2017162894

Black, W. C., Gareen, I. F., Soneji, S. S., Sicks, J. D., Keeler, E. B., Aberle, D. R., Naeim, A., Church, T. R., Silvestri, G. A., Gorelick, J., Gatsonis, C., & National Lung Screening Trial Research Team. (2014). Cost-effectiveness of CT screening in the National Lung Screening Trial. The New England Journal of Medicine, 371(19), 1793–1802. https://doi.org/10.1056/NEJMoa1312547

Burzic, A., O’Dowd, E. L., & Baldwin, D. R. (2022). The Future of Lung Cancer Screening: Current Challenges and Research Priorities. Cancer Management and Research, 14, 637–645. https://doi.org/10.2147/CMAR.S293877

Callister, M. E. J., Baldwin, D. R., Akram, A. R., Barnard, S., Cane, P., Draffan, J., Franks, K., Gleeson, F., Graham, R., Malhotra, P., Prokop, M., Rodger, K., Subesinghe, M., Waller, D., Woolhouse, I., British Thoracic Society Pulmonary Nodule Guideline Development Group, & British Thoracic Society Standards of Care Committee. (2015). British Thoracic Society guidelines for the investigation and management of pulmonary nodules. Thorax, 70 Suppl 2, ii1–ii54. https://doi.org/10.1136/thoraxjnl-2015-207168

Cassidy, A., Myles, J. P., van Tongeren, M., Page, R. D., Liloglou, T., Duffy, S. W., & Field, J. K. (2008). The LLP risk model: an individual risk prediction model for lung cancer. British Journal of Cancer, 98(2), 270–276. https://doi.org/10.1038/sj.bjc.6604158

Cha, M. J., Chung, M. J., Lee, J. H., & Lee, K. S. (2019). Performance of Deep Learning Model in Detecting Operable Lung Cancer With Chest Radiographs. Journal of Thoracic Imaging, 34(2), 86–91. https://doi.org/10.1097/RTI.0000000000000388

Ciompi, F., Chung, K., van Riel, S. J., Setio, A. A. A., Gerke, P. K., Jacobs, C., Scholten, E. T., Schaefer-Prokop, C., Wille, M. M. W., Marchianò, A., Pastorino, U., Prokop, M., & van Ginneken, B. (2017). Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Scientific Reports, 7, 46479. https://doi.org/10.1038/srep46479

Cui, X., Zheng, S., Heuvelmans, M. A., Du, Y., Sidorenkov, G., Fan, S., Li, Y., Xie, Y., Zhu, Z., Dorrius, M. D., Zhao, Y., Veldhuis, R. N. J., de Bock, G. H., Oudkerk, M., van Ooijen, P. M. A., Vliegenthart, R., & Ye, Z. (2022). Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. European Journal of Radiology, 146, 110068. https://doi.org/10.1016/j. ejrad.2021.110068

de Koning, H. J., van der Aalst, C. M., de Jong, P. A., Scholten, E. T., Nackaerts, K., Heuvelmans, M. A., Lammers, J.-W. J., Weenink, C., Yousaf-Khan, U., Horeweg, N., van ’t Westeinde, S., Prokop, M., Mali, W. P., Mohamed Hoesein, F. A. A., van Ooijen, P. M. A., Aerts, J. G. J. V., den Bakker, M. A., Thunnissen, E., Verschakelen, J., … Oudkerk, M. (2020). Reduced Lung-Cancer Mortality with Volume CT Screening in a Randomized Trial. The New England Journal of Medicine, 382(6), 503–513. https://doi. org/10.1056/NEJMoa1911793

de Margerie-Mellon, C., & Chassagnon, G. (2023). Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagnostic and Interventional Imaging, 104(1), 11–17. https://doi.org/10.1016/j.diii.2022.11.007

Detterbeck, F. C., Lewis, S. Z., Diekemper, R., Addrizzo-Harris, D., & Alberts, W. M. (2013). Executive Summary: Diagnosis and management of lung cancer, 3rd ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest, 143(5 Suppl), 7S – 37S. https://doi.org/10.1378/chest.12-2377

Devaraj, A., van Ginneken, B., Nair, A., & Baldwin, D. (2017). Use of Volumetry for Lung Nodule Management: Theory and Practice. Radiology, 284(3), 630–644. https://doi.org/10.1148/ radiol.2017151022

Dong, X., Xu, S., Liu, Y., Wang, A., Saripan, M. I., Li, L., Zhang, X., & Lu, L. (2020). Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation. Cancer Imaging: The Official Publication of the International Cancer Imaging Society, 20(1), 53. https://doi.org/10.1186/s40644-020-00331-0

Field, J. K., Duffy, S. W., Baldwin, D. R., Whynes, D. K., Devaraj, A., Brain, K. E., Eisen, T., Gosney, J., Green, B. A., Holemans, J. A., Kavanagh, T., Kerr, K. M., Ledson, M., Lifford, K. J., McRonald, F. E., Nair, A., Page, R. D., Parmar, M. K. B., Rassl, D. M., … Hansell, D. M. (2016). UK Lung Cancer RCT Pilot Screening Trial: baseline findings from the screening arm provide evidence for the potential implementation of lung cancer screening. Thorax, 71(2), 161–170. https://doi.org/10.1136/thoraxjnl-2015-207140

Gierada, D. S., Pinsky, P. F., Duan, F., Garg, K., Hart, E. M., Kazerooni, E. A., Nath, H., Watts, J. R., Jr, & Aberle, D. R. (2017). Interval lung cancer after a negative CT screening examination: CT findings and outcomes in National Lung Screening Trial participants. European Radiology, 27(8), 3249–3256. https://doi. org/10.1007/s00330-016-4705-8

Gu, D., Liu, G., & Xue, Z. (2021). On the performance of lung nodule detection, segmentation and classification. Computerized Medical Imaging and Graphics: The Official Journal of the Computerized Medical Imaging Society, 89, 101886. https://doi. org/10.1016/j.compmedimag.2021.101886

Hamilton, W., Peters, T. J., Round, A., & Sharp, D. (2005). What are the clinical features of lung cancer before the diagnosis is made? A population based case-control study. Thorax, 60(12), 1059–1065. https://doi.org/10.1136/thx.2005.045880

Hanna, T. N., Lamoureux, C., Krupinski, E. A., Weber, S., & Johnson, J.-O. (2018). Effect of Shift, Schedule, and Volume on Interpretive Accuracy: A Retrospective Analysis of 2.9 Million Radiologic Examinations. Radiology, 287(1), 205–212. https://doi. org/10.1148/radiol.2017170555

Hata, A., Yanagawa, M., Yoshida, Y., Miyata, T., Tsubamoto, M., Honda, O., & Tomiyama, N. (2020). Combination of Deep Learning-Based Denoising and Iterative Reconstruction for Ultra-Low-Dose CT of the Chest: Image Quality and Lung-RADS Evaluation. AJR. American Journal of Roentgenology, 215(6), 1321–1328. https://doi.org/10.2214/AJR.19.22680

Henschke, C. I., McCauley, D. I., Yankelevitz, D. F., Naidich, D. P., McGuinness, G., Miettinen, O. S., Libby, D. M., Pasmantier, M. W., Koizumi, J., Altorki, N. K., & Smith, J. P. (1999). Early Lung Cancer Action Project: overall design and findings from baseline screening. The Lancet, 354(9173), 99–105. https://doi.org/10.1016/S0140-6736(99)06093-6

Henschke, C. I., Yankelevitz, D. F., Mirtcheva, R., McGuinness, G., McCauley, D., & Miettinen, O. S. (2002). CT Screening for Lung Cancer. American Journal of Roentgenology, 178(5), 1053–1057. https://doi.org/10.2214/ajr.178.5.1781053

Homayounieh, F., Digumarthy, S., Ebrahimian, S., Rueckel, J., Hoppe, B. F., Sabel, B. O., Conjeti, S., Ridder, K., Sistermanns, M., Wang, L., Preuhs, A., Ghesu, F., Mansoor, A., Moghbel, M., Botwin, A., Singh, R., Cartmell, S., Patti, J., Huemmer, C., … Kalra, M. (2021). An Artificial Intelligence-Based Chest X-ray Model on Human Nodule Detection Accuracy From a Multicenter Study. JAMA Network Open, 4(12), e2141096. https://doi.org/10.1001/ jamanetworkopen.2021.41096

Hussein, S., Cao, K., Song, Q., & Bagci, U. (2017). Risk Stratification of Lung Nodules Using 3D CNN-Based Multi-task Learning. Information Processing in Medical Imaging, 249–260. https://doi.org/10.1007/978-3-319-59050-9_20

Hwang, E. J., Goo, J. M., Kim, H. Y., Yi, J., Yoon, S. H., & Kim, Y. (2021). Implementation of the cloud-based computerized interpretation system in a nationwide lung cancer screening with low-dose CT: comparison with the conventional reading system. European Radiology, 31(1), 475–485. https://doi.org/10.1007/ s00330-020-07151-7

Jacobs, C., Schreuder, A., van Riel, S. J., Scholten, E. T., Wittenberg, R., Wille, M. M. W., de Hoop, B., Sprengers, R., Mets, O. M., Geurts, B., Prokop, M., Schaefer-Prokop, C., & van Ginneken, B. (2021). Assisted versus Manual Interpretation of Low-Dose CT Scans for Lung Cancer Screening: Impact on Lung-RADS Agreement. Radiology. Imaging Cancer, 3(5), e200160. https://doi.org/10.1148/rycan.2021200160

Jiang, B., Li, N., Shi, X., Zhang, S., Li, J., de Bock, G. H., Vliegenthart, R., & Xie, X. (2022). Deep Learning Reconstruction Shows Better Lung Nodule Detection for Ultra-Low-Dose Chest CT. Radiology, 303(1), 202–212. https://doi.org/10.1148/radiol.210551

Jones, C. M., Buchlak, Q. D., Oakden-Rayner, L., Milne, M., Seah, J., Esmaili, N., & Hachey, B. (2021). Chest radiographs and machine learning - Past, present and future. Journal of Medical Imaging and Radiation Oncology, 65(5), 538–544. https://doi. org/10.1111/1754-9485.13274

Kinsinger, L. S., Anderson, C., Kim, J., Larson, M., Chan, S. H., King, H. A., Rice, K. L., Slatore, C. G., Tanner, N. T., Pittman, K., Monte, R. J., McNeil, R. B., Grubber, J. M., Kelley, M. J., Provenzale, D., Datta, S. K., Sperber, N. S., Barnes, L. K., Abbott, D. H., … Jackson, G. L. (2017). Implementation of Lung Cancer Screening in the Veterans Health Administration. JAMA Internal Medicine, 177(3), 399–406. https://doi.org/10.1001/ jamainternmed.2016.9022

LCS Project. (n.d.). https://www.myesti.org/lungcancerscreeningcertificationproject/, accessed on 26.09.2024

Leader, J. K., Warfel, T. E., Fuhrman, C. R., Golla, S. K., Weissfeld, J. L., Avila, R. S., Turner, W. D., & Zheng, B. (2005). Pulmonary nodule detection with low-dose CT of the lung: agreement among radiologists. AJR. American Journal of Roentgenology, 185(4), 973–978. https://doi.org/10.2214/AJR.04.1225

Li, J., Chung, S., Wei, E. K., & Luft, H. S. (2018). New recommendation and coverage of low-dose computed tomography for lung cancer screening: uptake has increased but is still low. BMC Health Services Research, 18(1), 525. https://doi.org/10.1186/ s12913-018-3338-9

Li, L., Liu, Z., Huang, H., Lin, M., & Luo, D. (2019). Evaluating the performance of a deep learning-based computer-aided diagnosis (DL-CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists. Thoracic Cancer, 10(2), 183–192. https://doi. org/10.1111/1759-7714.12931

Li, X., Shen, L., Xie, X., Huang, S., Xie, Z., Hong, X., & Yu, J. (2020). Multi-resolution convolutional networks for chest X-ray radiograph based lung nodule detection. Artificial Intelligence in Medicine, 103, 101744. https://doi.org/10.1016/j. artmed.2019.101744

Lu, M. T., Raghu, V. K., Mayrhofer, T., Aerts, H. J. W. L., & Hoffmann, U. (2020). Deep Learning Using Chest Radiographs to Identify High-Risk Smokers for Lung Cancer Screening Computed Tomography: Development and Validation of a Prediction Model. Annals of Internal Medicine, 173(9), 704–713. https://doi. org/10.7326/M20-1868

Lung Rads. (n.d.). https://www.acr.org/Clinical-Resources/Reporting-and-Data-Systems/Lung-Rads, accessed on 26.09.2024

Malhotra, J., Malvezzi, M., Negri, E., La Vecchia, C., & Boffetta, P. (2016). Risk factors for lung cancer worldwide. The European Respiratory Journal: Official Journal of the European Society for Clinical Respiratory Physiology, 48(3), 889–902. https://doi. org/10.1183/13993003.00359-2016

Marcus, P. M., Bergstralh, E. J., Fagerstrom, R. M., Williams, D. E., Fontana, R., Taylor, W. F., & Prorok, P. C. (2000). Lung cancer mortality in the Mayo Lung Project: impact of extended followup. Journal of the National Cancer Institute, 92(16), 1308–1316. https://doi.org/10.1093/jnci/92.16.1308

Mendoza, J., & Pedrini, H. (2020). Detection and classification of lung nodules in chest X‐ray images using deep convolutional neural networks. Computational Intelligence. An International Journal, 36(2), 370–401. https://doi.org/10.1111/coin.12241

Mikhael, P. G., Wohlwend, J., Yala, A., Karstens, L., Xiang, J., Takigami, A. K., Bourgouin, P. P., Chan, P., Mrah, S., Amayri, W., Juan, Y.-H., Yang, C.-T., Wan, Y.-L., Lin, G., Sequist, L. V., Fintelmann, F. J., & Barzilay, R. (2023). Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography. Journal of Clinical Oncology: Official Journal of the American Society of Clinical Oncology, 41(12), 2191–2200. https://doi.org/10.1200/ JCO.22.01345

Morgan, L., Choi, H., Reid, M., Khawaja, A., & Mazzone, P. J. (2017). Frequency of Incidental Findings and Subsequent Evaluation in Low-Dose Computed Tomographic Scans for Lung Cancer Screening. Annals of the American Thoracic Society, 14(9), 1450–1456. https://doi.org/10.1513/AnnalsATS.201612-1023OC

Murchison, J. T., Ritchie, G., Senyszak, D., Nijwening, J. H., van Veenendaal, G., Wakkie, J., & van Beek, E. J. R. (2022). Validation of a deep learning computer aided system for CT based lung nodule detection, classification, and growth rate estimation in a routine clinical population. PloS One, 17(5), e0266799. https://doi. org/10.1371/journal.pone.0266799

Nam, J. G., Ahn, C., Choi, H., Hong, W., Park, J., Kim, J. H., & Goo, J. M. (2021). Image quality of ultralow-dose chest CT using deep learning techniques: potential superiority of vendor-agnostic postprocessing over vendor-specific techniques. European Radiology, 31(7), 5139–5147. https://doi.org/10.1007/s00330-020-07537-7

Nam, J. G., Park, S., Hwang, E. J., Lee, J. H., Jin, K.-N., Lim, K. Y., Vu, T. H., Sohn, J. H., Hwang, S., Goo, J. M., & Park, C. M. (2019). Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology, 290(1), 218–228. https://doi.org/10.1148/ radiol.2018180237

National Institute for Health and Care Excellence (NICE). (n.d.). Evidence standards framework for digital health technologies.  https://www.nice.org.uk/corporate/ecd7, accessed on 26.09.2024

National Lung Screening Trial Research Team, Aberle, D. R., Adams, A. M., Berg, C. D., Black, W. C., Clapp, J. D., Fagerstrom, R. M., Gareen, I. F., Gatsonis, C., Marcus, P. M., & Sicks, J. D. (2011). Reduced lung-cancer mortality with low-dose computed tomographic screening. The New England Journal of Medicine, 365(5), 395–409. https://doi.org/10.1056/NEJMoa1102873

Park, H., Ham, S.-Y., Kim, H.-Y., Kwag, H. J., Lee, S., Park, G., Kim, S., Park, M., Sung, J.-K., & Jung, K.-H. (2019). A deep learning-based CAD that can reduce false negative reports: A preliminary study in health screening center. RSNA 2019. https://archive.rsna.org/2019/19017034.html

Pehrson, L. M., Nielsen, M. B., & Ammitzbøl Lauridsen, C. (2019). Automatic Pulmonary Nodule Detection Applying Deep Learning or Machine Learning Algorithms to the LIDC-IDRI Database: A Systematic Review. Diagnostics (Basel, Switzerland), 9(1). https://doi.org/10.3390/diagnostics9010029

Qi, L.-L., Wang, J.-W., Yang, L., Huang, Y., Zhao, S.-J., Tang, W., Jin, Y.-J., Zhang, Z.-W., Zhou, Z., Yu, Y.-Z., Wang, Y.-Z., & Wu, N. (2021). Natural history of pathologically confirmed pulmonary subsolid nodules with deep learning-assisted nodule segmentation. European Radiology, 31(6), 3884–3897. https://doi. org/10.1007/s00330-020-07450-z

Qi, L.-L., Wu, B.-T., Tang, W., Zhou, L.-N., Huang, Y., Zhao, S.-J., Liu, L., Li, M., Zhang, L., Feng, S.-C., Hou, D.-H., Zhou, Z., Li, X.-L., Wang, Y.-Z., Wu, N., & Wang, J.-W. (2020). Long-term follow-up of persistent pulmonary pure ground-glass nodules with deep learning-assisted nodule segmentation. European Radiology, 30(2), 744–755. https://doi.org/10.1007/s00330-019-06344-z

Raghu, V. K., Walia, A. S., Zinzuwadia, A. N., Goiffon, R. J., Shepard, J.-A. O., Aerts, H. J. W. L., Lennes, I. T., & Lu, M. T. (2022). Validation of a Deep Learning-Based Model to Predict Lung Cancer Risk Using Chest Radiographs and Electronic Medical Record Data. JAMA Network Open, 5(12), e2248793. https://doi. org/10.1001/jamanetworkopen.2022.48793

Ravin, C. E., & Chotas, H. G. (1997). Chest radiography. Radiology, 204(3), 593–600. https://doi.org/10.1148/radiology.204.3.9280231

Revel, M.-P., Bissery, A., Bienvenu, M., Aycard, L., Lefort, C., & Frija, G. (2004). Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable? Radiology, 231(2), 453–458. https://doi.org/10.1148/radiol.2312030167

Röhrich, S., Heidinger, B. H., Prayer, F., Weber, M., Krenn, M., Zhang, R., Sufana, J., Scheithe, J., Kanbur, I., Korajac, A., Pötsch, N., Raudner, M., Al-Mukhtar, A., Fueger, B. J., Milos, R.-I., Scharitzer, M., Langs, G., & Prosch, H. (2023). Impact of a content-based image retrieval system on the interpretation of chest CTs of patients with diffuse parenchymal lung disease. European Radiology, 33(1), 360–367. https://doi.org/10.1007/ s00330-022-08973-3

Sands, J., Tammemägi, M. C., Couraud, S., Baldwin, D. R., Borondy-Kitts, A., Yankelevitz, D., Lewis, J., Grannis, F., Kauczor, H.-U., von Stackelberg, O., Sequist, L., Pastorino, U., & McKee, B. (2021). Lung Screening Benefits and Challenges: A Review of The Data and Outline for Implementation. Journal of Thoracic Oncology: Official Publication of the International Association for the Study of Lung Cancer, 16(1), 37–53. https://doi. org/10.1016/j.jtho.2020.10.127

Setio, A. A. A., Traverso, A., de Bel, T., Berens, M. S. N., van den Bogaard, C., Cerello, P., Chen, H., Dou, Q., Fantacci, M. E., Geurts, B., Gugten, R. van der, Heng, P. A., Jansen, B., de Kaste, M. M. J., Kotov, V., Lin, J. Y.-H., Manders, J. T. M. C., Sóñora-Mengana, A., García-Naranjo, J. C., … Jacobs, C. (2017). Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Medical Image Analysis, 42, 1–13. https://doi.org/10.1016/j.media.2017.06.015

Siegel, D. A., Fedewa, S. A., Henley, S. J., Pollack, L. A., & Jemal, A. (2021). Proportion of Never Smokers Among Men and Women With Lung Cancer in 7 US States. JAMA Oncology, 7(2), 302–304. https://doi.org/10.1001/jamaoncol.2020.6362

Singh, R., Kalra, M. K., Homayounieh, F., Nitiwarangkul, C., McDermott, S., Little, B. P., Lennes, I. T., Shepard, J.-A. O., & Digumarthy, S. R. (2021). Artificial intelligence-based vessel suppression for detection of sub-solid nodules in lung cancer screening computed tomography. Quantitative Imaging in Medicine and Surgery, 11(4), 1134–1143. https://doi.org/10.21037/ qims-20-630

Smieliauskas, F., MacMahon, H., Salgia, R., & Shih, Y.-C. T. (2014). Geographic variation in radiologist capacity and widespread implementation of lung cancer CT screening. Journal of Medical Screening, 21(4), 207–215. https://doi.org/10.1177/0969141314548055

Sung, H., Ferlay, J., Siegel, R. L., Laversanne, M., Soerjomataram, I., Jemal, A., & Bray, F. (2021). Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA: A Cancer Journal for Clinicians, 71(3), 209–249. https://doi.org/10.3322/caac.21660

Sun, S., Schiller, J. H., & Gazdar, A. F. (2007). Lung cancer in never smokers--a different disease. Nature Reviews. Cancer, 7(10), 778–790. https://doi.org/10.1038/nrc2190

Tammemägi, M. C., Katki, H. A., Hocking, W. G., Church, T. R., Caporaso, N., Kvale, P. A., Chaturvedi, A. K., Silvestri, G. A., Riley, T. L., Commins, J., & Berg, C. D. (2013). Selection criteria for lung-cancer screening. The New England Journal of Medicine, 368(8), 728–736. https://doi.org/10.1056/NEJMoa1211776

Thai, A. A., Solomon, B. J., Sequist, L. V., Gainor, J. F., & Heist, R. S. (2021). Lung cancer. The Lancet, 398(10299), 535–554. https://doi.org/10.1016/S0140-6736(21)00312-3

The Royal College of Radiologists. (2022). Clinical Radiology Workforce Census.

Toumazis, I., de Nijs, K., Cao, P., Bastani, M., Munshi, V., Ten Haaf, K., Jeon, J., Gazelle, G. S., Feuer, E. J., de Koning, H. J., Meza, R., Kong, C. Y., Han, S. S., & Plevritis, S. K. (2021). Cost-effectiveness Evaluation of the 2021 US Preventive Services Task Force Recommendation for Lung Cancer Screening. JAMA Oncology, 7(12), 1833–1842. https://doi.org/10.1001/jamaoncol.2021.4942

van Riel, S. J., Sánchez, C. I., Bankier, A. A., Naidich, D. P., Verschakelen, J., Scholten, E. T., de Jong, P. A., Jacobs, C., van Rikxoort, E., Peters-Bax, L., Snoeren, M., Prokop, M., van Ginneken, B., & Schaefer-Prokop, C. (2015). Observer Variability for Classification of Pulmonary Nodules on Low-Dose CT Images and Its Effect on Nodule Management. Radiology, 277(3), 863–871. https://doi.org/10.1148/radiol.2015142700

Wang, C., Li, J., Zhang, Q., Wu, J., Xiao, Y., Song, L., Gong, H., & Li, Y. (2021). The landscape of immune checkpoint inhibitor therapy in advanced lung cancer. BMC Cancer, 21(1), 968. https://doi.org/10.1186/s12885-021-08662-2

Wang, Y., Midthun, D. E., Wampfler, J. A., Deng, B., Stoddard, S. M., Zhang, S., & Yang, P. (2015). Trends in the proportion of patients with lung cancer meeting screening criteria. JAMA: The Journal of the American Medical Association, 313(8), 853–855. https://doi.org/10.1001/jama.2015.413

Wood, D. E., Kazerooni, E. A., Baum, S. L., Eapen, G. A., Ettinger, D. S., Hou, L., Jackman, D. M., Klippenstein, D., Kumar, R., Lackner, R. P., Leard, L. E., Lennes, I. T., Leung, A. N. C., Makani, S. S., Massion, P. P., Mazzone, P., Merritt, R. E., Meyers, B. F., Midthun, D. E., … Hughes, M. (2018). Lung Cancer Screening, Version 3.2018, NCCN Clinical Practice Guidelines in Oncology. Journal of the National Comprehensive Cancer Network: JNCCN, 16(4), 412–441. https://doi.org/10.6004/jnccn.2018.0020

Wu, B., Zhou, Z., Wang, J., & Wang, Y. (2018). Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1109–1113. https://doi.org/10.1109/ ISBI.2018.8363765

Wyker, A., & Henderson, W. W. (2022). Solitary Pulmonary Nodule. StatPearls Publishing.

Yoo, H., Lee, S. H., Arru, C. D., Doda Khera, R., Singh, R., Siebert, S., Kim, D., Lee, Y., Park, J. H., Eom, H. J., Digumarthy, S. R., & Kalra, M. K. (2021). AI-based improvement in lung cancer detection on chest radiographs: results of a multi-reader study in NLST dataset. European Radiology, 31(12), 9664–9674. https://doi. org/10.1007/s00330-021-08074-7

Zahnd, W. E., & Eberth, J. M. (2019). Lung Cancer Screening Utilization: A Behavioral Risk Factor Surveillance System Analysis. American Journal of Preventive Medicine, 57(2), 250–255. https://doi.org/10.1016/j.amepre.2019.03.015

 

Guide to Artificial Intelligence in Radiology

    Artificial intelligence (AI) is playing a growing role in all our lives and has shown promise in addressing some of the greatest current and upcoming societal challenges we face. The healthcare industry, though notoriously complex and resistant to disruption, potentially has a lot to gain from the use of AI. With an established history of leading digital transformation in healthcare and an urgent need for improved efficiency, radiology has been at the forefront of harnessing AI’s potential.

    This book covers how and why AI can address challenges faced by radiology departments, provides an overview of the fundamental concepts related to AI, and describes some of the most promising use cases for AI in radiology. In addition, the major challenges associated with the adoption of AI into routine radiological practice are discussed. The book also covers some crucial points radiology departments should keep in mind when deciding on which AI-based solutions to purchase. Finally, it provides an outlook on what new and evolving aspects of AI in radiology to expect in the near future.

    The healthcare industry has experienced a number of trends over the past few decades that demand a change in the way certain things are done. These trends are particularly salient in radiology, where the diagnostic quality of imaging scans has improved dramatically while scan times have decreased. As a result, the amount and complexity of medical imaging data acquired have increased substantially over the past few decades (Smith-Bindman et al., 2019; Winder et al., 2021) and are expected to continue to increase (Tsao, 2020). This issue is complicated by a widespread global shortage of radiologists (AAMC Report Reinforces Mounting Physician Shortage, 2021, Clinical Radiology UK Workforce Census 2019 Report, 2019). Healthcare workers, including radiologists, have an increasing workload (Bruls & Kwee, 2020; Levin et al., 2017) that contributes to burnout and medical errors (Harry et al., 2021). Being an essential service provider to virtually all other hospital departments, staff shortages within radiology have significant effects that spread throughout the hospital and to society as a whole (England & Improvement, 2019; Sutherland et al., n.d.).

    With an ageing global population and a rising burden of chronic illnesses, these issues are expected to pose even more of a challenge to the healthcare industry in the future.

    AI-based medical imaging solutions have the potential to ameliorate these challenges for several reasons. They are particularly suited to handling large, complex datasets (Alzubaidi et al., 2021). Moreover, they are well suited to automate some of the tasks traditionally performed by radiologists and radiographers, potentially freeing up time and making workflows within radiology departments more efficient (Allen et al., 2021; Baltruschat et al., 2021; Kalra et al., 2020; O’Neill et al., 2021; van Leeuwen et al., 2021; Wong et al., 2019). AI is also capable of detecting complex patterns in data that humans cannot necessarily find or quantify (Dance, 2021; Korteling et al., 2021; Kühl et al., 2020).

    The term “artificial intelligence” refers to the use of computer systems to solve specific problems in a way that simulates human reasoning. One fundamental characteristic of AI is that, like humans, these systems can tailor their solutions to changing circumstances. Note that, while these systems are meant to mimic on a fundamental level how humans think, their capacity to do so (e.g. in terms of the amount of data they can handle at one time, the nature and amount of patterns they can find in the data, and the speed at which they do so) often exceeds that of humans.

    AI solutions come in the form of computer algorithms, which are pieces of computer code representing instructions to be followed to solve a specific problem. In its most fundamental form, the algorithm takes data as an input, performs some computation on that data, and returns an output.

    An AI algorithm can be explicitly programmed to solve a specific task, analogous to a step-by-step recipe for baking a cake. On the other hand, the algorithm can be programmed to look for patterns within the data in order to solve the problem. These types of algorithms are known as machine learning algorithms. Thus, all machine learning algorithms are AI, but not all AI is machine learning. The patterns in the data that the algorithm can be explicitly programmed to look for or that it can “discover” by itself are known as features. An important characteristic of machine learning is that such algorithms learn from the data itself, and their performance improves the more data they are given.

    One of the most common uses of machine learning is in classification - assigning a piece of data a particular label. For example, a machine learning algorithm might be used to tell if a photo (the input) shows a dog or a cat (the label). The algorithm can learn to do so in a supervised or unsupervised way.

    Supervised learning

    In supervised learning, the machine learning algorithm is given data that has been labelled with the ground truth, in this example, photos of dogs and cats that have been labelled as such. The process then goes through the following phases:

    1. Training phase: The algorithm learns the features associated with dogs and cats using the aforementioned data (training data).
    2. Test phase: The algorithm is then given a new set of photos (the test data); it labels them, and its performance on that data is assessed.

    In some cases, there is a phase in between training and test, known as the validation phase. In this phase, the algorithm is given a new set of photos (not included in either the training or test data), its performance is assessed on this data, and the model is tweaked and retrained on the training data. This is repeated until some predefined performance-based criterion is reached, and the algorithm then enters the test phase.
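
    As a concrete illustration of these phases, the sketch below splits a toy labelled dataset into training, validation, and test sets and fits a simple classifier. scikit-learn is used purely as an example; the dataset and the choice of model are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy labelled dataset standing in for e.g. "cat vs dog" feature vectors.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set, then carve a validation set out of the remainder.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

# Training phase: the model learns feature patterns from the training data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Validation phase: compare or tune models on data not used for training.
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Test phase: report the final performance on data never seen before.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```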

    Unsupervised learning

    In unsupervised learning, the algorithm identifies features within the input data that allow it to assign classes to the individual data points without being told explicitly what those classes are or should be. Such algorithms can identify patterns or group data points together without human intervention and include clustering and dimensionality reduction algorithms. Not all machine learning algorithms perform classification. Some are used to predict a continuous metric (e.g. the temperature in four weeks’ time) instead of a discrete label (e.g. cats vs dogs). These are known as regression algorithms.
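
    A minimal unsupervised counterpart, again using scikit-learn only as an illustration: a clustering algorithm groups unlabelled points without ever being shown class labels.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabelled data: no class labels are provided to the algorithm.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# K-means groups the points into 3 clusters based only on their features.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])        # cluster assignment of the first 10 points
print(kmeans.cluster_centers_)    # coordinates of the discovered cluster centres
```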

    Neural networks and deep learning

    A neural network is made up of an input layer and an output layer, which are themselves composed of nodes. In simple neural networks, features that are manually derived from a dataset are fed into the input layer, which performs some computations, the results of which are relayed to the output layer. In deep learning, multiple “hidden” layers exist between the input and output layers. Each node of the hidden layers performs calculations using certain weights and relays the output to the next hidden layer until the output layer is reached.

    In the beginning, random values are assigned to the weights and the accuracy of the algorithm is calculated. The values of the weights are then iteratively adjusted until a set of weight values that maximize accuracy is found. This iterative adjustment of the weight values is usually done by moving backwards from the output layer to the input layer, a technique called backpropagation. This entire process is done on the training data.
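
    The sketch below trains a tiny one-hidden-layer network on a toy problem with plain NumPy, making the forward pass, the backward pass (backpropagation), and the iterative weight updates explicit. It is a didactic toy, not a production network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: 200 samples with 2 features each.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Randomly initialised weights: one hidden layer (4 nodes) and one output node.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): gradients flow from the output
    # layer back towards the input layer via the chain rule.
    grad_out = (p - y) / len(X)              # gradient at the output node
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0, keepdims=True)
    grad_hidden = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_hidden
    grad_b1 = grad_hidden.sum(axis=0, keepdims=True)

    # Iteratively adjust the weights to improve accuracy.
    W1 -= learning_rate * grad_W1; b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2; b2 -= learning_rate * grad_b2

print("training accuracy:", float(((p > 0.5) == y).mean()))
```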

    Performance evaluation

    Understanding how the performance of AI algorithms is assessed is key to interpreting the AI literature. Several performance metrics exist for assessing how well a model performs certain tasks. No single metric is perfect, so a combination of several metrics provides a fuller picture of model performance.

    In regression, the most commonly used metrics include:

    • Mean absolute error (MAE): the average difference between the predicted values and the ground truth.
    • Root mean square error (RMSE): the differences between the predicted values and the ground truth are squared and then averaged over the sample. Then the square root of the average is taken. Unlike the MAE, the RMSE thus gives higher weight to larger differences.
    • R²: the proportion of the total variance in the ground truth explained by the variance in the predicted values. It ranges from 0 to 1.
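
    A short sketch of how these three regression metrics are computed, using NumPy on made-up predictions:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.5, 10.0])   # ground truth
y_pred = np.array([2.5, 5.5, 7.0, 11.0])   # model predictions

errors = y_pred - y_true

mae = np.mean(np.abs(errors))                  # mean absolute error
rmse = np.sqrt(np.mean(errors ** 2))           # root mean square error
r2 = 1 - np.sum(errors ** 2) / np.sum((y_true - y_true.mean()) ** 2)

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```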

    The following metrics are commonly used in classification tasks:

    • Accuracy: this is the proportion of all predictions that were predicted correctly. It ranges from 0 to 1.
    • Sensitivity: also known as the true positive rate (TPR) or recall, this is the proportion of actual positives that were correctly identified. It ranges from 0 to 1.
    • Specificity: also known as the true negative rate (TNR), this is the proportion of actual negatives that were correctly identified. It ranges from 0 to 1.
    • Precision: also known as the positive predictive value (PPV), this is the proportion of positive predictions that were correct. It ranges from 0 to 1.

    An inherent trade-off exists between sensitivity and specificity. The relative importance of each, as well as their interpretation, depends heavily on the specific research question and classification task.

    Importantly, although classification models are meant to reach a binary conclusion, they are inherently probability-based. This means that these models will output a probability that a data point belongs to one class or another. In order to reach a conclusion on the most likely class, a threshold is used. Metrics such as accuracy, sensitivity, specificity and precision refer to the performance of the algorithm based on a certain threshold. The area under the receiver operating characteristic curve (AUC) is a threshold-independent performance metric. The AUC can be interpreted as the probability that a random positive example is ranked higher by the algorithm than a random negative example.
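    The following sketch illustrates how the threshold-dependent metrics above and the threshold-independent AUC are derived from a model's predicted probabilities; the probabilities and labels are invented for demonstration.

```python
# A minimal sketch showing threshold-dependent classification metrics and
# the threshold-independent AUC, computed from illustrative probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.55, 0.7])

threshold = 0.5
y_pred = (y_prob >= threshold).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)          # true positive rate / recall
specificity = tn / (tn + fp)          # true negative rate
precision = tp / (tp + fp)            # positive predictive value
auc = roc_auc_score(y_true, y_prob)   # does not depend on the chosen threshold

print(accuracy, sensitivity, specificity, precision, auc)
```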

    In image segmentation tasks, which are a type of classification task, the following metrics are commonly used:

    • Dice similarity coefficient: a measure of overlap between two sets (e.g. two images) that is calculated as two times the number of elements common to the sets divided by the sum of the number of elements in each set. It ranges from 0 (no overlap) to 1 (perfect overlap).
    • Hausdorff distance: a measure of how far apart two sets (e.g. two images) within a space are from each other. It is essentially the largest distance from a point in one set to the closest point in the other set.
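    Both segmentation metrics can be computed as in the sketch below, which uses NumPy and SciPy on two illustrative binary masks (SciPy's directed_hausdorff gives the directed distance, so the symmetric Hausdorff distance is taken here as the maximum of the two directions).

```python
# A minimal sketch of the two segmentation metrics above on synthetic masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred_mask = np.zeros((64, 64), dtype=bool)
true_mask = np.zeros((64, 64), dtype=bool)
pred_mask[20:40, 20:40] = True
true_mask[25:45, 25:45] = True

# Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|).
dice = 2 * np.logical_and(pred_mask, true_mask).sum() / (pred_mask.sum() + true_mask.sum())

# Hausdorff distance between the two sets of foreground pixel coordinates
# (symmetric version: maximum of the two directed distances).
pred_pts = np.argwhere(pred_mask)
true_pts = np.argwhere(true_mask)
hausdorff = max(directed_hausdorff(pred_pts, true_pts)[0],
                directed_hausdorff(true_pts, pred_pts)[0])

print(f"Dice={dice:.2f}, Hausdorff={hausdorff:.1f} pixels")
```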

    Internal and external validity

    Internally valid models perform well in their task on the data being used to train and validate them. The degree to which they are internally valid is assessed using the performance metrics outlined above and depends on the characteristics of the model itself and the quality of the data that the model was trained and validated on.

    Externally valid models perform well in their tasks on new data (Ramspek et al., 2021). The better the model performs on data that differs from the data the models were trained and validated on, the higher the external validity. In practice, this often requires the performance of the models to be tested on data from hospitals or geographical areas that were not part of the model’s training and validation datasets.

    Guidelines for evaluating AI research

    Several guidelines have been developed to assess the evidence behind AI-based interventions in healthcare (X. Liu et al., 2020; Mongan et al., 2020; Shelmerdine et al., 2021; Weikert et al., 2021). These provide a template for those doing AI research in healthcare and ensure that relevant information is reported transparently and comprehensively, but can also be used by other stakeholders to assess the quality of published research. This helps ensure that AI-based solutions with substantial potential or actual limitations, particularly those caused by poor reporting (Bozkurt et al., 2020; D. W. Kim et al., 2019; X. Liu et al., 2019; Nagendran et al., 2020; Yusuf et al., 2020), are not prematurely adopted (CONSORT-AI and SPIRIT-AI Steering Group, 2019). Guidelines have also been proposed for evaluating the trustworthiness of AI-based solutions in terms of transparency, confidentiality, security, and accountability (Buruk et al., 2020; Lekadir et al., 2021; Zicari et al., 2021).

    Over the past few years, AI has shown great potential in addressing a broad range of tasks within a medical imaging department, including many that happen before the patient is scanned. Implementations of AI to improve the efficiency of radiology workflows prior to patient scanning are sometimes referred to as “upstream AI” (Kapoor et al., 2020; M. L. Richardson et al., 2021).

    Scheduling

    One promising upstream AI application is predicting which patients are likely to miss their scan appointments. Missed appointments are associated with significantly increased workload and costs (Dantas et al., 2018). Using a gradient boosting approach, Nelson et al. predicted missed hospital magnetic resonance imaging (MRI) appointments in the United Kingdom’s National Health Service (NHS) with high accuracy (Nelson et al., 2019). Their simulations also suggested that acting on the predictions of this model by targeting patients who are likely to miss their appointments would potentially yield a net benefit of several pounds per appointment across a range of model thresholds and missed appointment rates (Nelson et al., 2019). Similar results were recently found in a study of a single hospital in Singapore: in the six months following deployment of the predictive tool, the no-show rate was significantly reduced from 19.3 % to 15.9 %, which translated into a potential economic benefit of $180,000 (Chong et al., 2020).
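    As a rough illustration only (not the model from Nelson et al. or Chong et al.), the sketch below trains a gradient boosting classifier on synthetic appointment features such as lead time and prior no-shows and evaluates how well it flags likely missed appointments.

```python
# A minimal sketch of a gradient-boosting no-show predictor on synthetic data.
# Feature names and the risk model are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
lead_time_days = rng.integers(1, 120, n)
age = rng.integers(18, 90, n)
prior_no_shows = rng.poisson(0.3, n)
X = np.column_stack([lead_time_days, age, prior_no_shows])

# Synthetic outcome: longer lead times and prior no-shows raise no-show risk.
risk = 0.02 * lead_time_days / 120 + 0.3 * (prior_no_shows > 0) + 0.05
y = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```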

    Scheduling scans in a radiology department is a challenging endeavour because, although it is largely an administrative task, it depends heavily on medical information. Assigning patients to specific appointments thus often requires the input of someone with domain knowledge, meaning that either the person making the appointments must be a radiologist or radiology technician, or such staff must provide input regularly. In either scenario, the process is somewhat inefficient and could potentially be streamlined using AI-based algorithms that check scan indications and contraindications and provide the people scheduling the scans with information about scan urgency (Letourneau-Guillon et al., 2020).

    Protocolling

    Depending on hospital or clinic policy, the decision on what exact scan protocol a patient receives is usually made based on the information on the referring physician’s scan request and the judgement of the radiologist. This is often supplemented by direct communication between the referring physician and radiologist and the radiologist’s review of the patient’s medical information. This process improves patient care (Boland et al., 2014) but can be time-consuming and inefficient, particularly with modalities like MRI, where a large number of protocol permutations exist. In one study, protocolling alone accounted for about 6 % of the radiologist’s working time (Schemmel et al., 2016). Radiologists are also often interrupted by tasks such as protocolling when interpreting images, despite the fact that the latter is considered a radiologist’s primary responsibility (Balint et al., 2014; J.-P. J. Yu et al., 2014).

    Interpretation of the narrative text of the referring physician’s scan request has been attempted using natural language classifiers, the same technology used in chatbots and virtual assistants. Natural language classifiers based on deep learning have shown promise in assigning patients to either a contrast-enhanced or non-enhanced MRI protocol for musculoskeletal MRI, with accuracies of 83 % (Trivedi et al., 2018) and 94 % (Y. H. Lee, 2018). Similar algorithms have shown an accuracy of 95 % for predicting the appropriate brain MRI protocol using a combination of up to 41 different MRI sequences (Brown & Marotta, 2018). Across a wide range of body regions, a deep-learning-based natural language classifier used the narrative text of scan requests to decide whether to automatically assign a specific computed tomography (CT) or MRI protocol (which it did with 95 % accuracy) or, in more difficult cases, to recommend a list of the three most appropriate protocols to the radiologist (which it did with 92 % accuracy) (Kalra et al., 2020).
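    A highly simplified sketch of this idea is shown below. It uses TF-IDF features with logistic regression rather than the deep-learning classifiers used in the cited studies, and the scan requests and protocol labels are invented.

```python
# A minimal sketch of a text classifier assigning scan requests to protocols.
# The requests and labels are invented; this is not any cited study's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = [
    "knee pain after fall, rule out meniscal tear",
    "suspected septic arthritis of the knee, fever",
    "chronic shoulder pain, rotator cuff pathology",
    "suspected osteomyelitis of the foot in diabetic patient",
]
protocols = ["non-contrast", "contrast-enhanced", "non-contrast", "contrast-enhanced"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(requests, protocols)

print(classifier.predict(["right knee injury during football, assess ligaments"]))
```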

    AI has also been used to decide whether already protocolled scans need to be extended, a decision which has to be made in real-time while the patient is inside the scanner. One such example is in prostate MRI, where a decision on whether to administer a contrast agent is often made after the non-contrast sequences. Hötker et al. found that a convolutional neural network (CNN) assigned 78 % of patients to the appropriate prostate MRI protocol (Hötker et al., 2021). The sensitivity of the CNN for the need for contrast was 94.4 % with a specificity of 68.8 %, and only 2 % of patients in their study would have had to be called back for a contrast-enhanced scan (Hötker et al., 2021).

    Image quality improvement and monitoring

    Many AI-based solutions that work in the background of radiology workflows to improve image quality have recently been established. These include solutions for monitoring image quality, reducing image artefacts, improving spatial resolution, and speeding up scans.

    Such solutions are entering the radiology mainstream, particularly for computed tomography, which for decades used established but artefact-prone methods for reconstructing interpretable images from the raw sensor data (Deák et al., 2013; Singh et al., 2010).

    These are gradually being replaced by deep-learning-based reconstruction methods, which improve image quality while maintaining low radiation doses (Akagi et al., 2019; H. Chen et al., 2017; Choe et al., 2019; Shan et al., 2019). This reconstruction is performed on supercomputers on the CT scanner itself or in the cloud. The balance between radiation dose and image quality can be adjusted on a protocol-specific basis to tailor scans to individual patients and clinical scenarios (McLeavy et al., 2021; Willemink & Noël, 2019). Such approaches have found particular use when scanning children, pregnant women, and obese patients, as well as in CT scans of the urinary tract and heart (McLeavy et al., 2021).

    AI-based solutions have also been used to speed up scans while maintaining diagnostic quality. Scan time reduction not only improves overall efficiency but also contributes to a better patient experience and better compliance with imaging examinations. A multi-centre study of spine MRI showed that a deep-learning-based image reconstruction algorithm that enhanced images using filtering and detail-preserving noise reduction reduced scan times by 40 % (Bash, Johnson, et al., 2021). For T1-weighted MRI scans of the brain, a similar algorithm that improves image sharpness and reduces image noise reduced scan times by 60 % while maintaining the accuracy of brain region volumetry compared to standard scans (Bash, Wang, et al., 2021).

    In routine radiological practice, images often contain artefacts that reduce their interpretability. These artefacts are the result of characteristics of the specific imaging modality or protocol used or factors intrinsic to the patient being scanned, such as the presence of foreign bodies or the patient moving during the scan. Particularly with MRI, imaging protocols that demand fast scanning often introduce certain artefacts to the reconstructed image. In one study, a deep-learning-based algorithm reduced banding artefacts associated with balanced steady-state free precession MRI sequences of the brain and knee (K. H. Kim & Park, 2017). For real-time imaging of the heart using MRI, another study found that the aliasing artefacts introduced by the data undersampling were reduced by using a deep-learning-based approach (Hauptmann et al., 2019). The presence of metallic foreign bodies such as dental, orthopaedic or vascular implants is a common patient-related factor causing image artefacts in both CT and MRI (Boas & Fleischmann, 2012; Hargreaves et al., 2011). Although not yet well established, several deep-learning-based approaches for reducing these artefacts have been investigated (Ghani & Clem Karl, 2019; Puvanasunthararajah et al., 2021; Zhang & Yu, 2018). Similar approaches are being tested for reducing motion-related artefacts in MRI (Tamada et al., 2020; B. Zhao et al., 2022).

    AI-based solutions for monitoring image quality potentially reduce the need to call patients back to repeat imaging examinations, which is a common problem (Schreiber-Zinaman & Rosenkrantz, 2017). A deep-learning-based algorithm that identifies the radiographic view acquired and extracts quality-related metrics from ankle radiographs was able to predict image quality with about 94 % accuracy (Mairhöfer et al., 2021). Another deep-learning-based approach was capable of predicting nondiagnostic liver MRI scans with a negative predictive value of between 86 % and 94 % (Esses et al., 2018). This real-time automated quality control potentially allows radiology technicians to rerun scans or run additional scans with greater diagnostic value.

    Scan reading prioritization

    With staff shortages and increasing scan numbers, radiologists face long reading lists. To optimize efficiency and patient care, AI-based solutions have been suggested as a way to prioritize which scans radiologists read and report first, usually by screening acquired images for findings that require urgent intervention (O’Connor & Bhalla, 2021). This has been most extensively studied in neuroradiology, where moving CT scans that were found to have intracranial haemorrhage by an AI-based tool to the top of the reading list reduced the time it took radiologists to view the scans by several minutes (O’Neill et al., 2021). Another study found that the time-to-diagnosis (which includes the time from image acquisition to viewing by the radiologist and the time to read and report the scans) was reduced from 512 to 19 minutes in an outpatient setting when such a worklist prioritization was used (Arbabshirani et al., 2018). A simulation study using AI-based worklist prioritization based on identifying urgent findings on chest radiographs (such as pneumothorax, pleural effusions, and foreign bodies) also found a substantial reduction in the time it took to view and report the scans compared to standard workflow prioritization (Baltruschat et al., 2021).

    Image interpretation

    Currently, the majority of commercially available AI-based solutions in medical imaging focus on some aspect of analyzing and interpreting images (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021). This includes segmenting parts of the image (for surgical or radiation therapy targeting, for example), bringing suspicious areas to radiologists’ attention, extracting imaging biomarkers (radiomics), comparing images across time, and reaching specific imaging diagnoses.

    Neurology

    • 29–38 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Most commercially available AI-based solutions targeted at neuroimaging data aim to detect and characterize ischemic stroke, intracranial haemorrhage, dementia, and multiple sclerosis (Olthof et al., 2020). Several studies have shown excellent accuracy of AI-based methods for the detection and classification of intraparenchymal, subarachnoid, and subdural haemorrhage on head CT (Flanders et al., 2020; Ker et al., 2019; Kuo et al., 2019). Subsequent studies showed that, compared to radiologists, some AI-based solutions have substantially lower false positive and negative rates (Ginat, 2020; Rao et al., 2021). In ischemic stroke, AI-based solutions have largely focused on the quantification of the infarct core (Goebel et al., 2018; Maegerlein et al., 2019), the detection of large vessel occlusion (Matsoukas et al., 2022; Morey et al., 2021; Murray et al., 2020; Shlobin et al., 2022), and the prediction of stroke outcomes (Bacchi et al., 2020; Nielsen et al., 2018; Y. Yu et al., 2020, 2021).

    In multiple sclerosis, AI has been used to identify and segment lesions (Nair et al., 2020; S.-H. Wang et al., 2018), which can be particularly helpful for the longitudinal follow-up of patients. It has also been used to extract imaging features associated with progressive disease and conversion from clinically isolated syndrome to definite multiple sclerosis (Narayana et al., 2020; Yoo et al., 2019). Other applications of AI in neuroradiology include the detection of intracranial aneurysms (Faron et al., 2020; Nakao et al., 2018; Ueda et al., 2019) and the segmentation of brain tumours (Kao et al., 2019; Mlynarski et al., 2019; Zhou et al., 2020), as well as the prediction of brain tumour genetic markers from imaging data (Choi et al., 2019; J. Zhao et al., 2020).

    Chest

    • 24–31 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    When interpreting chest radiographs, radiologists detected substantially more critical and urgent findings when aided by a deep-learning-based algorithm, and did so much faster than without the algorithm (Nam et al., 2021). Deep-learning-based image interpretation algorithms have also been found to improve radiology residents’ sensitivity for detecting urgent findings on chest radiographs from 66 % to 73 % (E. J. Hwang, Nam, et al., 2019). Another study which focused on a broader range of findings on chest radiographs also found that radiologists aided by a deep-learning-based algorithm had higher diagnostic accuracy than radiologists who read the radiographs without assistance (Seah et al., 2021). The uses of AI in chest radiology also extend to cross-sectional imaging like CT. A deep learning algorithm was found to detect pulmonary embolism on CT scans with high accuracy (AUC = 0.85) (Huang, Kothari, et al., 2020). Moreover, a deep learning algorithm was 90 % accurate in detecting aortic dissection on non-contrast-enhanced CT scans, similar to the performance of radiologists (Hata et al., 2021).

    Outside the emergency setting, AI-based solutions have been widely tested and implemented for tuberculosis screening on chest radiographs (E. J. Hwang, Park, et al., 2019; S. Hwang et al., 2016; Khan et al., 2020; Qin et al., 2019; WHO Operational Handbook on Tuberculosis Module 2: Screening – Systematic Screening for Tuberculosis Disease, n.d.). In addition, they have been useful for lung cancer screening both in terms of detecting lung nodules on CT (Setio et al., 2017) and chest radiographs (Li et al., 2020) and by classifying whether nodules are likely to be malignant or benign (Ardila et al., 2019; Bonavita et al., 2020; Ciompi et al., 2017; B. Wu et al., 2018). AI-based solutions also show great promise for the diagnosis of pneumonia, chronic obstructive pulmonary disease, and interstitial lung disease (F. Liu et al., 2021).

    Breast

    • 11 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    So far, many of the AI-based algorithms targeting breast imaging aim to reduce the workload of radiologists reading mammograms. Ways to do this have included using AI-based algorithms to triage out negative mammograms, which in one study was associated with a reduction in radiologists’ workload by almost one-fifth (Yala et al., 2019). Other studies that have replaced second readers of mammograms with AI-based algorithms have shown that this leads to fewer false positives and false negatives and reduces the workload of the second reader by 88 % (McKinney et al., 2020).

    AI-based solutions for mammography have also been found to increase the diagnostic accuracy of radiologists (McKinney et al., 2020; Rodríguez-Ruiz et al., 2019; Watanabe et al., 2019), and some have been found to be highly accurate in independently detecting and classifying breast lesions (Agnes et al., 2019; Al-Antari et al., 2020; Rodriguez-Ruiz et al., 2019). Despite this, a recent systematic review of 36 AI-based algorithms found that these studies were of poor methodological quality and that all algorithms were less accurate than the consensus of two or more radiologists (Freeman et al., 2021). AI-based algorithms have nonetheless shown potential for extracting cancer-predictive features from mammograms beyond mammographic breast density (Arefan et al., 2020; Dembrower et al., 2020; Hinton et al., 2019). Beyond mammography, AI-based solutions have been developed for detecting and classifying breast lesions on ultrasound (Akkus et al., 2019; Park et al., 2019; G.-G. Wu et al., 2019) and MRI (Herent et al., 2019).

    Cardiac

    • 11 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Cardiac radiology has always been particularly challenging because of the difficulties inherent in acquiring images of a constantly moving organ. Because of this, it has benefited immensely from advances in imaging technology and seems set to benefit greatly from AI as well (Sermesant et al., 2021). Most AI-based applications for the cardiovascular system use MRI, CT or ultrasound data (Weikert et al., 2021). Prominent examples include the automated calculation of ejection fraction on echocardiography, quantification of coronary artery calcification on cardiac CT, determination of right ventricular volume on CT pulmonary angiography, and determination of heart chamber size and thickness on cardiac MRI (Medical AI Evaluation, n.d.; The Medical Futurist, n.d.). AI-based solutions for predicting which patients are likely to respond favourably to cardiac interventions, such as cardiac resynchronization therapy, based on imaging and clinical parameters have also shown great promise (Cikes et al., 2019; Hu et al., 2019). Changes in cardiac MRI not readily visible to human readers but potentially useful for differentiating different types of cardiomyopathies can also be detected using AI through texture analysis (Neisius et al., 2019; J. Wang et al., 2020) and other radiomic approaches (Mancio et al., 2022).

    Musculoskeletal

    • 7–11 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Promising applications of AI in the assessment of muscles, bones and joints include tasks where human readers generally show poor between- and within-rater reliability, such as the determination of skeletal age based on bone radiographs (Halabi et al., 2019; Thodberg et al., 2009) and screening for osteoporosis on radiographs (Kathirvelu et al., 2019; J.-S. Lee et al., 2019) and CT (Pan et al., 2020). AI-based solutions have also shown promise for detecting fractures on radiographs and CT (Lindsey et al., 2018; Olczak et al., 2017; Urakawa et al., 2019). One systematic review of AI-based solutions for fracture detection in several different body parts showed AUCs ranging from 0.94 to 1.00 and accuracies of 77 % to 98 % (Langerhuizen et al., 2019). AI-based solutions have also achieved accuracies similar to radiologists for classification of the severity of degenerative changes of the spine (Jamaludin et al., 2017) and extremity joints (F. Liu et al., 2018; Thomas et al., 2020). AI-based solutions have also been developed to determine the origin of skeletal metastases (Lang et al., 2019) and to classify primary bone tumours (Do et al., 2017).

    Abdomen and pelvis

    • 4 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Much of the effort in using AI in abdominal imaging has thus far concentrated on the automated segmentation of organs such as the liver (Dou et al., 2017), spleen (Moon et al., 2019), pancreas (Oktay et al., 2018), and kidneys (Sharma et al., 2017). In addition, a systematic review of 11 studies using deep learning for the detection of malignant liver masses showed accuracies of up to 97 % and AUCs of up to 0.92 (Azer, 2019).

    Other applications of AI in abdominal radiology include the detection of liver fibrosis (He et al., 2019; Yasaka et al., 2018), fatty liver disease, hepatic iron content, the detection of free abdominal gas on CT, and automated volumetry and segmentation of the prostate (AI for Radiology, n.d.).

    Despite the great potential of AI in medical imaging, it has yet to find widespread implementation and impact in routine clinical practice. This research-to-clinic translation is being hindered by several complex and interrelated issues that directly or indirectly lower the likelihood of AI-based solutions being adopted. One major way they do so is by undermining the trust of key stakeholders such as regulators, healthcare professionals and patients in AI-based solutions (Cadario et al., 2021; Esmaeilzadeh, 2020; J. P. Richardson et al., 2021; Tucci et al., 2022).

    Generalizability

    One major challenge is to develop AI-based solutions that continue to perform well in new, real-world scenarios. In a large systematic review, almost half of the studied AI-based medical imaging algorithms reported a greater than 0.05 decrease in the AUC when tested on new data (A. C. Yu et al., 2022). This lack of generalizability can lead to adverse effects on how well the model performs in a real-world scenario.

    If a solution performs poorly when tested on a dataset with a similar or identical distribution to the training dataset, it is said to lack narrow generalizability, which is often a consequence of overfitting (Eche et al., 2021). Potential remedies for overfitting include using larger training datasets and reducing the model’s complexity. If a solution performs poorly when tested on a dataset with a different distribution to the training dataset (e.g. a different distribution of patient ethnicities), it is said to lack broad generalizability (Eche et al., 2021). Solutions to poor broad generalizability include stress-testing the model on datasets with different distributions from the training dataset (Eche et al., 2021).
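    A minimal sketch of such a check is shown below: a model is evaluated both on an internal test set drawn from the same distribution as the training data and on an external set drawn from a shifted distribution. The data, features, and shift are entirely synthetic and only illustrate the evaluation pattern.

```python
# A minimal sketch of comparing internal vs external test performance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Synthetic features and labels; `shift` moves the feature distribution.
    X = rng.normal(loc=shift, size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

X_train, y_train = make_data(2000)
X_internal, y_internal = make_data(500)              # same distribution as training
X_external, y_external = make_data(500, shift=1.0)   # shifted distribution

model = LogisticRegression().fit(X_train, y_train)
print("internal AUC:", roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1]))
print("external AUC:", roc_auc_score(y_external, model.predict_proba(X_external)[:, 1]))
```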

    AI solutions are often developed in high-resource environments such as large technology companies and academic medical centres in wealthy countries. It is likely that findings and performance in these high-resource contexts will fail to generalize to lower-resource contexts such as smaller hospitals, rural areas or poorer countries (Price & Nicholson, 2019), which complicates the issue further.

    Risk of bias

    Biases can arise in AI-based solutions due to data or human factors. The former occurs when the data used to train the AI solution does not adequately represent the target population. Datasets can be unrepresentative when they are too small or have been collected in a way that misrepresents a certain population category. AI solutions trained on unrepresentative data perpetuate biases and perform poorly in the population categories underrepresented or misrepresented in the training data. The presence of such biases has been empirically shown in many AI-based medical imaging studies (Larrazabal et al., 2020; Seyyed-Kalantari et al., 2021).

    During their development, AI-based solutions are also shaped by several subjective, and sometimes implicitly or explicitly prejudiced, human decisions. These human factors include how the training data is selected, how it is labelled, and how the decision is made to focus on the specific problem the AI-based solution intends to solve (Norori et al., 2021). Some recommendations and tools are available to help minimize the risk of bias in AI research (AIF360: A Comprehensive Set of Fairness Metrics for Datasets and Machine Learning Models, Explanations for These Metrics, and Algorithms to Mitigate Bias in Datasets and Models, n.d.; IBM Watson Studio - Model Risk Management, n.d.; Silberg & Manyika, 2019).
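    One simple, generic form of bias auditing (not specific to any of the cited tools) is to break a model's performance down by demographic subgroup, as in the sketch below; the predictions, labels, and groups are invented.

```python
# A minimal sketch of a subgroup performance audit on illustrative data:
# overall correctness is broken down by a demographic attribute to surface gaps.
import pandas as pd

results = pd.DataFrame({
    "sex":    ["F", "F", "F", "M", "M", "M", "F", "M"],
    "y_true": [1,   0,   1,   0,   1,   0,   0,   1],
    "y_pred": [1,   0,   0,   0,   1,   0,   1,   1],
})

results["correct"] = results["y_true"] == results["y_pred"]
print(results.groupby("sex")["correct"].mean())   # accuracy per subgroup
```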

    Data quantity, quality and variety

    Problems such as bias and lack of generalizability can be mitigated by ensuring that training data is of sufficient quantity, quality and variety. However, this is difficult to do because patients are often reluctant to share their data for commercial purposes (Aggarwal, Farag, et al., 2021; Ghafur et al., 2020; Trinidad et al., 2020), hospitals and clinics are usually not equipped to make this data available in a useable and secure manner, and organizing and labelling the data is time-consuming and expensive.

    Many datasets can be used for a number of different purposes, and sharing data between companies can help make the process of data collection and organization more efficient, as well as increase the amount of data available for each application. However, developers are often reluctant to share data with each other, or even reveal the exact source of their data, to stay competitive.

    Data protection and privacy

    The development and implementation of AI-based solutions require that patients are explicitly informed about, and give their consent to, the use of their data for a particular purpose and by certain people. This data also has to be adequately protected from data breaches and misuse. Failure to ensure this greatly undermines the public’s trust in AI-based solutions and hinders their adoption. While regulations governing health data privacy state that the collection of fully anonymized data does not require explicit patient consent (General Data Protection Regulation (GDPR) – Official Legal Text, 2016; Office for Civil Rights (OCR), 2012) and in theory protects from the data being misused, whether or not imaging data can be fully anonymized is controversial (Lotan et al., 2020; Murdoch, 2021). Whether consent can be truly informed considering the complexity of the data being acquired, and the resulting myriad of potential future uses of the data, is also disputed (Vayena & Blasimme, 2017).

    IT infrastructure

    Among hospital departments, radiology has always been at the forefront of digitalization. AI-based solutions that focus on image processing and interpretation are likely to find the prerequisite infrastructure in most radiology departments, for example for linking imaging equipment to computers for analysis and for archiving images and other outputs. However, most radiology departments are likely to require significant infrastructure upgrades for other applications of AI, particularly those requiring the integration of information from multiple sources and having complex outputs. Moreover, it is important to keep in mind that the distribution of necessary infrastructure is highly unequal across and within countries (Health Ethics & Governance, 2021).

    In terms of computing power, radiology departments will either have to invest resources into the hardware and personnel necessary to run these AI-based solutions or opt for cloud-based solutions. The former comes with an extra cost but allows data processing within the confines of the hospital or clinic’s local network. Cloud-based solutions for computing (known as “infrastructure as a service” or “IaaS”) are often considered the less secure and less trustworthy option, but this depends on a number of factors and is thus not always true (Baccianella & Gough, n.d.). Guidelines on what to consider when procuring cloud-based solutions in healthcare are available (Cloud Security for Healthcare Services, 2021).

    Lack of standardization, interoperability, and integrability

    The problem of infrastructure becomes even more complicated when considering how fragmented the AI medical imaging market currently is (Alexander et al., 2020). It is therefore likely that in the near future a single department will have several dozen AI-based solutions from different vendors running simultaneously. Having a separate self-contained infrastructure (e.g. a workstation or server) for each of these would be incredibly complicated and difficult to manage. Suggested solutions for this have included AI solution “marketplaces”, similar to app stores (Advanced AI Solutions for Radiology, n.d., Curated Marketplace, 2018, Imaging AI Marketplace - Overview, n.d., Sectra Amplifier Marketplace, 2021, The Nuance AI Marketplace for Diagnostic Imaging, n.d.), and development of an overarching vendor-neutral infrastructure (Leiner et al., 2021). The successful implementation of such solutions requires close partnerships between AI solution developers, imaging vendors and information technology companies.

    Interpretability

    It is often impossible to understand exactly how AI-based solutions come to their conclusions, particularly with complex approaches like deep learning. This reduces how transparent the decision-making process for procuring and approving these solutions can be, makes the identification of biases difficult, and makes it harder for clinicians to explain the outputs of these solutions to their patients and to determine whether a solution is working properly or has malfunctioned (Char et al., 2018; Reddy et al., 2020; Vayena et al., 2018; Whittlestone et al., 2019). Some have suggested that techniques that help humans understand how AI-based algorithms made certain decisions or predictions (“interpretable” or “explainable” AI) might help mitigate these challenges. However, others have argued that currently available techniques are unsuitable for understanding individual decisions of an algorithm and have warned against relying on them for ensuring that algorithms work in a safe and reliable way (Ghassemi et al., 2021).

    Liability

    In healthcare systems, a framework of accountability ensures that healthcare workers and medical institutions can be held responsible for adverse effects resulting from their actions. The question of who should be held accountable for the failures of an AI-based solution is complicated. For pharmaceuticals, for example, the accountability for inherent failures in the product or its use often lies with either the manufacturer or the prescriber. One key difference is that AI-based systems are continuously evolving and learning, and so inherently work in a way that is independent of what their developers could have foreseen (Yeung, 2018). To the end-user such as the healthcare worker, the AI-based solution may be opaque, and so they may not be able to tell when the solution is malfunctioning or inaccurate (Habli et al., 2020; Yeung, 2018).

    Brittleness

    Despite substantial progress in their development over the past few years, deep learning algorithms are still surprisingly brittle. This means that, when the algorithm faces a scenario that differs substantially from what it faced during training, it cannot contextualize and often produces nonsensical or inaccurate results. This happens because, unlike humans, most algorithms learn to perceive things within the confines of certain assumptions, but fail to generalize outside these assumptions. As an example of how this can be abused with malicious intent, subtle changes to medical images, imperceptible to humans, can render the results of disease-classifying algorithms inaccurate (Finlayson et al., 2018). The lack of interpretability of many AI-based solutions compounds this problem because it makes it difficult to troubleshoot how they reached the wrong conclusion.

    So far, more than 100 AI-based products have gained conformité européenne (CE) marking or Food and Drug Administration (FDA) clearance. These products can be found in continuously updated and searchable online databases curated by the FDA (Center for Devices & Radiological Health, n.d.), the American College of Radiology (Assess-AI, n.d.), and others (AI for Radiology, n.d., The Medical Futurist, n.d.; E. Wu et al., 2021). The increasing number of available products, the inherent complexity of many of these solutions, and the fact that many people who usually make purchasing decisions in hospitals are not familiar with evaluating such products make it important to think carefully when deciding on which product to purchase. Such decisions will need to be made after incorporating input from healthcare workers, information technology (IT) professionals, as well as management, finance, legal, and human resources professionals within hospitals.

    Deciding on whether to purchase an AI-based solution in radiology, as well as which of the increasing number of commercially available solutions to purchase, includes considerations of quality, safety, and finances. Over the past few years, several guidelines have emerged to help potential buyers make these decisions (A Buyer’s Guide to AI in Health and Care, 2020; Omoumi et al., 2021; Reddy et al., 2021), and these guidelines are likely to evolve in the future with changing expectations from customers, regulatory bodies, and stakeholders involved in reimbursement decisions.

    First of all, it has to be clear to the potential buyer what the problem is and whether AI is the appropriate approach to it, or whether alternatives exist that are more advantageous on balance. If AI is the appropriate approach, buyers should know exactly what the scope of a potential AI-based solution is - i.e. what specific problem it is designed to solve and in what specific circumstances. This includes whether the solution is intended for screening, diagnosis, monitoring, treatment recommendation or another application. It also includes the intended users of the solution and what specific qualifications or training they are expected to have in order to operate the solution and interpret its outputs. It needs to be clear to buyers whether the solution is intended to replace certain tasks that would normally be performed by the end-user, to act as a double reader or a triaging mechanism, or to perform other tasks such as quality control. Buyers should also understand whether the solution is intended to provide “new” information (i.e. information that would otherwise be unavailable to the user), to improve the performance of an existing task beyond that of a human or other non-AI-based solution, or to save time or other resources.

    Buyers should also have access to information that allows them to assess the potential benefits of the AI solution, and this should be backed up by published scientific evidence for the efficacy and cost-efficiency of the solution. How this is done will depend highly on the solution itself and the context in which it is expected to be deployed, but guidelines for this are available (National Institute for Health and Care Excellence (NICE), n.d.). Some questions to ask here would be: How much of an influence will the solution have on patient management? Will it improve diagnostic performance? Will it save time and money? Will it affect patients’ quality of life? It should also be clear to the buyer who exactly is expected to benefit from the use of this solution (Radiologists? Clinicians? Patients? The healthcare system or society as a whole?).

    As with any healthcare intervention, all AI-based solutions come with potential risks, and these should be made clear to the buyer. Some of these risks might have legal consequences, such as the potential for misdiagnosis. These risks should be quantified, and potential buyers should have a framework for dealing with them, including identifying a framework for accountability within the organizations implementing these solutions. Buyers should also ensure they clearly understand the potential negative effects on radiologists’ training and the potential disruption to radiologists’ workflows associated with the use of these solutions.

    Specifics of the AI solution’s design are also relevant to the decision on whether or not to purchase it. These include how robust the solution is to differences between vendors and scanning parameters, the circumstances under which the algorithm was trained (including potential confounding factors), and the way that performance was assessed. It should also be clear to buyers if and how potential sources of bias were accounted for during development. Because a core characteristic of AI-based solutions is their ability to continuously learn from new data, whether and how exactly this retraining is incorporated into the solution with time should also be clear to the buyer, including whether or not new regulatory approval is needed with each iteration. This also includes whether or not retraining is required, for example, due to changes in imaging equipment at the buyer’s institution.

    The main selling points of many AI-based solutions are ease of use and improved workflows. Therefore, potential buyers should carefully scrutinize how these solutions are to be integrated into existing workflows, including interoperability with PACS and electronic medical record systems. Whether the solution requires extra hardware (e.g. graphics processing units) or software (e.g. for visualizing the solution’s outputs), or whether it can readily be integrated into the buyer’s existing information technology infrastructure, influences the overall cost of the solution and is therefore also a critical consideration. In addition, the degree of manual interaction required, both under normal circumstances and for troubleshooting, should be known to the buyer. All potential users of the AI solution should be involved in the purchasing process to ensure that they are familiar with it and that it meets their professional ethical standards and suits their needs.

    From a regulatory perspective, it should be clear to the buyer whether the solution complies with medical device and data protection regulations. Has the solution been approved in the buyer’s country? If so, under which risk classification? Buyers should also consider creating data flow maps that display how the data flows in the operation of the AI-based solution, including who has access to the data.

    Finally, there are other factors to consider which are not necessarily unique to AI-based solutions and which buyers might be familiar with from purchasing other types of solutions. This includes the licensing model of the solution, how users are to be trained on using the solution, how the solution is maintained, how failures in the solution are dealt with, and whether additional costs are to be expected when scaling up the solution’s implementation (e.g. using the solution for more imaging equipment or more users). This allows the potential buyer to anticipate the current and future costs of purchasing the solution.

    The past decade of increasing interest and progress in AI-based solutions for medical imaging has set the stage for a number of trends that are likely to appear or intensify in the near future.

    Firstly, there is an increasing sentiment that, although AI holds a great deal of promise for interpretive applications (such as the detection of pathology), non-interpretive AI-based solutions might hold the most potential in terms of instilling efficiency into radiology workflows and improving patient experiences. This trend towards involving AI earlier in the patient management process is likely to extend to AI increasingly acting as a clinical decision support system to guide when and which imaging scans are performed.

    For this to happen, AI needs to be integrated into existing clinical information systems, and the specific algorithms used need to be able to handle more varied data. This will likely pave the way for the development of algorithms that are capable of integrating demographic, clinical, and laboratory patient data to make recommendations about patient management (Huang, Pareek, et al., 2020; Rockenbach, 2021). The previously mentioned natural language processing algorithms that have been used to interpret scan requests may be useful candidates for this.

    In addition, we are likely to see AI algorithms that can interpret multiple different types of imaging data from the same patient. Currently, less than 5 % of commercially available AI-based solutions in medical imaging work with more than one imaging modality (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021) despite the fact that the typical patient in a hospital receives multiple imaging scans during their stay (Shinagare et al., 2014). With this, it is also likely that more AI-based solutions will be developed that target hitherto neglected modalities such as nuclear imaging techniques and ultrasound.

    The current market for AI-based solutions in radiology is spread across a relatively large number of companies (Alexander et al., 2020). Potential users are likely to expect a streamlined integration of these products in their workflows, which can be challenging in such a fragmented market. Improved integration can be achieved in several different ways, including with vendor-neutral marketplaces or by the gradual consolidation of providers of AI-based solutions.

    With the expanding use of AI, the issue of trust between AI developers, healthcare professionals, regulators, and patients will become more relevant. It is therefore likely that efforts will intensify to take steps towards strengthening that trust. This will potentially include raising the expected standards of evidence for AI- based solutions (Aggarwal, Sounderajah, et al., 2021; X. Liu et al., 2019; van Leeuwen et al., 2021; Yusuf et al., 2020), making them more transparent through the use and improvement of interpretable AI techniques (Holzinger et al., 2017; Reyes et al., 2020; “Towards Trustable Machine Learning,” 2018), and enhancing techniques for maintaining patient data privacy (G. Kaissis et al., 2021; G. A. Kaissis et al., 2020).

    Furthermore, while most existing regulations stipulate that AI-based algorithms cannot be modified after regulatory approval, this is likely to change in the future. The potential for these algorithms to learn from data acquired after approval and adapt to changing circumstances is a major advantage of AI. Still, frameworks for doing so have thus far been lacking in the healthcare sector. However, promising ideas have recently emerged, including adapting existing hospital quality assurance and improvement frameworks to monitor AI-based algorithms’ performance and the data they are trained on and update the algorithms accordingly (Feng et al., 2022). This will likely require the development of multidisciplinary teams within hospitals consisting of clinicians, IT professionals, and biostatisticians who closely collaborate with model developers and regulators (Feng et al., 2022).

    While the obstacles discussed in previous sections might slow down the adoption of AI in radiology somewhat, the fear of AI potentially replacing radiologists is unlikely to be one of them. A recent survey from Europe showed that most radiologists did not perceive a reduction in their clinical workload after adopting AI-based solutions (European Society of Radiology (ESR), 2022), likely because, at the same time, demand for radiologists’ services has been continuously rising. Studies from around the world have shown that radiology professionals, particularly those with AI exposure and experience, are generally optimistic about the role of AI in their practice (Y. Chen et al., 2021; Huisman et al., 2021; Ooi et al., 2021; Santomartino & Yi, 2022; Scott et al., 2021).

    AI has shown promise in positively impacting virtually every facet of a radiology department’s work - from scheduling and protocolling patient scans to interpreting images and reaching diagnoses. Promising research on AI-based tools in radiology has not yet been widely translated to adoption in routine practice, however, because of a number of complex, partially intertwined issues. Potential solutions exist for many of these challenges, but many of these solutions require further refinement and testing. In the meantime, guidelines are emerging to help potential users of AI-based solutions in radiology navigate the increasing number of commercial products. This encourages their adoption in real-world scenarios, thus allowing their true potential to be uncovered, as well as their weaknesses to be identified and addressed in a safe and effective way. As these incremental improvements are made, these tools will likely evolve to handle more varied data, become integrated into consolidated workflows, become more transparent, and ultimately more useful for increasing efficiency and improving patient care.

    AAMC Report Reinforces Mounting Physician Shortage. (2021). AAMC. https://www.aamc.org/news-insights/press-releases/aamcreport-reinforces-mounting-physician-shortage, accessed on 26.09.2024

    Adams, S. J., Babyn, P. S., & Danilkewich, A. (2016). Toward a comprehensive management strategy for incidental findings in imaging. Canadian Family Physician Medecin de Famille Canadien, 62(7), 541–543.

    Adams, S. J., Madtes, D. K., Burbridge, B., Johnston, J., Goldberg, I. G., Siegel, E. L., Babyn, P., Nair, V. S., & Calhoun, M. E. (2023). Clinical Impact and Generalizability of a Computer-Assisted Diagnostic Tool to Risk-Stratify Lung Nodules With CT. Journal of the American College of Radiology: JACR, 20(2), 232–242. https://doi.org/10.1016/j.jacr.2022.08.006

    Aggarwal, R., Farag, S., Martin, G., Ashrafian, H., & Darzi, A. (2021). Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. Journal of Medical Internet Research, 23(8), e26162. https://doi.org/10.2196/26162

    Aggarwal, R., Sounderajah, V., Martin, G., Ting, D. S. W., Karthikesalingam, A., King, D., Ashrafian, H., & Darzi, A. (2021). Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digital Medicine, 4(1), 65. https://doi.org/10.1038/s41746-021-00438-z

    Ali, N., Lifford, K. J., Carter, B., McRonald, F., Yadegarfar, G., Baldwin, D. R., Weller, D., Hansell, D. M., Duffy, S. W., Field, J. K., & Brain, K. (2015). Barriers to uptake among high-risk individuals declining participation in lung cancer screening: a mixed methods analysis of the UK Lung Cancer Screening (UKLS) trial. BMJ Open, 5(7), e008254. https://doi.org/10.1136/bmjopen-2015-008254

    Al Mohammad, B., Hillis, S. L., Reed, W., Alakhras, M., & Brennan, P. C. (2019). Radiologist performance in the detection of lung cancer using CT. Clinical Radiology, 74(1), 67–75. https://doi.org/10.1016/j.crad.2018.10.008

    Ardila, D., Kiraly, A. P., Bharadwaj, S., Choi, B., Reicher, J. J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., Naidich, D. P., & Shetty, S. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954–961. https://doi.org/10.1038/s41591-019-0447-x

    Armato, S. G., 3rd, Roberts, R. Y., Kocherginsky, M., Aberle, D. R., Kazerooni, E. A., Macmahon, H., van Beek, E. J. R., Yankelevitz, D., McLennan, G., McNitt-Gray, M. F., Meyer, C. R., Reeves, A. P., Caligiuri, P., Quint, L. E., Sundaram, B., Croft, B. Y., & Clarke, L. P. (2009). Assessment of radiologist performance in the detection of lung nodules: dependence on the definition of “truth.” Academic Radiology, 16(1), 28–38. https://doi.org/10.1016/j.acra.2008.05.022

    Austin, J. H., Romney, B. M., & Goldsmith, L. S. (1992). Missed bronchogenic carcinoma: radiographic findings in 27 patients with a potentially resectable lesion evident in retrospect. Radiology, 182(1), 115–122. https://doi.org/10.1148/radiology.182.1.1727272

    Bankier, A. A., MacMahon, H., Goo, J. M., Rubin, G. D., Schaefer-Prokop, C. M., & Naidich, D. P. (2017). Recommendations for Measuring Pulmonary Nodules at CT: A Statement from the Fleischner Society. Radiology, 285(2), 584–600. https://doi.org/10.1148/radiol.2017162894

    Black, W. C., Gareen, I. F., Soneji, S. S., Sicks, J. D., Keeler, E. B., Aberle, D. R., Naeim, A., Church, T. R., Silvestri, G. A., Gorelick, J., Gatsonis, C., & National Lung Screening Trial Research Team. (2014). Cost-effectiveness of CT screening in the National Lung Screening Trial. The New England Journal of Medicine, 371(19), 1793–1802. https://doi.org/10.1056/NEJMoa1312547

    Burzic, A., O’Dowd, E. L., & Baldwin, D. R. (2022). The Future of Lung Cancer Screening: Current Challenges and Research Priorities. Cancer Management and Research, 14, 637–645. https://doi.org/10.2147/CMAR.S293877

    Callister, M. E. J., Baldwin, D. R., Akram, A. R., Barnard, S., Cane, P., Draffan, J., Franks, K., Gleeson, F., Graham, R., Malhotra, P., Prokop, M., Rodger, K., Subesinghe, M., Waller, D., Woolhouse, I., British Thoracic Society Pulmonary Nodule Guideline Development Group, & British Thoracic Society Standards of Care Committee. (2015). British Thoracic Society guidelines for the investigation and management of pulmonary nodules. Thorax, 70 Suppl 2, ii1–ii54. https://doi.org/10.1136/thoraxjnl-2015-207168

    Cassidy, A., Myles, J. P., van Tongeren, M., Page, R. D., Liloglou, T., Duffy, S. W., & Field, J. K. (2008). The LLP risk model: an individual risk prediction model for lung cancer. British Journal of Cancer, 98(2), 270–276. https://doi.org/10.1038/sj.bjc.6604158

    Cha, M. J., Chung, M. J., Lee, J. H., & Lee, K. S. (2019). Performance of Deep Learning Model in Detecting Operable Lung Cancer With Chest Radiographs. Journal of Thoracic Imaging, 34(2), 86–91. https://doi.org/10.1097/RTI.0000000000000388

    Ciompi, F., Chung, K., van Riel, S. J., Setio, A. A. A., Gerke, P. K., Jacobs, C., Scholten, E. T., Schaefer-Prokop, C., Wille, M. M. W., Marchianò, A., Pastorino, U., Prokop, M., & van Ginneken, B. (2017). Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Scientific Reports, 7, 46479. https://doi.org/10.1038/srep46479

    Cui, X., Zheng, S., Heuvelmans, M. A., Du, Y., Sidorenkov, G., Fan, S., Li, Y., Xie, Y., Zhu, Z., Dorrius, M. D., Zhao, Y., Veldhuis, R. N. J., de Bock, G. H., Oudkerk, M., van Ooijen, P. M. A., Vliegenthart, R., & Ye, Z. (2022). Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. European Journal of Radiology, 146, 110068. https://doi.org/10.1016/j.ejrad.2021.110068

    de Koning, H. J., van der Aalst, C. M., de Jong, P. A., Scholten, E. T., Nackaerts, K., Heuvelmans, M. A., Lammers, J.-W. J., Weenink, C., Yousaf-Khan, U., Horeweg, N., van ’t Westeinde, S., Prokop, M., Mali, W. P., Mohamed Hoesein, F. A. A., van Ooijen, P. M. A., Aerts, J. G. J. V., den Bakker, M. A., Thunnissen, E., Verschakelen, J., … Oudkerk, M. (2020). Reduced Lung-Cancer Mortality with Volume CT Screening in a Randomized Trial. The New England Journal of Medicine, 382(6), 503–513. https://doi.org/10.1056/NEJMoa1911793

    de Margerie-Mellon, C., & Chassagnon, G. (2023). Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagnostic and Interventional Imaging, 104(1), 11–17. https://doi.org/10.1016/j.diii.2022.11.007

    Detterbeck, F. C., Lewis, S. Z., Diekemper, R., Addrizzo-Harris, D., & Alberts, W. M. (2013). Executive Summary: Diagnosis and management of lung cancer, 3rd ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest, 143(5 Suppl), 7S – 37S. https://doi.org/10.1378/chest.12-2377

    Devaraj, A., van Ginneken, B., Nair, A., & Baldwin, D. (2017). Use of Volumetry for Lung Nodule Management: Theory and Practice. Radiology, 284(3), 630–644. https://doi.org/10.1148/radiol.2017151022

    Dong, X., Xu, S., Liu, Y., Wang, A., Saripan, M. I., Li, L., Zhang, X., & Lu, L. (2020). Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation. Cancer Imaging: The Official Publication of the International Cancer Imaging Society, 20(1), 53. https://doi.org/10.1186/s40644-020-00331-0

    Field, J. K., Duffy, S. W., Baldwin, D. R., Whynes, D. K., Devaraj, A., Brain, K. E., Eisen, T., Gosney, J., Green, B. A., Holemans, J. A., Kavanagh, T., Kerr, K. M., Ledson, M., Lifford, K. J., McRonald, F. E., Nair, A., Page, R. D., Parmar, M. K. B., Rassl, D. M., … Hansell, D. M. (2016). UK Lung Cancer RCT Pilot Screening Trial: baseline findings from the screening arm provide evidence for the potential implementation of lung cancer screening. Thorax, 71(2), 161–170. https://doi.org/10.1136/thoraxjnl-2015-207140

    Gierada, D. S., Pinsky, P. F., Duan, F., Garg, K., Hart, E. M., Kazerooni, E. A., Nath, H., Watts, J. R., Jr, & Aberle, D. R. (2017). Interval lung cancer after a negative CT screening examination: CT findings and outcomes in National Lung Screening Trial participants. European Radiology, 27(8), 3249–3256. https://doi.org/10.1007/s00330-016-4705-8

    Gu, D., Liu, G., & Xue, Z. (2021). On the performance of lung nodule detection, segmentation and classification. Computerized Medical Imaging and Graphics: The Official Journal of the Computerized Medical Imaging Society, 89, 101886. https://doi.org/10.1016/j.compmedimag.2021.101886

    Hamilton, W., Peters, T. J., Round, A., & Sharp, D. (2005). What are the clinical features of lung cancer before the diagnosis is made? A population based case-control study. Thorax, 60(12), 1059–1065. https://doi.org/10.1136/thx.2005.045880

    Hanna, T. N., Lamoureux, C., Krupinski, E. A., Weber, S., & Johnson, J.-O. (2018). Effect of Shift, Schedule, and Volume on Interpretive Accuracy: A Retrospective Analysis of 2.9 Million Radiologic Examinations. Radiology, 287(1), 205–212. https://doi.org/10.1148/radiol.2017170555

    Hata, A., Yanagawa, M., Yoshida, Y., Miyata, T., Tsubamoto, M., Honda, O., & Tomiyama, N. (2020). Combination of Deep Learning-Based Denoising and Iterative Reconstruction for Ultra-Low-Dose CT of the Chest: Image Quality and Lung-RADS Evaluation. AJR. American Journal of Roentgenology, 215(6), 1321–1328. https://doi.org/10.2214/AJR.19.22680

    Henschke, C. I., McCauley, D. I., Yankelevitz, D. F., Naidich, D. P., McGuinness, G., Miettinen, O. S., Libby, D. M., Pasmantier, M. W., Koizumi, J., Altorki, N. K., & Smith, J. P. (1999). Early Lung Cancer Action Project: overall design and findings from baseline screening. The Lancet, 354(9173), 99–105. https://doi.org/10.1016/S0140-6736(99)06093-6

    Henschke, C. I., Yankelevitz, D. F., Mirtcheva, R., McGuinness, G., McCauley, D., & Miettinen, O. S. (2002). CT Screening for Lung Cancer. American Journal of Roentgenology, 178(5), 1053–1057. https://doi.org/10.2214/ajr.178.5.1781053

    Homayounieh, F., Digumarthy, S., Ebrahimian, S., Rueckel, J., Hoppe, B. F., Sabel, B. O., Conjeti, S., Ridder, K., Sistermanns, M., Wang, L., Preuhs, A., Ghesu, F., Mansoor, A., Moghbel, M., Botwin, A., Singh, R., Cartmell, S., Patti, J., Huemmer, C., … Kalra, M. (2021). An Artificial Intelligence-Based Chest X-ray Model on Human Nodule Detection Accuracy From a Multicenter Study. JAMA Network Open, 4(12), e2141096. https://doi.org/10.1001/jamanetworkopen.2021.41096

    Hussein, S., Cao, K., Song, Q., & Bagci, U. (2017). Risk Stratification of Lung Nodules Using 3D CNN-Based Multi-task Learning. Information Processing in Medical Imaging, 249–260. https://doi.org/10.1007/978-3-319-59050-9_20

    Hwang, E. J., Goo, J. M., Kim, H. Y., Yi, J., Yoon, S. H., & Kim, Y. (2021). Implementation of the cloud-based computerized interpretation system in a nationwide lung cancer screening with low-dose CT: comparison with the conventional reading system. European Radiology, 31(1), 475–485. https://doi.org/10.1007/s00330-020-07151-7

    Jacobs, C., Schreuder, A., van Riel, S. J., Scholten, E. T., Wittenberg, R., Wille, M. M. W., de Hoop, B., Sprengers, R., Mets, O. M., Geurts, B., Prokop, M., Schaefer-Prokop, C., & van Ginneken, B. (2021). Assisted versus Manual Interpretation of Low-Dose CT Scans for Lung Cancer Screening: Impact on Lung-RADS Agreement. Radiology. Imaging Cancer, 3(5), e200160. https://doi.org/10.1148/rycan.2021200160

    Jiang, B., Li, N., Shi, X., Zhang, S., Li, J., de Bock, G. H., Vliegenthart, R., & Xie, X. (2022). Deep Learning Reconstruction Shows Better Lung Nodule Detection for Ultra-Low-Dose Chest CT. Radiology, 303(1), 202–212. https://doi.org/10.1148/radiol.210551

    Jones, C. M., Buchlak, Q. D., Oakden-Rayner, L., Milne, M., Seah, J., Esmaili, N., & Hachey, B. (2021). Chest radiographs and machine learning - Past, present and future. Journal of Medical Imaging and Radiation Oncology, 65(5), 538–544. https://doi.org/10.1111/1754-9485.13274

    Kinsinger, L. S., Anderson, C., Kim, J., Larson, M., Chan, S. H., King, H. A., Rice, K. L., Slatore, C. G., Tanner, N. T., Pittman, K., Monte, R. J., McNeil, R. B., Grubber, J. M., Kelley, M. J., Provenzale, D., Datta, S. K., Sperber, N. S., Barnes, L. K., Abbott, D. H., … Jackson, G. L. (2017). Implementation of Lung Cancer Screening in the Veterans Health Administration. JAMA Internal Medicine, 177(3), 399–406. https://doi.org/10.1001/jamainternmed.2016.9022

    LCS Project. (n.d.). https://www.myesti.org/lungcancerscreeningcertificationproject/, accessed on 26.09.2024

    Leader, J. K., Warfel, T. E., Fuhrman, C. R., Golla, S. K., Weissfeld, J. L., Avila, R. S., Turner, W. D., & Zheng, B. (2005). Pulmonary nodule detection with low-dose CT of the lung: agreement among radiologists. AJR. American Journal of Roentgenology, 185(4), 973–978. https://doi.org/10.2214/AJR.04.1225

    Li, J., Chung, S., Wei, E. K., & Luft, H. S. (2018). New recommendation and coverage of low-dose computed tomography for lung cancer screening: uptake has increased but is still low. BMC Health Services Research, 18(1), 525. https://doi.org/10.1186/s12913-018-3338-9

    Li, L., Liu, Z., Huang, H., Lin, M., & Luo, D. (2019). Evaluating the performance of a deep learning-based computer-aided diagnosis (DL-CAD) system for detecting and characterizing lung nodules: Comparison with the performance of double reading by radiologists. Thoracic Cancer, 10(2), 183–192. https://doi.org/10.1111/1759-7714.12931

    Li, X., Shen, L., Xie, X., Huang, S., Xie, Z., Hong, X., & Yu, J. (2020). Multi-resolution convolutional networks for chest X-ray radiograph based lung nodule detection. Artificial Intelligence in Medicine, 103, 101744. https://doi.org/10.1016/j.artmed.2019.101744

    Lu, M. T., Raghu, V. K., Mayrhofer, T., Aerts, H. J. W. L., & Hoffmann, U. (2020). Deep Learning Using Chest Radiographs to Identify High-Risk Smokers for Lung Cancer Screening Computed Tomography: Development and Validation of a Prediction Model. Annals of Internal Medicine, 173(9), 704–713. https://doi.org/10.7326/M20-1868

    Lung-RADS. (n.d.). https://www.acr.org/Clinical-Resources/Reporting-and-Data-Systems/Lung-Rads, accessed on 26.09.2024

    Malhotra, J., Malvezzi, M., Negri, E., La Vecchia, C., & Boffetta, P. (2016). Risk factors for lung cancer worldwide. The European Respiratory Journal: Official Journal of the European Society for Clinical Respiratory Physiology, 48(3), 889–902. https://doi.org/10.1183/13993003.00359-2016

    Marcus, P. M., Bergstralh, E. J., Fagerstrom, R. M., Williams, D. E., Fontana, R., Taylor, W. F., & Prorok, P. C. (2000). Lung cancer mortality in the Mayo Lung Project: impact of extended follow-up. Journal of the National Cancer Institute, 92(16), 1308–1316. https://doi.org/10.1093/jnci/92.16.1308

    Mendoza, J., & Pedrini, H. (2020). Detection and classification of lung nodules in chest X‐ray images using deep convolutional neural networks. Computational Intelligence. An International Journal, 36(2), 370–401. https://doi.org/10.1111/coin.12241

    Mikhael, P. G., Wohlwend, J., Yala, A., Karstens, L., Xiang, J., Takigami, A. K., Bourgouin, P. P., Chan, P., Mrah, S., Amayri, W., Juan, Y.-H., Yang, C.-T., Wan, Y.-L., Lin, G., Sequist, L. V., Fintelmann, F. J., & Barzilay, R. (2023). Sybil: A Validated Deep Learning Model to Predict Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography. Journal of Clinical Oncology: Official Journal of the American Society of Clinical Oncology, 41(12), 2191–2200. https://doi.org/10.1200/JCO.22.01345

    Morgan, L., Choi, H., Reid, M., Khawaja, A., & Mazzone, P. J. (2017). Frequency of Incidental Findings and Subsequent Evaluation in Low-Dose Computed Tomographic Scans for Lung Cancer Screening. Annals of the American Thoracic Society, 14(9), 1450–1456. https://doi.org/10.1513/AnnalsATS.201612-1023OC

    Murchison, J. T., Ritchie, G., Senyszak, D., Nijwening, J. H., van Veenendaal, G., Wakkie, J., & van Beek, E. J. R. (2022). Validation of a deep learning computer aided system for CT based lung nodule detection, classification, and growth rate estimation in a routine clinical population. PloS One, 17(5), e0266799. https://doi.org/10.1371/journal.pone.0266799

    Nam, J. G., Ahn, C., Choi, H., Hong, W., Park, J., Kim, J. H., & Goo, J. M. (2021). Image quality of ultralow-dose chest CT using deep learning techniques: potential superiority of vendor-agnostic postprocessing over vendor-specific techniques. European Radiology, 31(7), 5139–5147. https://doi.org/10.1007/s00330-020-07537-7

    Nam, J. G., Park, S., Hwang, E. J., Lee, J. H., Jin, K.-N., Lim, K. Y., Vu, T. H., Sohn, J. H., Hwang, S., Goo, J. M., & Park, C. M. (2019). Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology, 290(1), 218–228. https://doi.org/10.1148/radiol.2018180237

    National Institute for Health and Care Excellence (NICE). (n.d.). Evidence standards framework for digital health technologies. https://www.nice.org.uk/corporate/ecd7, accessed on 26.09.2024

    National Lung Screening Trial Research Team, Aberle, D. R., Adams, A. M., Berg, C. D., Black, W. C., Clapp, J. D., Fagerstrom, R. M., Gareen, I. F., Gatsonis, C., Marcus, P. M., & Sicks, J. D. (2011). Reduced lung-cancer mortality with low-dose computed tomographic screening. The New England Journal of Medicine, 365(5), 395–409. https://doi.org/10.1056/NEJMoa1102873

    Park, H., Ham, S.-Y., Kim, H.-Y., Kwag, H. J., Lee, S., Park, G., Kim, S., Park, M., Sung, J.-K., & Jung, K.-H. (2019). A deep learning-based CAD that can reduce false negative reports: A preliminary study in health screening center. RSNA 2019. https://archive.rsna.org/2019/19017034.html

    Pehrson, L. M., Nielsen, M. B., & Ammitzbøl Lauridsen, C. (2019). Automatic Pulmonary Nodule Detection Applying Deep Learning or Machine Learning Algorithms to the LIDC-IDRI Database: A Systematic Review. Diagnostics (Basel, Switzerland), 9(1). https://doi.org/10.3390/diagnostics9010029

    Qi, L.-L., Wang, J.-W., Yang, L., Huang, Y., Zhao, S.-J., Tang, W., Jin, Y.-J., Zhang, Z.-W., Zhou, Z., Yu, Y.-Z., Wang, Y.-Z., & Wu, N. (2021). Natural history of pathologically confirmed pulmonary subsolid nodules with deep learning-assisted nodule segmentation. European Radiology, 31(6), 3884–3897. https://doi.org/10.1007/s00330-020-07450-z

    Qi, L.-L., Wu, B.-T., Tang, W., Zhou, L.-N., Huang, Y., Zhao, S.-J., Liu, L., Li, M., Zhang, L., Feng, S.-C., Hou, D.-H., Zhou, Z., Li, X.-L., Wang, Y.-Z., Wu, N., & Wang, J.-W. (2020). Long-term follow-up of persistent pulmonary pure ground-glass nodules with deep learning-assisted nodule segmentation. European Radiology, 30(2), 744–755. https://doi.org/10.1007/s00330-019-06344-z

    Raghu, V. K., Walia, A. S., Zinzuwadia, A. N., Goiffon, R. J., Shepard, J.-A. O., Aerts, H. J. W. L., Lennes, I. T., & Lu, M. T. (2022). Validation of a Deep Learning-Based Model to Predict Lung Cancer Risk Using Chest Radiographs and Electronic Medical Record Data. JAMA Network Open, 5(12), e2248793. https://doi.org/10.1001/jamanetworkopen.2022.48793

    Ravin, C. E., & Chotas, H. G. (1997). Chest radiography. Radiology, 204(3), 593–600. https://doi.org/10.1148/radiology.204.3.9280231

    Revel, M.-P., Bissery, A., Bienvenu, M., Aycard, L., Lefort, C., & Frija, G. (2004). Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable? Radiology, 231(2), 453–458. https://doi.org/10.1148/radiol.2312030167

    Röhrich, S., Heidinger, B. H., Prayer, F., Weber, M., Krenn, M., Zhang, R., Sufana, J., Scheithe, J., Kanbur, I., Korajac, A., Pötsch, N., Raudner, M., Al-Mukhtar, A., Fueger, B. J., Milos, R.-I., Scharitzer, M., Langs, G., & Prosch, H. (2023). Impact of a content-based image retrieval system on the interpretation of chest CTs of patients with diffuse parenchymal lung disease. European Radiology, 33(1), 360–367. https://doi.org/10.1007/s00330-022-08973-3

    Sands, J., Tammemägi, M. C., Couraud, S., Baldwin, D. R., Borondy-Kitts, A., Yankelevitz, D., Lewis, J., Grannis, F., Kauczor, H.-U., von Stackelberg, O., Sequist, L., Pastorino, U., & McKee, B. (2021). Lung Screening Benefits and Challenges: A Review of The Data and Outline for Implementation. Journal of Thoracic Oncology: Official Publication of the International Association for the Study of Lung Cancer, 16(1), 37–53. https://doi.org/10.1016/j.jtho.2020.10.127

    Setio, A. A. A., Traverso, A., de Bel, T., Berens, M. S. N., van den Bogaard, C., Cerello, P., Chen, H., Dou, Q., Fantacci, M. E., Geurts, B., Gugten, R. van der, Heng, P. A., Jansen, B., de Kaste, M. M. J., Kotov, V., Lin, J. Y.-H., Manders, J. T. M. C., Sóñora-Mengana, A., García-Naranjo, J. C., … Jacobs, C. (2017). Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Medical Image Analysis, 42, 1–13. https://doi.org/10.1016/j.media.2017.06.015

    Siegel, D. A., Fedewa, S. A., Henley, S. J., Pollack, L. A., & Jemal, A. (2021). Proportion of Never Smokers Among Men and Women With Lung Cancer in 7 US States. JAMA Oncology, 7(2), 302–304. https://doi.org/10.1001/jamaoncol.2020.6362

    Singh, R., Kalra, M. K., Homayounieh, F., Nitiwarangkul, C., McDermott, S., Little, B. P., Lennes, I. T., Shepard, J.-A. O., & Digumarthy, S. R. (2021). Artificial intelligence-based vessel suppression for detection of sub-solid nodules in lung cancer screening computed tomography. Quantitative Imaging in Medicine and Surgery, 11(4), 1134–1143. https://doi.org/10.21037/qims-20-630

    Smieliauskas, F., MacMahon, H., Salgia, R., & Shih, Y.-C. T. (2014). Geographic variation in radiologist capacity and widespread implementation of lung cancer CT screening. Journal of Medical Screening, 21(4), 207–215. https://doi.org/10.1177/0969141314548055

    Sun, S., Schiller, J. H., & Gazdar, A. F. (2007). Lung cancer in never smokers--a different disease. Nature Reviews. Cancer, 7(10), 778–790. https://doi.org/10.1038/nrc2190

    Sung, H., Ferlay, J., Siegel, R. L., Laversanne, M., Soerjomataram, I., Jemal, A., & Bray, F. (2021). Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA: A Cancer Journal for Clinicians, 71(3), 209–249. https://doi.org/10.3322/caac.21660

    Tammemägi, M. C., Katki, H. A., Hocking, W. G., Church, T. R., Caporaso, N., Kvale, P. A., Chaturvedi, A. K., Silvestri, G. A., Riley, T. L., Commins, J., & Berg, C. D. (2013). Selection criteria for lung-cancer screening. The New England Journal of Medicine, 368(8), 728–736. https://doi.org/10.1056/NEJMoa1211776

    Thai, A. A., Solomon, B. J., Sequist, L. V., Gainor, J. F., & Heist, R. S. (2021). Lung cancer. The Lancet, 398(10299), 535–554. https://doi.org/10.1016/S0140-6736(21)00312-3

    The Royal College of Radiologists. (2022). Clinical Radiology Workforce Census.

    Toumazis, I., de Nijs, K., Cao, P., Bastani, M., Munshi, V., Ten Haaf, K., Jeon, J., Gazelle, G. S., Feuer, E. J., de Koning, H. J., Meza, R., Kong, C. Y., Han, S. S., & Plevritis, S. K. (2021). Cost-effectiveness Evaluation of the 2021 US Preventive Services Task Force Recommendation for Lung Cancer Screening. JAMA Oncology, 7(12), 1833–1842. https://doi.org/10.1001/jamaoncol.2021.4942

    van Riel, S. J., Sánchez, C. I., Bankier, A. A., Naidich, D. P., Verschakelen, J., Scholten, E. T., de Jong, P. A., Jacobs, C., van Rikxoort, E., Peters-Bax, L., Snoeren, M., Prokop, M., van Ginneken, B., & Schaefer-Prokop, C. (2015). Observer Variability for Classification of Pulmonary Nodules on Low-Dose CT Images and Its Effect on Nodule Management. Radiology, 277(3), 863–871. https://doi.org/10.1148/radiol.2015142700

    Wang, C., Li, J., Zhang, Q., Wu, J., Xiao, Y., Song, L., Gong, H., & Li, Y. (2021). The landscape of immune checkpoint inhibitor therapy in advanced lung cancer. BMC Cancer, 21(1), 968. https://doi.org/10.1186/s12885-021-08662-2

    Wang, Y., Midthun, D. E., Wampfler, J. A., Deng, B., Stoddard, S. M., Zhang, S., & Yang, P. (2015). Trends in the proportion of patients with lung cancer meeting screening criteria. JAMA: The Journal of the American Medical Association, 313(8), 853–855. https://doi.org/10.1001/jama.2015.413

    Wood, D. E., Kazerooni, E. A., Baum, S. L., Eapen, G. A., Ettinger, D. S., Hou, L., Jackman, D. M., Klippenstein, D., Kumar, R., Lackner, R. P., Leard, L. E., Lennes, I. T., Leung, A. N. C., Makani, S. S., Massion, P. P., Mazzone, P., Merritt, R. E., Meyers, B. F., Midthun, D. E., … Hughes, M. (2018). Lung Cancer Screening, Version 3.2018, NCCN Clinical Practice Guidelines in Oncology. Journal of the National Comprehensive Cancer Network: JNCCN, 16(4), 412–441. https://doi.org/10.6004/jnccn.2018.0020

    Wu, B., Zhou, Z., Wang, J., & Wang, Y. (2018). Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1109–1113. https://doi.org/10.1109/ISBI.2018.8363765

    Wyker, A., & Henderson, W. W. (2022). Solitary Pulmonary Nodule. StatPearls Publishing.

    Yoo, H., Lee, S. H., Arru, C. D., Doda Khera, R., Singh, R., Siebert, S., Kim, D., Lee, Y., Park, J. H., Eom, H. J., Digumarthy, S. R., & Kalra, M. K. (2021). AI-based improvement in lung cancer detection on chest radiographs: results of a multi-reader study in NLST dataset. European Radiology, 31(12), 9664–9674. https://doi.org/10.1007/s00330-021-08074-7

    Zahnd, W. E., & Eberth, J. M. (2019). Lung Cancer Screening Utilization: A Behavioral Risk Factor Surveillance System Analysis. American Journal of Preventive Medicine, 57(2), 250–255. https://doi.org/10.1016/j.amepre.2019.03.015