
The Complete Guide to Artificial Intelligence in Radiology

Artificial intelligence (AI) plays a growing role in our lives and has shown promise in addressing some of the greatest societal challenges we face today and will face in the future. The healthcare sector, although notoriously complex and resistant to disruption, potentially has much to gain from the use of AI. With a well-established history of leading digital transformation in healthcare and a pressing need to improve efficiency, radiology has been at the forefront of harnessing AI's potential.

This book explains how and why AI can address the challenges faced by radiology departments, provides an overview of the fundamental concepts related to AI, and describes some of the most promising use cases for AI in radiology. In addition, the major challenges associated with adopting AI in routine radiological practice are discussed. The book also covers some crucial points that radiology departments should keep in mind when deciding to purchase AI-based solutions. Finally, it offers an outlook on the new and evolving aspects of AI in radiology to expect in the near future.

The case for artificial intelligence in radiology

The healthcare sector has seen a number of trends over recent decades that demand a change in the way certain things are done. These trends are particularly pronounced in radiology, where the diagnostic quality of imaging examinations has improved considerably while examination times have decreased. As a result, the amount and complexity of acquired medical imaging data have increased dramatically over recent decades (Smith-Bindman et al., 2019; Winder et al., 2021) and are expected to continue to rise (Tsao, 2020). This problem is compounded by a widespread global shortage of radiologists (AAMC Report Reinforces Mounting Physician Shortage, 2021; Clinical Radiology UK Workforce Census 2019 Report, 2019). Healthcare workers, including radiologists, face a growing workload (Bruls & Kwee, 2020; Levin et al., 2017) that contributes to burnout and medical errors (Harry et al., 2021). Because radiology is an essential service provider for virtually every other hospital department, staff shortages in radiology have significant knock-on effects throughout the hospital and society as a whole (England & Improvement, 2019; Sutherland et al., n.d.). With an aging global population and a growing burden of chronic disease, these problems are expected to pose even greater challenges to the healthcare sector in the future.

AI-based medical imaging solutions have the potential to mitigate these challenges for several reasons. They are particularly well suited to handling large and complex datasets (Alzubaidi et al., 2021). Moreover, they are well suited to automating some of the tasks traditionally performed by radiologists and radiographers, potentially freeing up time and making workflows within radiology departments more efficient (Allen et al., 2021; Baltruschat et al., 2021; Kalra et al., 2020; O'Neill et al., 2021; van Leeuwen et al., 2021; Wong et al., 2019). AI is also capable of detecting complex patterns in data that humans cannot necessarily find or quantify (Dance, 2021; Korteling et al., 2021; Kühl et al., 2020).

Fundamentals of artificial intelligence

The term "artificial intelligence" refers to the use of computer systems to solve specific problems in a way that simulates human reasoning. One of the fundamental characteristics of AI is that, like humans, these systems can adapt their solutions to changing circumstances. Note that, although these systems are intended to fundamentally mimic how humans think, their capacity to do so (for example, in terms of the amount of data they can handle at once, the nature and number of patterns they can find in the data, and the speed at which they do so) often exceeds that of humans.

AI solutions come in the form of computer algorithms, which are pieces of computer code representing instructions to be followed to solve a specific problem. In its most basic form, an algorithm takes data as input, performs computations on that data, and returns an output.

An AI algorithm can be explicitly programmed to solve a specific task, analogous to a step-by-step recipe for baking a cake. Alternatively, the algorithm can be programmed to search for patterns in the data in order to solve the problem. These types of algorithms are called machine learning algorithms. Thus, all machine learning is AI, but not all AI is machine learning. The patterns in the data that the algorithm can be explicitly programmed to look for, or that it can "discover" on its own, are called features. A key characteristic of machine learning is that these algorithms learn from the data itself, and their performance improves as they are given more data.

One of the most common uses of machine learning is classification: assigning a particular label to a piece of data. For example, a machine learning algorithm can be used to determine whether a photo (the input) shows a dog or a cat (the label). The algorithm can learn to do this in a supervised or unsupervised manner.

Supervised learning

In supervised learning, the machine learning algorithm is given data that has been labeled by humans; in this example, photos of dogs and cats that have been labeled as such. The process then goes through the following phases:

1. Training phase: The algorithm learns the features associated with dogs and cats using the aforementioned data (the training data).
2. Testing phase: The algorithm is then given a new set of photos (the test data), it labels them, and the algorithm's performance on these data is evaluated.

In some cases, there is a phase between training and testing called the validation phase. In this phase, the algorithm is given a new set of photos (not included in the training or test data), its performance on these data is evaluated, and the model is fine-tuned and retrained on the training data. This is repeated until a predefined performance-based criterion is met, and the algorithm then moves on to the testing phase.
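
As an illustration only, the following minimal sketch (assuming Python with scikit-learn, and using a built-in toy dataset as a stand-in for labeled photos) walks through the training, validation, and testing phases described above:

```python
# Minimal supervised learning sketch (assumes Python with scikit-learn installed).
# A built-in toy dataset stands in for labeled images such as dog/cat photos.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Split the labeled data into training, validation, and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Training phase: the model learns feature/label associations from the training data.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Validation phase: performance on held-out data guides tuning and retraining.
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Testing phase: the final, untouched set gives the reported performance estimate.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```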

Unsupervised learning

In unsupervised learning, the algorithm identifies features in the input data that allow it to assign classes to individual data points without being explicitly told what those classes are or should be. Such algorithms can identify patterns or group data points without human intervention and include clustering and dimensionality reduction algorithms. Not all machine learning algorithms perform classification. Some are used to predict a continuous measure (for example, the temperature four weeks from now) rather than a discrete label (for example, cats vs. dogs). These are known as regression algorithms.
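
A hedged illustration of unsupervised learning (assuming Python with NumPy and scikit-learn): the sketch below groups unlabeled points into clusters without ever being told what the groups are.

```python
# Minimal unsupervised learning sketch (assumes Python with NumPy and scikit-learn).
# K-means groups unlabeled points into clusters without any human-provided labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "populations" of 2D points; the algorithm is never told which is which.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])      # cluster assigned to the first ten points
print(kmeans.cluster_centers_)  # centers of the discovered clusters
```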

Neural networks and deep learning

A neural network consists of an input layer and an output layer, which are themselves made up of nodes. In simple neural networks, hand-crafted features derived from a dataset are fed into the input layer, which performs certain computations whose results are relayed to the output layer. In deep learning, several "hidden" layers exist between the input and output layers. Each node in the hidden layers performs computations using certain weights and relays the output to the next hidden layer until the output layer is reached.

Initially, the weights are assigned random values and the accuracy of the algorithm is computed. The weight values are then iteratively adjusted until a set of values that maximizes accuracy is found. This iterative adjustment of the weight values is typically done by working backward from the output layer to the input layer, a technique called backpropagation. This entire process is carried out on the training data.
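
The following toy sketch (assuming Python with NumPy; the XOR task and network size are chosen purely for illustration) shows the forward pass, backpropagation of the error, and iterative weight updates on the training data:

```python
# Tiny neural network trained with backpropagation (assumes Python with NumPy).
# One hidden layer with sigmoid activations, learning the XOR function as a toy task.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights start as random values.
W1 = rng.normal(size=(2, 8))  # input layer  -> hidden layer
W2 = rng.normal(size=(8, 1))  # hidden layer -> output layer

learning_rate = 1.0
for _ in range(10000):
    # Forward pass: input layer -> hidden layer -> output layer.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass (backpropagation): the error is propagated from the output
    # layer back toward the input layer, and the weights are adjusted.
    error = output - y
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    W1 -= learning_rate * X.T @ grad_hidden

print(np.round(output, 2))  # should approach [[0], [1], [1], [0]]
```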

Performance evaluation

Understanding how the performance of AI algorithms is evaluated is essential for interpreting the AI literature. There are several performance metrics for assessing how well a model performs certain tasks. No single metric is perfect on its own, which is why a combination of several metrics provides a more complete picture of a model's performance.

In regression, the most commonly used metrics include the following (a short computation sketch follows the list):

  • Mean absolute error (MAE): the average absolute difference between the predicted values and the ground truth.
  • Root mean squared error (RMSE): the differences between the predicted values and the ground truth are squared and then averaged over the sample, and the square root of that average is taken. Unlike the MAE, the RMSE therefore gives greater weight to larger differences.
  • R²: the proportion of the total variance in the ground truth explained by the variance in the predicted values. It ranges from 0 to 1.
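
A short computation sketch (assuming Python with NumPy and scikit-learn; the values are made up for illustration):

```python
# Regression metrics sketch (assumes Python with NumPy and scikit-learn).
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([2.0, 4.0, 6.0, 8.0])   # ground truth
y_pred = np.array([2.5, 3.5, 6.0, 9.0])   # model predictions

mae = mean_absolute_error(y_true, y_pred)           # mean of |prediction - truth|
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # square root of the mean squared error
r2 = r2_score(y_true, y_pred)                       # proportion of variance explained

print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}, R^2 = {r2:.2f}")
```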

The following metrics are commonly used in classification tasks (a worked sketch follows the list):

  • Accuracy: the proportion of all predictions that were correct. It ranges from 0 to 1.
  • Sensitivity: also known as the true positive rate (TPR) or recall, this is the proportion of actual positives that were correctly predicted. It ranges from 0 to 1.
  • Specificity: also known as the true negative rate (TNR), this is the proportion of actual negatives that were correctly predicted. It ranges from 0 to 1.
  • Precision: also known as the positive predictive value (PPV), this is the proportion of positive predictions that were correct. It ranges from 0 to 1.
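
A worked sketch (assuming Python with NumPy and scikit-learn; the labels are invented for illustration) deriving all four metrics from a confusion matrix:

```python
# Classification metrics sketch (assumes Python with NumPy and scikit-learn).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 1 = condition present, 0 = absent
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])  # predictions at a fixed threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)  # proportion of all predictions that are correct
sensitivity = tp / (tp + fn)                # true positive rate (recall)
specificity = tn / (tn + fp)                # true negative rate
precision = tp / (tp + fp)                  # positive predictive value

print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, precision={precision:.2f}")
```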

There is an inherent trade-off between sensitivity and specificity. The importance of each, as well as their interpretation, depend heavily on the specific research question and classification task.

It is important to note that even though classification models aim to reach a binary conclusion, they are inherently probability-based. This means that these models output a probability that a data point belongs to one class or another. To reach a conclusion about the most likely class, a threshold is applied. Metrics such as accuracy, sensitivity, specificity, and precision refer to the algorithm's performance at a given threshold. The area under the receiver operating characteristic curve (AUC) is a threshold-independent performance metric. The AUC can be interpreted as the probability that a randomly chosen positive example is ranked higher by the algorithm than a randomly chosen negative example.
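
The sketch below (assuming Python with NumPy and scikit-learn; the probabilities are invented) shows how sensitivity changes with the chosen threshold, while the AUC, computed from the probabilities themselves, does not:

```python
# Threshold-dependent metrics vs. the threshold-independent AUC
# (assumes Python with NumPy and scikit-learn; values are illustrative).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_prob = np.array([0.95, 0.80, 0.62, 0.45, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05])

for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    print(f"threshold={threshold:.1f} -> sensitivity={tp / (tp + fn):.2f}")

# The AUC summarizes ranking performance across all possible thresholds.
print(f"AUC = {roc_auc_score(y_true, y_prob):.2f}")
```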

In image segmentation tasks, which are a type of classification task, the following metrics are commonly used (a computation sketch follows the list):

  • Dice similarity coefficient: a measure of the overlap between two sets (for example, two images), calculated as twice the number of elements common to both sets divided by the sum of the number of elements in each set. It ranges from 0 (no overlap) to 1 (perfect overlap).
  • Hausdorff distance: a measure of the distance between two sets (for example, two images) in a space. It is essentially the largest distance from a point in one set to the closest point in the other set.
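
A computation sketch (assuming Python with NumPy and SciPy; the small binary masks are invented stand-ins for a predicted and a ground-truth segmentation):

```python
# Segmentation metrics sketch (assumes Python with NumPy and SciPy).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((10, 10), dtype=bool)   # predicted segmentation mask
truth = np.zeros((10, 10), dtype=bool)  # ground-truth segmentation mask
pred[2:7, 2:7] = True
truth[3:8, 3:8] = True

# Dice similarity coefficient: 2 * |intersection| / (|pred| + |truth|).
intersection = np.logical_and(pred, truth).sum()
dice = 2.0 * intersection / (pred.sum() + truth.sum())

# Symmetric Hausdorff distance between the foreground pixel coordinates of the two masks.
pred_pts = np.argwhere(pred)
truth_pts = np.argwhere(truth)
hausdorff = max(directed_hausdorff(pred_pts, truth_pts)[0],
                directed_hausdorff(truth_pts, pred_pts)[0])

print(f"Dice = {dice:.2f}, Hausdorff distance = {hausdorff:.2f} pixels")
```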

Internal and external validity

Internally valid models perform well at their task on the data used to train and validate them. Their degree of internal validity is assessed using the performance metrics described above and depends on the characteristics of the model itself and on the quality of the data on which the model was trained and validated.

Externally valid models perform well at their tasks on new data (Ramspek et al., 2021). The more the data on which the model performs well differ from the data on which it was trained and validated, the higher the external validity. In practice, this often requires testing the model's performance on data from hospitals or geographic areas that were not part of the model's training and validation datasets.

Guidelines for evaluating AI research

Several guidelines have been developed for evaluating the evidence underpinning AI-based interventions in healthcare (X. Liu et al., 2020; Mongan et al., 2020; Shelmerdine et al., 2021; Weikert et al., 2021). These provide a template for those conducting AI research in healthcare and ensure that relevant information is reported transparently and completely, but they can also be used by other stakeholders to assess the quality of published research. This helps ensure that AI-based solutions with substantial potential or actual limitations, particularly those caused by poor reporting (Bozkurt et al., 2020; D. W. Kim et al., 2019; X. Liu et al., 2019; Nagendran et al., 2020; Yusuf et al., 2020), are not adopted prematurely (CONSORT-AI and SPIRIT-AI Steering Group, 2019). Guidelines have also been proposed for evaluating the trustworthiness of AI-based solutions in terms of transparency, privacy, security, and accountability (Buruk et al., 2020; Lekadir et al., 2021; Zicari et al., 2021).

Clinical uses

In recent years, AI has shown great potential for addressing a wide range of tasks within a medical imaging department, many of which take place before the patient is even scanned. Implementations of AI that improve the efficiency of radiology workflows prior to the patient's examination are sometimes referred to as "upstream AI" (Kapoor et al., 2020; M. L. Richardson et al., 2021).

Scheduling

One promising application of upstream AI is predicting which patients are likely to miss their scan appointments. Missed appointments are associated with considerably increased workload and costs (Dantas et al., 2018). Using a gradient boosting approach, Nelson et al. predicted missed hospital magnetic resonance imaging (MRI) appointments in the United Kingdom's National Health Service (NHS) with high accuracy (Nelson et al., 2019). Their simulations also suggest that acting on the model's predictions by targeting patients likely to miss their appointments could potentially yield a net benefit of several pounds per appointment across a range of model thresholds and missed-appointment rates (Nelson et al., 2019). Similar results were recently found in a study of a single hospital in Singapore. Over the six-month period following deployment of the predictive tool, the no-show rate was significantly reduced from 19.3% to 15.9%, which translated into a potential economic benefit of $180,000 (Chong et al., 2020).
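
For illustration only, the sketch below (assuming Python with NumPy and scikit-learn) shows the general shape of a gradient boosting no-show classifier; the features and synthetic data are hypothetical and are not taken from the cited studies:

```python
# Hypothetical no-show prediction sketch (assumes Python with NumPy and scikit-learn);
# the features and data are invented for illustration, not from the cited studies.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 90, n),   # hypothetical feature: days between booking and appointment
    rng.integers(18, 90, n),  # hypothetical feature: patient age
    rng.integers(0, 5, n),    # hypothetical feature: number of prior no-shows
])
# Synthetic labels: longer lead times and more prior no-shows raise the no-show risk.
p = 1.0 / (1.0 + np.exp(-(0.02 * X[:, 0] + 0.8 * X[:, 2] - 3.0)))
y = (rng.random(n) < p).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted probability of a no-show
print(f"AUC on held-out data: {roc_auc_score(y_test, risk):.2f}")
```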

Scheduling examinations in a radiology department is a difficult task because, although it is largely administrative, it depends heavily on medical information. Assigning patients to specific appointments therefore often requires input from someone with domain knowledge, which means that either the person scheduling the appointments must be a radiologist or radiographer, or these specialists must provide input on a regular basis. In either case, the process is somewhat inefficient and could potentially be streamlined with AI-based algorithms that check the indications and contraindications for the scan and provide the people scheduling the scans with information about its urgency (Letourneau-Guillon et al., 2020).

Protocoling

Depending on hospital or clinic policy, the decision about the exact scan protocol a patient receives is generally made based on the information in the referring physician's scan request and the radiologist's judgment. This is often supplemented by direct communication between the referring physician and the radiologist and by the radiologist's review of the patient's medical information. This process improves patient care (Boland et al., 2014) but can be time-consuming and inefficient, especially with modalities such as MRI, where there is a large number of protocol permutations. In one study, protocoling alone accounted for roughly 6% of the radiologist's working time (Schemmel et al., 2016). Radiologists are also frequently interrupted by tasks such as protocoling while interpreting images, even though image interpretation is considered the radiologist's primary responsibility (Balint et al., 2014; J.-P. J. Yu et al., 2014).

Interpreting the narrative text of the referring physician's scan request has been attempted using natural language classifiers, the same technology used in chatbots and virtual assistants. Deep learning-based natural language classifiers have shown promise in assigning patients to an MRI protocol with or without contrast for musculoskeletal MRI, with accuracies of 83% (Trivedi et al., 2018) and 94% (Y. H. Lee, 2018). Similar algorithms showed an accuracy of 95% in predicting the appropriate brain MRI protocol using a combination of up to 41 different MRI sequences (Brown & Marotta, 2018). Across a wide range of body regions, a deep learning-based natural language classifier decided, based on the narrative text of scan requests, either to automatically assign a specific computed tomography (CT) or MRI protocol (which it did with 95% accuracy) or, in more difficult cases, to recommend to the radiologist a shortlist of the three most appropriate protocols (which it did with 92% accuracy) (Kalra et al., 2020).
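
For illustration only, the following sketch (assuming Python with scikit-learn) uses a simple bag-of-words text classifier rather than the deep learning models cited; the request texts and protocol labels are invented:

```python
# Hypothetical protocol-assignment sketch (assumes Python with scikit-learn);
# the request texts and protocol labels are invented, and a simple bag-of-words
# classifier stands in for the deep learning models described in the cited studies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = [
    "knee pain after fall, rule out meniscal tear",
    "suspected osteomyelitis of the femur, please assess enhancement",
    "chronic shoulder pain, evaluate rotator cuff",
    "soft tissue mass in the thigh, characterize lesion",
]
protocols = [
    "MRI without contrast",
    "MRI with contrast",
    "MRI without contrast",
    "MRI with contrast",
]

# Learn a mapping from the free-text request to a protocol label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(requests, protocols)

new_request = ["possible septic arthritis of the hip, assess for abscess"]
print(model.predict(new_request))  # predicted protocol for the new request
```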

AI has also been used to decide whether already-protocoled scans should be extended, a decision that must be made in real time while the patient is inside the scanner. One example is prostate MRI, where the decision of whether to administer a contrast agent is often made after the non-contrast sequences. Hötker et al. found that a convolutional neural network (CNN) assigned 78% of patients to the appropriate prostate MRI protocol (Hötker et al., 2021). The CNN's sensitivity for the need for contrast was 94.4% with a specificity of 68.8%, and only 2% of patients in their study would have had to be called back for a contrast examination (Hötker et al., 2021).

Image quality improvement and monitoring

Many AI-based solutions that work in the background of radiology workflows to improve image quality have recently been introduced. These include solutions for monitoring image quality, reducing image artifacts, improving spatial resolution, and speeding up scans.

Such solutions are making their way into mainstream radiology, especially for CT, which for decades has used established but artifact-prone methods to reconstruct interpretable images from raw sensor data (Deák et al., 2013; Singh et al., 2010).

These are gradually being replaced by machine learning-based reconstruction methods, which improve image quality while keeping radiation doses low (Akagi et al., 2019; H. Chen et al., 2017; Choe et al., 2019; Shan et al., 2019). This reconstruction is performed on high-performance computers located on the CT scanner itself or in the cloud. The balance between radiation dose and image quality can be adjusted per protocol in order to tailor scans to individual patients and clinical scenarios (McLeavy et al., 2021; Willemink & Noël, 2019). Such approaches have proven particularly useful when scanning children, pregnant women, and patients with obesity, as well as for CT scans of the urinary tract and heart (McLeavy et al., 2021).

AI-based solutions have also been used to speed up scans while maintaining diagnostic quality. Reducing scan time not only improves overall efficiency but also contributes to a better overall patient experience and better compliance with the imaging examination. A multicenter study of spine MRI showed that a deep learning-based image reconstruction algorithm, which enhanced images using detail-preserving filtering and noise reduction, reduced scan times by 40% (Bash, Johnson, et al., 2021). For T1-weighted brain MRI, a similar algorithm that improves image sharpness and reduces image noise cut scan times by 60% while maintaining the accuracy of brain region volumetry compared with standard scans (Bash, Wang, et al., 2021).

In routine radiological practice, images often contain artifacts that reduce their interpretability. These artifacts result from characteristics of the specific imaging modality or protocol used, or from factors intrinsic to the patient being scanned, such as the presence of foreign bodies or patient motion during the examination. Particularly with MRI, imaging protocols that demand fast scanning often introduce certain artifacts into the reconstructed image. In one study, a deep learning-based algorithm reduced the banding artifacts associated with balanced steady-state free precession (bSSFP) gradient echo sequences (K. H. Kim & Park, 2017). For real-time cardiac MRI, another study found that the aliasing artifacts introduced by data undersampling were reduced using a deep learning-based approach (Hauptmann et al., 2019). The presence of metallic foreign bodies such as dental, orthopedic, or vascular implants is a common patient-related cause of image artifacts in CT and MRI (Boas & Fleischmann, 2012; Hargreaves et al., 2011). Although not yet well established, several deep learning-based approaches to reducing these artifacts have been investigated (Ghani & Clem Karl, 2019; Puvanasunthararajah et al., 2021; Zhang & Yu, 2018). Similar approaches are being tested for reducing motion-related artifacts in MRI (Tamada et al., 2020; B. Zhao et al., 2022).

AI-based solutions for monitoring image quality can potentially reduce the need to call patients back to repeat imaging examinations, which is a common problem (Schreiber-Zinaman & Rosenkrantz, 2017). A deep learning-based algorithm that identifies the acquired radiographic view and extracts quality-related measurements from ankle radiographs was able to predict image quality with an accuracy of about 94% (Mairhöfer et al., 2021). Another deep learning-based approach was able to predict non-diagnostic liver MRIs with a negative predictive value between 86% and 94% (Esses et al., 2018). This kind of automated, real-time quality control potentially allows radiographers to rerun scans, or to run additional scans with greater diagnostic value, while the patient is still present.

Scan reading prioritization

With staff shortages and a growing number of examinations, radiologists face long reading lists. To optimize efficiency and patient care, AI-based solutions have been suggested as a way to prioritize which scans radiologists read and report first, typically by screening the acquired images for findings that require urgent intervention (O'Connor & Bhalla, 2021). This has been studied most extensively in neuroradiology, where having an AI-based tool move head CT scans showing intracranial hemorrhage to the top of the reading list reduced the time it took radiologists to view the scans by several minutes (O'Neill et al., 2021). Another study found that the time to diagnosis (which includes the time between image acquisition and viewing by the radiologist plus the time needed to read and report the examinations) was reduced from 512 to 19 minutes in the outpatient setting when such worklist prioritization was used (Arbabshirani et al., 2018). A simulation study of AI-based worklist prioritization based on identifying urgent findings on chest radiographs (such as pneumothorax, pleural effusions, and foreign bodies) also found a substantial reduction in the time to view and report the examinations compared with standard workflow prioritization (Baltruschat et al., 2021).

Image interpretation

Currently, the majority of commercially available AI-based solutions in medical imaging focus on some aspect of image analysis and interpretation (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021). This includes segmenting parts of the image (for surgical or radiotherapy targeting, for example), alerting radiologists to suspicious areas, extracting imaging biomarkers (radiomics), comparing images over time, and making specific imaging diagnoses.

Neuro

  • Accounts for 29-38% of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

Most commercially available AI-based solutions targeting neuroimaging data aim to detect and characterize ischemic stroke, intracranial hemorrhage, dementia, and multiple sclerosis (Olthof et al., 2020). Several studies have shown excellent accuracy of AI-based methods for detecting and classifying intraparenchymal, subarachnoid, and subdural hemorrhages on head CT (Flanders et al., 2020; Ker et al., 2019; Kuo et al., 2019). Subsequent studies have shown that, compared with radiologists, some AI-based solutions have markedly lower false positive and false negative rates (Ginat, 2020; Rao et al., 2021). For ischemic stroke, AI-based solutions have largely focused on quantifying the infarct core (Goebel et al., 2018; Maegerlein et al., 2019), detecting large vessel occlusion (Matsoukas et al., 2022; Morey et al., 2021; Murray et al., 2020; Shlobin et al., 2022), and predicting stroke outcomes (Bacchi et al., 2020; Nielsen et al., 2018; Y. Yu et al., 2020, 2021).

In multiple sclerosis, AI has been used to identify and segment lesions (Nair et al., 2020; S.-H. Wang et al., 2018), which can be particularly useful for the longitudinal follow-up of patients. It has also been used to extract imaging features associated with progressive disease and with the conversion of clinically isolated syndrome to definite multiple sclerosis (Narayana et al., 2020; Yoo et al., 2019). Other applications of AI in neuroradiology include detecting intracranial aneurysms (Faron et al., 2020; Nakao et al., 2018; Ueda et al., 2019), segmenting brain tumors (Kao et al., 2019; Mlynarski et al., 2019; Zhou et al., 2020), and predicting genetic markers of brain tumors from imaging data (Choi et al., 2019; J. Zhao et al., 2020).

Chest

  • Accounts for 24-31% of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

When interpreting chest radiographs, radiologists detected considerably more critical and urgent findings with the help of a deep learning-based algorithm, and did so much faster than without it (Nam et al., 2021). Deep learning-based image interpretation algorithms have also been found to improve radiology residents' sensitivity for detecting urgent findings on chest radiographs from 66% to 73% (E. J. Hwang, Nam, et al., 2019). Another study covering a wider range of chest radiograph findings also found that radiologists assisted by a deep learning-based algorithm had higher diagnostic accuracy than radiologists reading the radiographs unassisted (Seah et al., 2021). Uses of AI in chest radiology also extend to cross-sectional imaging such as CT. A deep learning algorithm was found to detect pulmonary embolism on CT scans with high accuracy (AUC = 0.85) (Huang, Kothari, et al., 2020). In addition, a deep learning algorithm was 90% accurate in detecting aortic dissection on non-contrast CT scans, similar to the performance of radiologists (Hata et al., 2021).

Outside the emergency setting, AI-based solutions have been extensively tested and implemented for tuberculosis screening on chest radiographs (E. J. Hwang, Park, et al., 2019; S. Hwang et al., 2016; Khan et al., 2020; Qin et al., 2019; WHO Operational Handbook on Tuberculosis Module 2: Screening - Systematic Screening for Tuberculosis Disease, n.d.). They have also proven useful for lung cancer screening, both in detecting lung nodules on CT (Setio et al., 2017) and on chest radiographs (Li et al., 2020), and in classifying whether nodules are likely to be malignant or benign (Ardila et al., 2019; Bonavita et al., 2020; Ciompi et al., 2017; B. Wu et al., 2018). AI-based solutions also show great promise for diagnosing pneumonia, chronic obstructive pulmonary disease, and interstitial lung disease (F. Liu et al., 2021).

Breast

  • Accounts for 11% of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

So far, many AI-based algorithms targeting breast imaging aim to reduce the workload of radiologists reading mammograms. Ways of achieving this include using AI-based algorithms to triage out negative mammograms, which in one study was associated with a nearly one-fifth reduction in radiologist workload (Yala et al., 2019). Other studies that replaced second readers of mammograms with AI-based algorithms showed that this resulted in fewer false positives and false negatives and reduced the second reader's workload by 88% (McKinney et al., 2020).

AI-based solutions for mammography have also been found to increase radiologists' diagnostic accuracy (McKinney et al., 2020; Rodríguez-Ruiz et al., 2019; Watanabe et al., 2019), and some have proven highly accurate in independently detecting and classifying breast lesions (Agnes et al., 2019; Al-Antari et al., 2020; Rodriguez-Ruiz et al., 2019). Despite this, a recent systematic review of 36 AI-based algorithms found that these studies were of poor methodological quality and that all of the algorithms were less accurate than the consensus of two or more radiologists (Freeman et al., 2021). AI-based algorithms have nonetheless shown potential for extracting cancer-predictive features from mammograms beyond mammographic breast density (Arefan et al., 2020; Dembrower et al., 2020; Hinton et al., 2019). Beyond mammography, AI-based solutions have been developed to detect and classify breast lesions on ultrasound (Akkus et al., 2019; Park et al., 2019; G.-G. Wu et al., 2019) and MRI (Herent et al., 2019).

Cardiac

  • Accounts for 11% of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

Cardiac radiology has always been particularly demanding because of the inherent difficulty of acquiring images of an organ in constant motion. For this reason, it has benefited enormously from advances in imaging technology and also seems poised to benefit greatly from AI (Sermesant et al., 2021). Most AI-based applications for the cardiovascular system use MRI, CT, or ultrasound data (Weikert et al., 2021). Prominent examples include automated calculation of the ejection fraction on echocardiography, quantification of coronary artery calcification on cardiac CT, determination of right ventricular volume on CT pulmonary angiography, and determination of heart chamber size and wall thickness on cardiac MRI (Medical AI Evaluation, n.d.; The Medical Futurist, n.d.). AI-based solutions for predicting which patients are likely to respond favorably to cardiac interventions, such as cardiac resynchronization therapy, based on imaging and clinical parameters have also shown great promise (Cikes et al., 2019; Hu et al., 2019). Changes on cardiac MRI that are barely visible to human readers but potentially useful for differentiating between types of cardiomyopathy can also be detected with AI using texture analysis (Neisius et al., 2019; J. Wang et al., 2020) and other radiomics approaches (Mancio et al., 2022).

Musculoskeletal

  • Accounts for 7-11% of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

Promising applications of AI in the assessment of muscles, bones, and joints include tasks for which human readers typically show poor inter- and intra-rater reliability, such as determining skeletal age from bone radiographs (Halabi et al., 2019; Thodberg et al., 2009) and screening for osteoporosis on radiographs (Kathirvelu et al., 2019; J.-S. Lee et al., 2019) and CT (Pan et al., 2020). AI-based solutions have also shown promise for detecting fractures on radiographs and CT (Lindsey et al., 2018; Olczak et al., 2017; Urakawa et al., 2019). A systematic review of AI-based solutions for fracture detection across several different body parts showed AUCs ranging from 0.94 to 1.00 and accuracies of 77% to 98% (Langerhuizen et al., 2019). AI-based solutions have also achieved accuracies similar to those of radiologists in grading the severity of degenerative changes of the spine (Jamaludin et al., 2017) and of the joints of the extremities (F. Liu et al., 2018; Thomas et al., 2020). AI-based solutions have also been developed to determine the origin of skeletal metastases (Lang et al., 2019) and to classify primary bone tumors (Do et al., 2017).

Abdomen and pelvis

  • Accounts for 4% of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

So far, much of the effort to use AI in abdominal imaging has focused on the automated segmentation of organs such as the liver (Dou et al., 2017), spleen (Moon et al., 2019), pancreas (Oktay et al., 2018), and kidneys (Sharma et al., 2017). In addition, a systematic review of 11 studies using deep learning for the detection of malignant liver masses showed accuracies of up to 97% and AUCs of up to 0.92 (Azer, 2019).

Other applications of AI in abdominal radiology include the detection of liver fibrosis (He et al., 2019; Yasaka et al., 2018), hepatic steatosis, hepatic iron content, the detection of free abdominal gas on CT, and automated volumetry and segmentation of the prostate (AI for Radiology, n.d.).

Barriers to implementation

Despite AI's great potential in medical imaging, its implementation and impact in routine clinical practice have yet to become widespread. This translation from research to the clinic is hampered by several complex and interrelated issues that directly or indirectly reduce the likelihood of AI-based solutions being adopted. One of the main ways to increase the likelihood of adoption is to improve trust in AI-based solutions among key stakeholders such as regulators, healthcare professionals, and patients (Cadario et al., 2021; Esmaeilzadeh, 2020; J. P. Richardson et al., 2021; Tucci et al., 2022).

Generalizability

One of the major challenges is developing AI-based solutions that continue to perform well in new, real-world scenarios. In a large systematic review, nearly half of the AI-based medical imaging algorithms reported a decrease of more than 0.05 in AUC when tested on new data (A. C. Yu et al., 2022). This lack of generalizability can degrade model performance in real-world use.

If a solution performs poorly when tested on a dataset with a distribution similar or identical to that of the training dataset, it is said to lack narrow generalizability, which is often a consequence of overfitting (Eche et al., 2021). Potential remedies for overfitting include using larger training datasets and reducing model complexity. If a solution performs poorly when tested on a dataset with a distribution different from that of the training dataset (for example, a different distribution of patient ethnicities), it is said to lack broad generalizability (Eche et al., 2021). Remedies for poor broad generalizability include stress-testing the model on datasets whose distributions differ from that of the training dataset (Eche et al., 2021).

AI solutions are often developed in resource-rich settings, such as large technology companies and academic medical centers in wealthy countries. Results and performance in these high-resource settings are unlikely to generalize to low-resource settings such as small hospitals, rural areas, or poorer countries (Price & Nicholson, 2019), which further compounds the problem.

Risk of bias

Bias can arise in AI-based solutions because of the data or because of human factors. The former occurs when the data used to train the AI solution do not adequately represent the target population. Datasets can be unrepresentative when they are too small or when they were collected in a way that misrepresents a particular population category. AI solutions trained on unrepresentative data perpetuate biases and perform poorly in the population categories that are under- or misrepresented in the training data. The presence of such biases has been demonstrated empirically in many AI-based medical imaging studies (Larrazabal et al., 2020; Seyyed-Kalantari et al., 2021).

AI-based solutions are also subject to several subjective, and sometimes implicitly or explicitly prejudiced, human decisions during their development. These human factors include how the training data are selected, how they are labeled, and how the decision is made to focus on the specific problem the AI-based solution is intended to solve (Norori et al., 2021). Some recommendations and tools are available to help minimize the risk of bias in AI research (AI Fairness 360: A Comprehensive Set of Fairness Metrics for Datasets and Machine Learning Models, Explanations for These Metrics, and Algorithms to Mitigate Bias in Datasets and Models, n.d.; IBM Watson Studio - Model Risk Management, n.d.; Silberg & Manyika, 2019).

Data quantity, quality, and variety

Problems such as bias and lack of generalizability can be mitigated by ensuring that the training data are of sufficient quantity, quality, and variety. However, this is difficult to achieve because patients are often reluctant to share their data for commercial purposes (Aggarwal, Farag, et al., 2021; Ghafur et al., 2020; Trinidad et al., 2020), hospitals and clinics are generally not equipped to make these data available in a usable and secure manner, and curating and labeling the data is time-consuming and expensive.

Many datasets can be used for a variety of purposes, and data sharing between companies could make the process of collecting and curating data more efficient, as well as increase the amount of data available for each application. However, developers are often reluctant to share data with one another, or even to reveal the exact source of their data, in order to remain competitive.

Data protection and privacy

Developing and implementing AI-based solutions requires that patients be explicitly informed about, and consent to, the use of their data for a particular purpose and by particular people. These data must also be adequately protected against breaches and misuse. Failing to guarantee this considerably undermines public trust in AI-based solutions and hampers their adoption. While the regulations governing health data privacy state that the collection of fully anonymized data does not require explicit patient consent (General Data Protection Regulation [GDPR] - Official Legal Text, 2016; Office for Civil Rights [OCR], 2012) and in theory protect against data misuse, whether imaging data can ever be fully anonymized is controversial (Lotan et al., 2020; Murdoch, 2021). Whether consent can be truly informed, given the complexity of the acquired data and the myriad potential future uses of those data, is also a matter of debate (Vayena & Blasimme, 2017).

IT infrastructure

Among hospital departments, radiology has always been at the forefront of digitization. AI-based solutions focused on image processing and interpretation will likely find the required infrastructure already in place in most radiology departments, for example for linking imaging equipment to computers for analysis and for archiving images and other outputs. However, most radiology departments will probably require significant infrastructure upgrades for other AI applications, particularly those requiring the integration of information from multiple sources and producing complex outputs. Moreover, it is important to keep in mind that the necessary infrastructure is very unevenly distributed between and within countries (Health Ethics & Governance, 2021).

In terms of computing power, radiology departments will either have to invest resources in the hardware and personnel needed to run these AI-based solutions or opt for cloud-based solutions. The former entails additional cost but keeps data processing within the boundaries of the hospital's or clinic's local network. Cloud-based computing solutions (known as "infrastructure as a service" or "IaaS") are often viewed as the less secure and less reliable option, but this depends on a number of factors and is therefore not always true (Baccianella & Gough, n.d.). Guidance on what to consider when purchasing cloud-based solutions in healthcare is available (Cloud Security for Healthcare Services, 2021).

Lack of standardization, interoperability, and integrability

The infrastructure problem becomes even more complicated when one considers the current fragmentation of the AI medical imaging market (Alexander et al., 2020). It is therefore likely that in the near future a single department will be running several dozen AI-based solutions from different vendors simultaneously. Having a separate standalone infrastructure (for example, a workstation or server) for each of these would be extremely complicated and difficult to manage. Proposed solutions include "marketplaces" of AI solutions, similar to app stores (Advanced AI Solutions for Radiology, n.d.; Curated Marketplace, 2018; Imaging AI Marketplace - Overview, n.d.; Sectra Amplifier Marketplace, 2021; The Nuance AI Marketplace for Diagnostic Imaging, n.d.), and the development of an overarching vendor-neutral infrastructure (Leiner et al., 2021). Successful implementation of such solutions requires close partnerships between AI solution developers, imaging vendors, and information technology companies.

Interpretability

It is often impossible to understand exactly how AI-based solutions arrive at their conclusions, particularly with complex approaches such as deep learning. This reduces the transparency of the decision-making process for purchasing and approving these solutions, makes identifying biases difficult, and makes it harder for clinicians to explain the outputs of these solutions to their patients and to determine whether a solution is working properly or has malfunctioned (Char et al., 2018; Reddy et al., 2020; Vayena et al., 2018; Whittlestone et al., 2019). Some have suggested that techniques that help humans understand how AI-based algorithms arrive at certain decisions or predictions ("interpretable" or "explainable" AI) could help mitigate these challenges. However, others have argued that currently available techniques are not suited to understanding an algorithm's individual decisions and have cautioned against relying on them to ensure that algorithms work safely and reliably (Ghassemi et al., 2021).

Accountability

In healthcare systems, an accountability framework ensures that healthcare workers and medical institutions can be held responsible for adverse effects resulting from their actions. The question of who should be held accountable for the failures of an AI-based solution is complex. For pharmaceuticals, for example, liability for failures inherent to the product or its use often lies with either the manufacturer or the prescriber. One key difference is that AI-based systems continuously evolve and learn, and therefore inherently operate in ways their developers may not have foreseen (Yeung, 2018). For the end user, such as the healthcare professional, the AI-based solution may be opaque, and they may therefore be unable to determine whether the solution is malfunctioning or inaccurate (Habli et al., 2020; Yeung, 2018).

Brittleness

Despite substantial progress in their development over recent years, deep learning algorithms remain surprisingly brittle. This means that when an algorithm is confronted with a scenario that differs considerably from what it encountered during training, it cannot contextualize and often produces nonsensical or inaccurate results. This happens because, unlike humans, most algorithms learn to perceive things within the bounds of certain assumptions but fail to generalize outside those assumptions. As an example of how this can be maliciously exploited, subtle alterations to medical images that are imperceptible to humans can render the outputs of disease classification algorithms inaccurate (Finlayson et al., 2018). The lack of interpretability of many AI-based solutions compounds this problem, because it makes it difficult to determine how they arrived at the wrong conclusion.

Making purchasing decisions

To date, more than 100 AI-based products have obtained the European Conformity (CE) mark or clearance from the Food and Drug Administration (FDA). These products can be found in continuously updated, searchable online databases curated by the FDA (Center for Devices & Radiological Health, n.d.), the American College of Radiology (Assess-AI, n.d.), and others (AI for Radiology, n.d.; The Medical Futurist, n.d.; E. Wu et al., 2021). Given the growing number of available products, the inherent complexity of many of these solutions, and the fact that many of the people who usually make purchasing decisions in hospitals are unfamiliar with evaluating such products, careful consideration is needed before deciding which product to purchase. Such decisions will need to be made after incorporating input from healthcare professionals, information technology (IT) professionals, and management, finance, legal, and human resources professionals within hospitals.

Deciding whether to purchase an AI-based solution in radiology, and which one to purchase among the growing number of commercially available solutions, involves considerations of quality, safety, and finances. In recent years, several guidelines have emerged to help prospective buyers make these decisions (A Buyer's Guide to AI in Health and Care, 2020; Omoumi et al., 2021; Reddy et al., 2021), and these guidelines are likely to evolve in the future in line with the changing expectations of customers, regulators, and the stakeholders involved in reimbursement decisions.

First of all, the prospective buyer must clearly understand what the problem is and whether AI is the appropriate approach to solving it, or whether there are alternatives that are more advantageous overall. If AI is the appropriate approach, buyers should know exactly what the scope of a potential AI-based product is, that is, what specific problem the solution is designed to solve and under what specific circumstances. This includes whether the solution is intended for screening, diagnosis, monitoring, treatment recommendation, or another application. It also includes the intended users of the solution and what specific qualifications or training they are expected to have in order to use the solution and interpret its outputs. It should be clear to buyers whether the solution is intended to replace certain tasks that would normally be performed by the end user, to act as a second reader, to serve as a triage mechanism, or to perform other tasks such as quality control. Buyers should also understand whether the solution is intended to provide "new" information (that is, information that would otherwise not be available to the user without the solution), to improve the performance of an existing task beyond that of a human or of another non-AI-based solution, or to save time or other resources.

Buyers should also have access to information that allows them to assess the potential benefits of the AI solution, and this should be backed by published scientific evidence on the solution's efficacy and cost-effectiveness. How this is done will depend heavily on the solution itself and the context in which it is expected to be deployed, but guidance is available (National Institute for Health and Care Excellence [NICE], n.d.). Questions to ask here include: How will the solution influence patient management? Will it improve diagnostic performance? Will it save time and money? Will it affect patients' quality of life? The buyer should also know exactly who is expected to benefit from the use of this solution (radiologists? clinicians? patients? the healthcare system or society as a whole?).

Comme pour toute intervention de santé, toutes les solutions basées sur l’IA comportent des risques potentiels, et ceux-ci doivent être clairement indiqués à l’acheteur. Certains de ces risques peuvent avoir des conséquences juridiques, comme le risque d’erreur de diagnostic. Ces risques doivent être quantifiés et les acheteurs potentiels doivent disposer d'un cadre pour y faire face, notamment en identifiant un cadre de responsabilité au sein des organisations mettant en oeuvre ces solutions. Les acheteurs doivent également s'assurer qu'ils comprennent clairement les effets négatifs potentiels sur la formation des radiologues et la perturbation potentielle des flux de travail des radiologues associée à l'utilisation de ces solutions.

The specifics of how the AI solution was designed are also relevant to the purchasing decision. These include the robustness of the solution to differences between vendors and acquisition settings, the circumstances under which the algorithm was trained (including potential confounders), and how its performance was evaluated. Buyers should also know whether, and how, potential sources of bias were addressed during development. Since a key characteristic of AI-based solutions is their ability to continually learn from new data, it should also be clear to the buyer whether and how such retraining is incorporated into the solution over time, including whether each iteration requires new regulatory approval. This also covers whether retraining of the model will be needed, for example because of changes to the imaging equipment at the buyer's institution.
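
In practice, many of these design questions can be probed before purchase by stratifying performance on a local evaluation set by factors such as scanner vendor, patient sex, or age group. The sketch below assumes a pandas DataFrame with hypothetical column names (vendor, sex, label, ai_score); it illustrates the idea rather than any particular product's evaluation protocol:

# Minimal sketch: subgroup analysis of an AI solution's output on a local evaluation set.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("local_evaluation_set.csv")     # one row per examination (hypothetical file)

for column in ["vendor", "sex"]:                 # stratification factors of interest
    for group, subset in df.groupby(column):
        if subset["label"].nunique() < 2 or len(subset) < 30:
            print(f"{column}={group}: too few cases for a stable estimate")
            continue
        auc = roc_auc_score(subset["label"], subset["ai_score"])
        print(f"{column}={group}: n={len(subset)}, prevalence={subset['label'].mean():.2f}, AUC={auc:.2f}")

Large performance gaps between subgroups (for example between scanner vendors, or between male and female patients) are a warning sign that the training data or design choices may not transfer to the buyer's population.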

Ease of use and workflow improvement are among the main selling points of many AI-based solutions. Prospective buyers should therefore look closely at how these solutions are to be integrated into existing workflows, including interoperability with PACS and electronic medical record systems. Whether the solution requires additional hardware (e.g. graphics processing units) or software (e.g. for visualizing the solution's outputs), or whether it can easily be integrated into the buyer's existing IT infrastructure, influences the overall cost of the solution and is therefore also a crucial consideration. In addition, the degree of manual interaction required, both under normal circumstances and for troubleshooting, should be known to the buyer. All potential users of the AI solution should be involved in the purchasing process to ensure that they are familiar with it and that it meets both their professional ethical standards and their needs.
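
Some interoperability questions can be answered early with simple technical checks carried out together with the local PACS team. As an illustrative sketch only (the host, port, and application entity titles below are hypothetical, and the pynetdicom library is assumed to be available), a basic DICOM verification between the department's PACS and the node that would host the AI solution might look like this:

# Minimal sketch: DICOM verification (C-ECHO) between the PACS and the prospective AI node,
# as a first connectivity check before deeper integration testing.
from pynetdicom import AE

ae = AE(ae_title="AI_NODE_SCU")                      # hypothetical AE title for the AI node
ae.add_requested_context("1.2.840.10008.1.1")        # Verification SOP Class

assoc = ae.associate("pacs.hospital.local", 104, ae_title="PACS_SCP")  # hypothetical PACS address
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print(f"C-ECHO succeeded with status 0x{status.Status:04x}")
    assoc.release()
else:
    print("Association rejected or unreachable - check firewall rules, AE titles and port")

Checks like this do not demonstrate clinical integration, but they quickly reveal basic networking, configuration, or security obstacles that would otherwise surface late in deployment.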

From a regulatory perspective, it should be clear to the buyer whether the solution complies with medical device and data protection regulations. Has the solution been approved in the buyer's country? If so, under which risk classification? Buyers should also consider creating data flow maps that show how data move through the operation of the AI-based solution, including who has access to the data.
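
Where image data leave the institution, for instance for cloud-based processing or vendor support, the data flow map should state which identifiers are removed or pseudonymized at each step. The snippet below is a deliberately incomplete sketch using pydicom; the handful of tags removed here is illustrative and does not constitute a full de-identification profile under HIPAA or the GDPR:

# Minimal sketch: stripping a few direct identifiers from a DICOM file before export.
# Illustrative only - NOT a complete de-identification profile.
import pydicom

ds = pydicom.dcmread("example_study.dcm")        # hypothetical input file

ds.PatientName = "ANONYMIZED"
ds.PatientID = "PSEUDO-0001"                     # pseudonym managed in a local key table
for keyword in ["PatientBirthDate", "PatientAddress", "ReferringPhysicianName",
                "InstitutionName", "OtherPatientIDs"]:
    if keyword in ds:                            # remove the element only if it is present
        delattr(ds, keyword)

ds.remove_private_tags()                         # private tags often carry identifying data
ds.save_as("example_study_deidentified.dcm")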

Finally, there are other factors to consider that are not necessarily specific to AI-based solutions and that buyers may already be familiar with from purchasing other types of products. These include the solution's licensing model, how users are to be trained in its use, how the solution is maintained, how failures are handled, and whether additional costs are to be expected when scaling up the implementation (for example, using the solution for more imaging equipment or more users). This allows the prospective buyer to anticipate both the current and the future costs of purchasing the solution.
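
A simple projection of the total cost of ownership over the expected lifetime of the solution makes such licensing models easier to compare. The figures and cost categories below are purely illustrative assumptions, not vendor quotes:

# Minimal sketch: projected five-year cost of two hypothetical licensing models.
YEARS = 5
studies_per_year = 20_000

per_study_fee = 3.0                       # pay-per-use model: cost per analyzed study
annual_flat_license = 55_000              # subscription model: flat yearly license fee
one_off_integration = 25_000              # PACS/EHR integration work (both models)
annual_support_and_training = 8_000       # support contract and user training (both models)

pay_per_use_total = one_off_integration + YEARS * (
    studies_per_year * per_study_fee + annual_support_and_training)
flat_license_total = one_off_integration + YEARS * (
    annual_flat_license + annual_support_and_training)

print(f"Pay-per-use over {YEARS} years:  {pay_per_use_total:,.0f}")
print(f"Flat license over {YEARS} years: {flat_license_total:,.0f}")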

Future trends

Conclusion

AI has shown promise for positively impacting virtually every facet of a radiology department's work, from scheduling and protocolling patient examinations to interpreting images and making diagnoses. However, promising research on AI-based tools in radiology has not yet translated into widespread adoption in routine practice, owing to a number of complex and partly interrelated challenges. Potential solutions exist for most of these challenges, but many of them require further refinement and testing. In the meantime, guidelines are emerging to help prospective users of AI-based solutions in radiology navigate the growing number of commercially available products. This encourages their adoption in real-world settings, making it possible to uncover their true potential and to identify and address their weaknesses safely and effectively. As these incremental improvements are made, these tools are likely to evolve to handle more varied data, become embedded in consolidated workflows, grow more transparent and, ultimately, become more useful for increasing efficiency and improving patient care.

References

AAMC Report Reinforces Mounting Physician Shortage. (2021). AAMC. https://www.aamc.org/news-insights/press-releases/aamc-report-reinforces-mounting-physician-shortage

A buyer’s guide to AI in health and care. (2020). NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/explore-all-resources/adopt-ai/a-buyers-guide-to-ai-in-health-and-care/

Advanced AI solutions for radiology. (n.d.). Calantic Website. Retrieved July 3, 2022, from https://aivisions.calantic.com/

Aggarwal, R., Farag, S., Martin, G., Ashrafian, H., & Darzi, A. (2021). Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. Journal of Medical Internet Research, 23(8), e26162. https://doi.org/10.2196/26162

Aggarwal, R., Sounderajah, V., Martin, G., Ting, D. S. W., Karthikesalingam, A., King, D., Ashrafian, H., & Darzi, A. (2021). Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digital Medicine, 4(1), 65. https://doi.org/10.1038/s41746-021-00438-z

Agnes, S. A., Anitha, J., Pandian, S. I. A., & Peter, J. D. (2019). Classification of Mammogram Images Using Multiscale all Convolutional Neural Network (MA-CNN). Journal of Medical Systems, 44(1), 30. https://doi.org/10.1007/s10916-019-1494-z

AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. (n.d.). Github. Retrieved June 11, 2022, from https://github.com/Trusted-AI/AIF360

AI for radiology. (n.d.). Retrieved June 26, 2022, from https://grand-challenge.org/aiforradiology/?subspeciality=Abdomen&modality=All&ce_under=All&ce_class=All&fda_class=All&sort_by=last%20modified&search=

Akagi, M., Nakamura, Y., Higaki, T., Narita, K., Honda, Y., Zhou, J., Yu, Z., Akino, N., & Awai, K. (2019). Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT. European Radiology, 29(11), 6163–6171. https://doi.org/10.1007/s00330-019-06170-3

Akkus, Z., Cai, J., Boonrod, A., Zeinoddini, A., Weston, A. D., Philbrick, K. A., & Erickson, B. J. (2019). A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving Clinical Workflow. Journal of the American College of Radiology: JACR, 16(9 Pt B), 1318–1328. https://doi.org/10.1016/j.jacr.2019.06.004

Al-Antari, M. A., Al-Masni, M. A., & Kim, T.-S. (2020). Deep Learning Computer-Aided Diagnosis for Breast Lesion in Digital Mammogram. Advances in Experimental Medicine and Biology, 1213, 59–72. https://doi.org/10.1007/978-3-030-33128-3_4

Alexander, A., Jiang, A., Ferreira, C., & Zurkiya, D. (2020). An Intelligent Future for Medical Imaging: A Market Outlook on Artificial Intelligence for Medical Imaging. Journal of the American College of Radiology: JACR, 17(1 Pt B), 165–170. https://doi.org/10.1016/j.jacr.2019.07.019

Allen, B., Agarwal, S., Coombs, L., Wald, C., & Dreyer, K. (2021). 2020 ACR Data Science Institute Artificial Intelligence Survey. Journal of the American College of Radiology: JACR.

Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., & Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions.

Arbabshirani, M. R., Fornwalt, B. K., Mongelluzzo, G. J., Suever, J. D., Geise, B. D., Patel, A. A., & Moore, G. J. (2018). Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digital Medicine, 1, 9. https://doi.org/10.1038/s41746-017-0015-z

Ardila, D., Kiraly, A. P., Bharadwaj, S., Choi, B., Reicher, J. J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., Naidich, D. P., & Shetty, S. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954–961. https://doi.org/10.1038/s41591-019-0447-x

Arefan, D., Mohamed, A. A., Berg, W. A., Zuley, M. L., Sumkin, J. H., & Wu, S. (2020). Deep learning modeling using normal mammograms for predicting breast cancer risk. Medical Physics, 47(1), 110–118. https://doi.org/10.1002/mp.13886

Assess-AI. (n.d.). Retrieved July 2, 2022, from https://www.acrdsi.org/DSI-Services/Assess-AI

Azer, S. A. (2019). Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review. World Journal of Gastrointestinal Oncology, 11(12), 1218–1230. https://doi.org/10.4251/wjgo.v11.i12.1218

Bacchi, S., Zerner, T., Oakden-Rayner, L., Kleinig, T., Patel, S., & Jannes, J. (2020). Deep Learning in the Prediction of Ischaemic Stroke Thrombolysis Functional Outcomes: A Pilot Study. Academic Radiology, 27(2), e19–e23. https://doi.org/10.1016/j.acra.2019.03.015

Baccianella, S., & Gough, T. (n.d.). Why cloud computing is the best option for hospitals adopting AI. Retrieved June 11, 2022, from https://www.aidence.com/articles/cloud-best-option-imaging-ai/

Balint, B. J., Steenburg, S. D., Lin, H., Shen, C., Steele, J. L., & Gunderman, R. B. (2014). Do telephone call interruptions have an impact on radiology resident diagnostic accuracy? Academic Radiology, 21(12), 1623–1628. https://doi.org/10.1016/j.acra.2014.08.001

Baltruschat, I., Steinmeister, L., Nickisch, H., Saalbach, A., Grass, M., Adam, G., Knopp, T., & Ittrich, H. (2021). Smart chest X-ray worklist prioritization using artificial intelligence: a clinical workflow simulation. European Radiology, 31(6), 3837–3845. https://doi.org/10.1007/s00330-020-07480-7

Bash, S., Johnson, B., Gibbs, W., Zhang, T., Shankaranarayanan, A., & Tanenbaum, L. N. (2021). Deep Learning Image Processing Enables 40% Faster Spinal MR Scans Which Match or Exceed Quality of Standard of Care: A Prospective Multicenter Multireader Study. Clinical Neuroradiology. https://doi.org/10.1007/s00062-021-01121-2

Bash, S., Wang, L., Airriess, C., Zaharchuk, G., Gong, E., Shankaranarayanan, A., & Tanenbaum, L. N. (2021). Deep Learning Enables 60% Accelerated Volumetric Brain MRI While Preserving Quantitative Performance: A Prospective, Multicenter, Multireader Trial. AJNR. American Journal of Neuroradiology, 42(12), 2130–2137. https://doi.org/10.3174/ajnr.A7358

Boas, F. E., & Fleischmann, D. (2012). CT artifacts: causes and reduction techniques. Imaging in Medicine, 4(2), 229–240. https://doi.org/10.2217/iim.12.13

Boland, G. W., Duszak, R., Jr, & Kalra, M. (2014). Protocol design and optimization. Journal of the American College of Radiology: JACR, 11(5), 440–441. https://doi.org/10.1016/j.jacr.2014.01.021

Bonavita, I., Rafael-Palou, X., Ceresa, M., Piella, G., Ribas, V., & González Ballester, M. A. (2020). Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline. Computer Methods and Programs in Biomedicine, 185, 105172. https://doi.org/10.1016/j.cmpb.2019.105172

Bozkurt, S., Cahan, E. M., Seneviratne, M. G., Sun, R., Lossio-Ventura, J. A., Ioannidis, J. P. A., & Hernandez-Boussard, T. (2020). Reporting of demographic data and representativeness in machine learning models using electronic health records. Journal of the American Medical Informatics Association: JAMIA, 27(12), 1878–1884. https://doi.org/10.1093/jamia/ocaa164

Brown, A. D., & Marotta, T. R. (2018). Using machine learning for sequence-level automated MRI protocol selection in neuroradiology. Journal of the American Medical Informatics Association: JAMIA, 25(5), 568–571. https://doi.org/10.1093/jamia/ocx125

Bruls, R. J. M., & Kwee, R. M. (2020). Workload for radiologists during on-call hours: dramatic increase in the past 15 years. Insights into Imaging, 11(1), 121. https://doi.org/10.1186/s13244-020-00925-z

Buruk, B., Ekmekci, P. E., & Arda, B. (2020). A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Medicine, Health Care, and Philosophy, 23(3), 387–399. https://doi.org/10.1007/s11019-020-09948-1

Cadario, R., Longoni, C., & Morewedge, C. K. (2021). Understanding, explaining, and utilizing medical artificial intelligence. Nature Human Behaviour, 5(12), 1636–1642. https://doi.org/10.1038/s41562-021-01146-0

Center for Devices, & Radiological Health. (n.d.). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. U.S. Food and Drug Administration; FDA. Retrieved July 2, 2022, from https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing Machine Learning in Health Care - Addressing Ethical Challenges. The New England Journal of Medicine, 378(11), 981–983. https://doi.org/10.1056/NEJMp1714229

Chen, H., Zhang, Y., Kalra, M. K., Lin, F., Chen, Y., Liao, P., Zhou, J., & Wang, G. (2017). Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network. IEEE Transactions on Medical Imaging, 36(12), 2524–2535. https://doi.org/10.1109/TMI.2017.2715284

Chen, Y., Stavropoulou, C., Narasinkan, R., Baker, A., & Scarbrough, H. (2021). Professionals’ responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study. BMC Health Services Research, 21(1), 813. https://doi.org/10.1186/s12913-021-06861-y

Choe, J., Lee, S. M., Do, K.-H., Lee, G., Lee, J.-G., Lee, S. M., & Seo, J. B. (2019). Deep Learning-based Image Conversion of CT Reconstruction Kernels Improves Radiomics Reproducibility for Pulmonary Nodules or Masses. Radiology, 292(2), 365–373. https://doi.org/10.1148/radiol.2019181960

Choi, K. S., Choi, S. H., & Jeong, B. (2019). Prediction of IDH genotype in gliomas with dynamic susceptibility contrast perfusion MR imaging using an explainable recurrent neural network. Neuro-Oncology, 21(9), 1197–1209. https://doi.org/10.1093/neuonc/noz095

Chong, L. R., Tsai, K. T., Lee, L. L., Foo, S. G., & Chang, P. C. (2020). Artificial Intelligence Predictive Analytics in the Management of Outpatient MRI Appointment No-Shows. AJR. American Journal of Roentgenology, 215(5), 1155–1162. https://doi.org/10.2214/AJR.19.22594

Cikes, M., Sanchez-Martinez, S., Claggett, B., Duchateau, N., Piella, G., Butakoff, C., Pouleur, A. C., Knappe, D., Biering-Sørensen, T., Kutyifa, V., Moss, A., Stein, K., Solomon, S. D., & Bijnens, B. (2019). Machine learning-based phenogrouping in heart failure to identify responders to cardiac resynchronization therapy. European Journal of Heart Failure, 21(1), 74–85. https://doi.org/10.1002/ejhf.1333

Ciompi, F., Chung, K., van Riel, S. J., Setio, A. A. A., Gerke, P. K., Jacobs, C., Scholten, E. T., Schaefer-Prokop, C., Wille, M. M. W., Marchianò, A., Pastorino, U., Prokop, M., & van Ginneken, B. (2017). Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Scientific Reports, 7, 46479. https://doi.org/10.1038/srep46479

Clinical radiology UK workforce census 2019 report. (2019). https://www.rcr.ac.uk/publication/clinical-radiology-uk-workforce-census-2019-report

Cloud security for healthcare services. (2021, January 14). ENISA. https://www.enisa.europa.eu/publications/cloud-security-for-healthcare-services/

CONSORT-AI and SPIRIT-AI Steering Group. (2019). Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature Medicine, 25(10), 1467–1468. https://doi.org/10.1038/s41591-019-0603-3

Curated marketplace. (2018, May 22). Blackford. https://www.blackfordanalysis.com/applications/

Dance, A. (2021). AI spots cell structures that humans can’t. Nature, 592(7852), 154–155.

Dantas, L. F., Fleck, J. L., Cyrino Oliveira, F. L., & Hamacher, S. (2018). No-shows in appointment scheduling - a systematic literature review. Health Policy, 122(4), 412–421. https://doi.org/10.1016/j.healthpol.2018.02.002

Deák, Z., Grimm, J. M., Treitl, M., Geyer, L. L., Linsenmaier, U., Körner, M., Reiser, M. F., & Wirth, S. (2013). Filtered back projection, adaptive statistical iterative reconstruction, and a model-based iterative reconstruction in abdominal CT: an experimental clinical study. Radiology, 266(1), 197–206. https://doi.org/10.1148/radiol.12112707

Dembrower, K., Liu, Y., Azizpour, H., Eklund, M., Smith, K., Lindholm, P., & Strand, F. (2020). Comparison of a Deep Learning Risk Score and Standard Mammographic Density Score for Breast Cancer Risk Prediction. Radiology, 294(2), 265–272. https://doi.org/10.1148/radiol.2019190872

Do, B. H., Langlotz, C., & Beaulieu, C. F. (2017). Bone Tumor Diagnosis Using a Naïve Bayesian Model of Demographic and Radiographic Features. Journal of Digital Imaging, 30(5), 640–647. https://doi.org/10.1007/s10278-017-0001-7

Dou, Q., Yu, L., Chen, H., Jin, Y., Yang, X., Qin, J., & Heng, P.-A. (2017). 3D deeply supervised network for automated segmentation of volumetric medical images. Medical Image Analysis, 41, 40–54. https://doi.org/10.1016/j.media.2017.05.001

Eche, T., Schwartz, L. H., Mokrane, F.-Z., & Dercle, L. (2021). Toward Generalizability in the Deployment of Artificial Intelligence in Radiology: Role of Computation Stress Testing to Overcome Underspecification. Radiology. Artificial Intelligence, 3(6), e210097. https://doi.org/10.1148/ryai.2021210097

England, N. H. S., & Improvement, N. H. S. (2019). NHS diagnostic waiting times and activity data. NHS. https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2021/12/DWTA-Report-October-2021_M43D4.pdf

Esmaeilzadeh, P. (2020). Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives. BMC Medical Informatics and Decision Making, 20(1), 170. https://doi.org/10.1186/s12911-020-01191-1

Esses, S. J., Lu, X., Zhao, T., Shanbhogue, K., Dane, B., Bruno, M., & Chandarana, H. (2018). Automated image quality evaluation of T2 -weighted liver MRI utilizing deep learning architecture. Journal of Magnetic Resonance Imaging: JMRI, 47(3), 723–728. https://doi.org/10.1002/jmri.25779

European Society of Radiology (ESR). (2022). Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology. Insights into Imaging, 13(1), 107. https://doi.org/10.1186/s13244-022-01247-y

Faron, A., Sichtermann, T., Teichert, N., Luetkens, J. A., Keulers, A., Nikoubashman, O., Freiherr, J., Mpotsaris, A., & Wiesmann, M. (2020). Performance of a Deep-Learning Neural Network to Detect Intracranial Aneurysms from 3D TOF-MRA Compared to Human Readers. Clinical Neuroradiology, 30(3), 591–598. https://doi.org/10.1007/s00062-019-00809-w

Feng, J., Phillips, R. V., Malenica, I., Bishara, A., Hubbard, A. E., Celi, L. A., & Pirracchio, R. (2022). Clinical artificial intelligence quality improvement: towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digital Medicine, 5(1), 66. https://doi.org/10.1038/s41746-022-00611-y

Finlayson, S. G., Chung, H. W., Kohane, I. S., & Beam, A. L. (2018). Adversarial Attacks Against Medical Deep Learning Systems. In arXiv [cs.CR]. arXiv.

Flanders, A. E., Prevedello, L. M., Shih, G., Halabi, S. S., Kalpathy-Cramer, J., Ball, R., Mongan, J. T., Stein, A., Kitamura, F. C., Lungren, M. P., Choudhary, G., Cala, L., Coelho, L., Mogensen, M., Morón, F., Miller, E., Ikuta, I., Zohrabian, V., McDonnell, O., … RSNA-ASNR 2019 Brain Hemorrhage CT Annotators. (2020). Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge. Radiology. Artificial Intelligence, 2(3), e190211. https://doi.org/10.1148/ryai.2020190211

Freeman, K., Geppert, J., Stinton, C., Todkill, D., Johnson, S., Clarke, A., & Taylor-Phillips, S. (2021). Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ, 374, n1872. https://doi.org/10.1136/bmj.n1872

General Data Protection Regulation (GDPR) – Official Legal Text. (2016, July 13). General Data Protection Regulation (GDPR). https://gdpr-info.eu/

Ghafur, S., Van Dael, J., Leis, M., Darzi, A., & Sheikh, A. (2020). Public perceptions on data sharing: key insights from the UK and the USA. The Lancet. Digital Health, 2(9), e444–e446. https://doi.org/10.1016/S2589-7500(20)30161-8

Ghani, M. U., & Clem Karl, W. (2019). Fast Enhanced CT Metal Artifact Reduction using Data Domain Deep Learning. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1904.04691

Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet. Digital Health, 3(11), e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9

Ginat, D. T. (2020). Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage. Neuroradiology, 62(3), 335–340. https://doi.org/10.1007/s00234-019-02330-w

Goebel, J., Stenzel, E., Guberina, N., Wanke, I., Koehrmann, M., Kleinschnitz, C., Umutlu, L., Forsting, M., Moenninghoff, C., & Radbruch, A. (2018). Automated ASPECT rating: comparison between the Frontier ASPECT Score software and the Brainomix software. Neuroradiology, 60(12), 1267–1272. https://doi.org/10.1007/s00234-018-2098-x

Habli, I., Lawton, T., & Porter, Z. (2020). Artificial intelligence in health care: accountability and safety. Bulletin of the World Health Organization, 98(4), 251–256. https://doi.org/10.2471/BLT.19.237487

Halabi, S. S., Prevedello, L. M., Kalpathy-Cramer, J., Mamonov, A. B., Bilbily, A., Cicero, M., Pan, I., Pereira, L. A., Sousa, R. T., Abdala, N., Kitamura, F. C., Thodberg, H. H., Chen, L., Shih, G., Andriole, K., Kohli, M. D., Erickson, B. J., & Flanders, A. E. (2019). The RSNA Pediatric Bone Age Machine Learning Challenge. Radiology, 290(2), 498–503. https://doi.org/10.1148/radiol.2018180736

Hargreaves, B. A., Worters, P. W., Pauly, K. B., Pauly, J. M., Koch, K. M., & Gold, G. E. (2011). Metal-induced artifacts in MRI. AJR. American Journal of Roentgenology, 197(3), 547–555. https://doi.org/10.2214/AJR.11.7364

Harry, E., Sinsky, C., Dyrbye, L. N., Makowski, M. S., Trockel, M., Tutty, M., Carlasare, L. E., West, C. P., & Shanafelt, T. D. (2021). Physician Task Load and the Risk of Burnout Among US Physicians in a National Survey. Joint Commission Journal on Quality and Patient Safety / Joint Commission Resources, 47(2), 76–85. https://doi.org/10.1016/j.jcjq.2020.09.011

Hata, A., Yanagawa, M., Yamagata, K., Suzuki, Y., Kido, S., Kawata, A., Doi, S., Yoshida, Y., Miyata, T., Tsubamoto, M., Kikuchi, N., & Tomiyama, N. (2021). Deep learning algorithm for detection of aortic dissection on non-contrast-enhanced CT. European Radiology, 31(2), 1151–1159. https://doi.org/10.1007/s00330-020-07213-w

Hauptmann, A., Arridge, S., Lucka, F., Muthurangu, V., & Steeden, J. A. (2019). Real-time cardiovascular MR with spatio-temporal artifact suppression using deep learning-proof of concept in congenital heart disease. Magnetic Resonance in Medicine: Official Journal of the Society of Magnetic Resonance in Medicine / Society of Magnetic Resonance in Medicine, 81(2), 1143–1156. https://doi.org/10.1002/mrm.27480

Health Ethics & Governance. (2021, June 28). Ethics and governance of artificial intelligence for health. World Health Organization. https://www.who.int/publications/i/item/9789240029200

He, L., Li, H., Dudley, J. A., Maloney, T. C., Brady, S. L., Somasundaram, E., Trout, A. T., & Dillman, J. R. (2019). Machine Learning Prediction of Liver Stiffness Using Clinical and T2-Weighted MRI Radiomic Data. AJR. American Journal of Roentgenology, 213(3), 592–601. https://doi.org/10.2214/AJR.19.21082

Herent, P., Schmauch, B., Jehanno, P., Dehaene, O., Saillard, C., Balleyguier, C., Arfi-Rouche, J., & Jégou, S. (2019). Detection and characterization of MRI breast lesions using deep learning. Diagnostic and Interventional Imaging, 100(4), 219–225. https://doi.org/10.1016/j.diii.2019.02.008

Hinton, B., Ma, L., Mahmoudzadeh, A. P., Malkov, S., Fan, B., Greenwood, H., Joe, B., Lee, V., Kerlikowske, K., & Shepherd, J. (2019). Deep learning networks find unique mammographic differences in previous negative mammograms between interval and screen-detected cancers: a case-case study. Cancer Imaging: The Official Publication of the International Cancer Imaging Society, 19(1), 41. https://doi.org/10.1186/s40644-019-0227-3

Holzinger, A., Biemann, C., Pattichis, C. S., & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain? In arXiv [cs.AI]. arXiv. http://arxiv.org/abs/1712.09923

Hötker, A. M., Da Mutten, R., Tiessen, A., Konukoglu, E., & Donati, O. F. (2021). Improving workflow in prostate MRI: AI-based decision-making on biparametric or multiparametric MRI. Insights into Imaging, 12(1), 112. https://doi.org/10.1186/s13244-021-01058-7

Huang, S.-C., Kothari, T., Banerjee, I., Chute, C., Ball, R. L., Borus, N., Huang, A., Patel, B. N., Rajpurkar, P., Irvin, J., Dunnmon, J., Bledsoe, J., Shpanskaya, K., Dhaliwal, A., Zamanian, R., Ng, A. Y., & Lungren, M. P. (2020). PENet-a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging. NPJ Digital Medicine, 3, 61. https://doi.org/10.1038/s41746-020-0266-y

Huang, S.-C., Pareek, A., Seyyedi, S., Banerjee, I., & Lungren, M. P. (2020). Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. NPJ Digital Medicine, 3, 136. https://doi.org/10.1038/s41746-020-00341-z

Huisman, M., Ranschaert, E., Parker, W., Mastrodicasa, D., Koci, M., Pinto de Santos, D., Coppola, F., Morozov, S., Zins, M., Bohyn, C., Koç, U., Wu, J., Veean, S., Fleischmann, D., Leiner, T., & Willemink, M. J. (2021). An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. European Radiology, 31(9), 7058–7066. https://doi.org/10.1007/s00330-021-07781-5

Hu, S.-Y., Santus, E., Forsyth, A. W., Malhotra, D., Haimson, J., Chatterjee, N. A., Kramer, D. B., Barzilay, R., Tulsky, J. A., & Lindvall, C. (2019). Can machine learning improve patient selection for cardiac resynchronization therapy? PloS One, 14(10), e0222397. https://doi.org/10.1371/journal.pone.0222397

Hwang, E. J., Nam, J. G., Lim, W. H., Park, S. J., Jeong, Y. S., Kang, J. H., Hong, E. K., Kim, T. M., Goo, J. M., Park, S., Kim, K. H., & Park, C. M. (2019). Deep Learning for Chest Radiograph Diagnosis in the Emergency Department. Radiology, 293(3), 573–580. https://doi.org/10.1148/radiol.2019191225

Hwang, E. J., Park, S., Jin, K.-N., Kim, J. I., Choi, S. Y., Lee, J. H., Goo, J. M., Aum, J., Yim, J.-J., Park, C. M., & Deep Learning-Based Automatic Detection Algorithm Development and Evaluation Group. (2019). Development and Validation of a Deep Learning-based Automatic Detection Algorithm for Active Pulmonary Tuberculosis on Chest Radiographs. Clinical Infectious Diseases: An Official Publication of the Infectious Diseases Society of America, 69(5), 739–747. https://doi.org/10.1093/cid/ciy967

Hwang, S., Kim, H.-E., Jeong, J., & Kim, H.-J. (2016). A novel approach for tuberculosis screening based on deep convolutional neural networks. In G. D. Tourassi & S. G. Armato (Eds.), Medical Imaging 2016: Computer-Aided Diagnosis. SPIE. https://doi.org/10.1117/12.2216198

IBM Watson Studio - Model Risk Management. (n.d.). Retrieved June 11, 2022, from https://www.ibm.com/cloud/watson-studio/model-risk-management

Imaging AI Marketplace - overview. (n.d.). Retrieved June 11, 2022, from https://www.ibm.com/products/imaging-ai-marketplace

Jamaludin, A., Lootus, M., Kadir, T., Zisserman, A., Urban, J., Battié, M. C., Fairbank, J., McCall, I., & Genodisc Consortium. (2017). ISSLS PRIZE IN BIOENGINEERING SCIENCE 2017: Automation of reading of radiological features from magnetic resonance images (MRIs) of the lumbar spine without human intervention is comparable with an expert radiologist. European Spine Journal: Official Publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society, 26(5), 1374–1383. https://doi.org/10.1007/s00586-017-4956-3

Kaissis, G. A., Makowski, M. R., Rückert, D., & Braren, R. F. (2020). Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2(6), 305–311. https://doi.org/10.1038/s42256-020-0186-1

Kaissis, G., Ziller, A., Passerat-Palmbach, J., Ryffel, T., Usynin, D., Trask, A., Lima, I., Mancuso, J., Jungmann, F., Steinborn, M.-M., Saleh, A., Makowski, M., Rueckert, D., & Braren, R. (2021). End-to-end privacy preserving deep learning on multi-institutional medical imaging. Nature Machine Intelligence, 3(6), 473–484. https://doi.org/10.1038/s42256-021-00337-8

Kalra, A., Chakraborty, A., Fine, B., & Reicher, J. (2020). Machine Learning for Automation of Radiology Protocols for Quality and Efficiency Improvement. Journal of the American College of Radiology: JACR, 17(9), 1149–1158. https://doi.org/10.1016/j.jacr.2020.03.012

Kao, P.-Y., Chen, J. W., & Manjunath, B. S. (2019). Improving 3D U-Net for Brain Tumor Segmentation by Utilizing Lesion Prior. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1907.00281

Kapoor, N., Lacson, R., & Khorasani, R. (2020). Workflow Applications of Artificial Intelligence in Radiology and an Overview of Available Tools. Journal of the American College of Radiology: JACR, 17(11), 1363–1370. https://doi.org/10.1016/j.jacr.2020.08.016

Kathirvelu, D., Vinupritha, P., & Kalpana, V. (2019). A computer aided diagnosis system for measurement of mandibular cortical thickness on dental panoramic radiographs in prediction of women with low bone mineral density. Journal of Medical Systems, 43(6), 148. https://doi.org/10.1007/s10916-019-1268-7

Ker, J., Singh, S. P., Bai, Y., Rao, J., Lim, T., & Wang, L. (2019). Image Thresholding Improves 3-Dimensional Convolutional Neural Network Diagnosis of Different Acute Brain Hemorrhages on Computed Tomography Scans. Sensors, 19(9). https://doi.org/10.3390/s19092167

Khan, F. A., Majidulla, A., Tavaziva, G., Nazish, A., Abidi, S. K., Benedetti, A., Menzies, D., Johnston, J. C., Khan, A. J., & Saeed, S. (2020). Chest x-ray analysis with deep learning-based software as a triage test for pulmonary tuberculosis: a prospective study of diagnostic accuracy for culture-confirmed disease. The Lancet. Digital Health, 2(11), e573–e581. https://doi.org/10.1016/S2589-7500(20)30221-1

Kim, D. W., Jang, H. Y., Kim, K. W., Shin, Y., & Park, S. H. (2019). Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers. Korean Journal of Radiology: Official Journal of the Korean Radiological Society, 20(3), 405–410. https://doi.org/10.3348/kjr.2019.0025

Kim, K. H., & Park, S.-H. (2017). Artificial neural network for suppression of banding artifacts in balanced steady-state free precession MRI. Magnetic Resonance Imaging, 37, 139–146. https://doi.org/10.1016/j.mri.2016.11.020

Korteling, J. E. H., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus Artificial Intelligence. Frontiers in Artificial Intelligence, 4, 622364. https://doi.org/10.3389/frai.2021.622364

Kühl, N., Goutier, M., Baier, L., Wolff, C., & Martin, D. (2020). Human vs. supervised machine learning: Who learns patterns faster? In arXiv [cs.AI]. arXiv. http://arxiv.org/abs/2012.03661

Kuo, W., Häne, C., Mukherjee, P., Malik, J., & Yuh, E. L. (2019). Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning. Proceedings of the National Academy of Sciences of the United States of America, 116(45), 22737–22745. https://doi.org/10.1073/pnas.1908021116

Langerhuizen, D. W. G., Janssen, S. J., Mallee, W. H., van den Bekerom, M. P. J., Ring, D., Kerkhoffs, G. M. M. J., Jaarsma, R. L., & Doornberg, J. N. (2019). What Are the Applications and Limitations of Artificial Intelligence for Fracture Detection and Classification in Orthopaedic Trauma Imaging? A Systematic Review. Clinical Orthopaedics and Related Research, 477(11), 2482–2491. https://doi.org/10.1097/CORR.0000000000000848

Lang, N., Zhang, Y., Zhang, E., Zhang, J., Chow, D., Chang, P., Yu, H. J., Yuan, H., & Su, M.-Y. (2019). Differentiation of spinal metastases originated from lung and other cancers using radiomics and deep learning based on DCE-MRI. Magnetic Resonance Imaging, 64, 4–12. https://doi.org/10.1016/j.mri.2019.02.013

Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H., & Ferrante, E. (2020). Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proceedings of the National Academy of Sciences of the United States of America, 117(23), 12592–12594. https://doi.org/10.1073/pnas.1919012117

Lee, J.-S., Adhikari, S., Liu, L., Jeong, H.-G., Kim, H., & Yoon, S.-J. (2019). Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study. Dento Maxillo Facial Radiology, 48(1), 20170344. https://doi.org/10.1259/dmfr.20170344

Lee, Y. H. (2018). Efficiency Improvement in a Busy Radiology Practice: Determination of Musculoskeletal Magnetic Resonance Imaging Protocol Using Deep-Learning Convolutional Neural Networks. Journal of Digital Imaging, 31(5), 604–610. https://doi.org/10.1007/s10278-018-0066-y

Leiner, T., Bennink, E., Mol, C. P., Kuijf, H. J., & Veldhuis, W. B. (2021). Bringing AI to the clinic: blueprint for a vendor-neutral AI deployment infrastructure. Insights into Imaging, 12(1), 11. https://doi.org/10.1186/s13244-020-00931-1

Lekadir, K., Osuala, R., Gallin, C., Lazrak, N., Kushibar, K., Tsakou, G., Aussó, S., Alberich, L. C., Marias, K., Tsiknakis, M., Colantonio, S., Papanikolaou, N., Salahuddin, Z., Woodruff, H. C., Lambin, P., & Martí-Bonmatí, L. (2021). FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/2109.09658

Letourneau-Guillon, L., Camirand, D., Guilbert, F., & Forghani, R. (2020). Artificial Intelligence Applications for Workflow, Process Optimization and Predictive Analytics. Neuroimaging Clinics of North America, 30(4), e1–e15. https://doi.org/10.1016/j.nic.2020.08.008

Levin, D. C., Parker, L., & Rao, V. M. (2017). Recent Trends in Imaging Use in Hospital Settings: Implications for Future Planning. Journal of the American College of Radiology: JACR, 14(3), 331–336. https://doi.org/10.1016/j.jacr.2016.08.025

Lindsey, R., Daluiski, A., Chopra, S., Lachapelle, A., Mozer, M., Sicular, S., Hanel, D., Gardner, M., Gupta, A., Hotchkiss, R., & Potter, H. (2018). Deep neural network improves fracture detection by clinicians. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11591–11596. https://doi.org/10.1073/pnas.1806905115

Liu, F., Tang, J., Ma, J., Wang, C., Ha, Q., Yu, Y., & Zhou, Z. (2021). The application of artificial intelligence to chest medical image analysis. Intelligent Medicine, 1(3), 104–117. https://doi.org/10.1016/j.imed.2021.06.004

Liu, F., Zhou, Z., Samsonov, A., Blankenbaker, D., Larison, W., Kanarek, A., Lian, K., Kambhampati, S., & Kijowski, R. (2018). Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection. Radiology, 289(1), 160–169. https://doi.org/10.1148/radiol.2018172986

Liu, X., Cruz Rivera, S., Moher, D., Calvert, M. J., Denniston, A. K., & SPIRIT-AI and CONSORT-AI Working Group. (2020). Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nature Medicine, 26(9), 1364–1374. https://doi.org/10.1038/s41591-020-1034-x

Liu, X., Faes, L., Kale, A. U., Wagner, S. K., Fu, D. J., Bruynseels, A., Mahendiran, T., Moraes, G., Shamdas, M., Kern, C., Ledsam, J. R., Schmid, M. K., Balaskas, K., Topol, E. J., Bachmann, L. M., Keane, P. A., & Denniston, A. K. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet. Digital Health, 1(6), e271–e297. https://doi.org/10.1016/S2589-7500(19)30123-2

Li, X., Shen, L., Xie, X., Huang, S., Xie, Z., Hong, X., & Yu, J. (2020). Multi-resolution convolutional networks for chest X-ray radiograph based lung nodule detection. Artificial Intelligence in Medicine, 103, 101744. https://doi.org/10.1016/j.artmed.2019.101744

Lotan, E., Tschider, C., Sodickson, D. K., Caplan, A. L., Bruno, M., Zhang, B., & Lui, Y. W. (2020). Medical Imaging and Privacy in the Era of Artificial Intelligence: Myth, Fallacy, and the Future. Journal of the American College of Radiology: JACR, 17(9), 1159–1162. https://doi.org/10.1016/j.jacr.2020.04.007

Maegerlein, C., Fischer, J., Mönch, S., Berndt, M., Wunderlich, S., Seifert, C. L., Lehm, M., Boeckh-Behrens, T., Zimmer, C., & Friedrich, B. (2019). Automated Calculation of the Alberta Stroke Program Early CT Score: Feasibility and Reliability. Radiology, 291(1), 141–148. https://doi.org/10.1148/radiol.2019181228

Mairhöfer, D., Laufer, M., Simon, P. M., Sieren, M., Bischof, A., Käster, T., Barth, E., Barkhausen, J., & Martinetz, T. (2021). An AI-based Framework for Diagnostic Quality Assessment of Ankle Radiographs. https://openreview.net/pdf?id=bj04hJss_xZ

Mancio, J., Pashakhanloo, F., El-Rewaidy, H., Jang, J., Joshi, G., Csecs, I., Ngo, L., Rowin, E., Manning, W., Maron, M., & Nezafat, R. (2022). Machine learning phenotyping of scarred myocardium from cine in hypertrophic cardiomyopathy. European Heart Journal Cardiovascular Imaging, 23(4), 532–542. https://doi.org/10.1093/ehjci/jeab056

Matsoukas, S., Morey, J., Lock, G., Chada, D., Shigematsu, T., Marayati, N. F., Delman, B. N., Doshi, A., Majidi, S., De Leacy, R., Kellner, C. P., & Fifi, J. T. (2022). AI software detection of large vessel occlusion stroke on CT angiography: a real-world prospective diagnostic test accuracy study. Journal of Neurointerventional Surgery. https://doi.org/10.1136/neurintsurg-2021-018391

McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., … Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6

McLeavy, C. M., Chunara, M. H., Gravell, R. J., Rauf, A., Cushnie, A., Staley Talbot, C., & Hawkins, R. M. (2021). The future of CT: deep learning reconstruction. Clinical Radiology, 76(6), 407–415. https://doi.org/10.1016/j.crad.2021.01.010

Medical AI evaluation. (n.d.). Retrieved June 26, 2022, from https://ericwu09.github.io/medical-ai-evaluation/

Mlynarski, P., Delingette, H., Criminisi, A., & Ayache, N. (2019). Deep learning with mixed supervision for brain tumor segmentation. Journal of Medical Imaging (Bellingham, Wash.), 6(3), 034002. https://doi.org/10.1117/1.JMI.6.3.034002

Mongan, J., Moy, L., & Kahn, C. E., Jr. (2020). Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiology. Artificial Intelligence, 2(2), e200029. https://doi.org/10.1148/ryai.2020200029

Moon, H., Huo, Y., Abramson, R. G., Peters, R. A., Assad, A., Moyo, T. K., Savona, M. R., & Landman, B. A. (2019). Acceleration of spleen segmentation with end-to-end deep learning method and automated pipeline. Computers in Biology and Medicine, 107, 109–117. https://doi.org/10.1016/j.compbiomed.2019.01.018

Morey, J. R., Zhang, X., Yaeger, K. A., Fiano, E., Marayati, N. F., Kellner, C. P., De Leacy, R. A., Doshi, A., Tuhrim, S., & Fifi, J. T. (2021). Real-World Experience with Artificial Intelligence-Based Triage in Transferred Large Vessel Occlusion Stroke Patients. Cerebrovascular Diseases, 50(4), 450–455. https://doi.org/10.1159/000515320

Murdoch, B. (2021). Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), 122. https://doi.org/10.1186/s12910-021-00687-3

Murray, N. M., Unberath, M., Hager, G. D., & Hui, F. K. (2020). Artificial intelligence to diagnose ischemic stroke and identify large vessel occlusions: a systematic review. Journal of Neurointerventional Surgery, 12(2), 156–164. https://doi.org/10.1136/neurintsurg-2019-015135

Nagendran, M., Chen, Y., Lovejoy, C. A., Gordon, A. C., Komorowski, M., Harvey, H., Topol, E. J., Ioannidis, J. P. A., Collins, G. S., & Maruthappu, M. (2020). Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 368. https://doi.org/10.1136/bmj.m689

Nair, T., Precup, D., Arnold, D. L., & Arbel, T. (2020). Exploring uncertainty measures in deep networks for Multiple sclerosis lesion detection and segmentation. Medical Image Analysis, 59, 101557. https://doi.org/10.1016/j.media.2019.101557

Nakao, T., Hanaoka, S., Nomura, Y., Sato, I., Nemoto, M., Miki, S., Maeda, E., Yoshikawa, T., Hayashi, N., & Abe, O. (2018). Deep neural network-based computer-assisted detection of cerebral aneurysms in MR angiography. Journal of Magnetic Resonance Imaging: JMRI, 47(4), 948–953. https://doi.org/10.1002/jmri.25842

Nam, J. G., Kim, M., Park, J., Hwang, E. J., Lee, J. H., Hong, J. H., Goo, J. M., & Park, C. M. (2021). Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs. The European Respiratory Journal: Official Journal of the European Society for Clinical Respiratory Physiology, 57(5). https://doi.org/10.1183/13993003.03061-2020

Narayana, P. A., Coronado, I., Sujit, S. J., Wolinsky, J. S., Lublin, F. D., & Gabr, R. E. (2020). Deep Learning for Predicting Enhancing Lesions in Multiple Sclerosis from Noncontrast MRI. Radiology, 294(2), 398–404. https://doi.org/10.1148/radiol.2019191061

National Institute for Health and Care Excellence (NICE). (n.d.). Evidence standards framework for digital health technologies. Retrieved June 10, 2022, from https://www.nice.org.uk/corporate/ecd7

Neisius, U., El-Rewaidy, H., Nakamori, S., Rodriguez, J., Manning, W. J., & Nezafat, R. (2019). Radiomic Analysis of Myocardial Native T1 Imaging Discriminates Between Hypertensive Heart Disease and Hypertrophic Cardiomyopathy. JACC. Cardiovascular Imaging, 12(10), 1946–1954. https://doi.org/10.1016/j.jcmg.2018.11.024

Nelson, A., Herron, D., Rees, G., & Nachev, P. (2019). Predicting scheduled hospital attendance with artificial intelligence. Npj Digital Medicine, 2(1), 26. https://doi.org/10.1038/s41746-019-0103-3

Nielsen, A., Hansen, M. B., Tietze, A., & Mouridsen, K. (2018). Prediction of Tissue Outcome and Assessment of Treatment Effect in Acute Ischemic Stroke Using Deep Learning. Stroke; a Journal of Cerebral Circulation, STROKEAHA.117.019740. https://doi.org/10.1161/STROKEAHA.117.019740

Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns (New York, N.Y.), 2(10), 100347. https://doi.org/10.1016/j.patter.2021.100347

O’Connor, S. D., & Bhalla, M. (2021). Should Artificial Intelligence Tell Radiologists Which Study to Read Next? [Review of Should Artificial Intelligence Tell Radiologists Which Study to Read Next?]. Radiology. Artificial Intelligence, 3(2), e210009. https://doi.org/10.1148/ryai.2021210009

Office for Civil Rights (OCR). (2012, September 7). Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. HHS.gov; US Department of Health and Human Services. https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html

Oktay, O., Schlemper, J., Le Folgoc, L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N. Y., Kainz, B., Glocker, B., & Rueckert, D. (2018). Attention U-Net: Learning Where to Look for the Pancreas. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1804.03999

Olczak, J., Fahlberg, N., Maki, A., Razavian, A. S., Jilert, A., Stark, A., Sköldenberg, O., & Gordon, M. (2017). Artificial intelligence for analyzing orthopedic trauma radiographs. Acta Orthopaedica, 88(6), 581–586. https://doi.org/10.1080/17453674.2017.1344459

Olthof, A. W., van Ooijen, P. M. A., & Rezazade Mehrizi, M. H. (2020). Promises of artificial intelligence in neuroradiology: a systematic technographic review. Neuroradiology, 62(10), 1265–1278. https://doi.org/10.1007/s00234-020-02424-w

Omoumi, P., Ducarouge, A., Tournier, A., Harvey, H., Kahn, C. E., Jr, Louvet-de Verchère, F., Pinto Dos Santos, D., Kober, T., & Richiardi, J. (2021). To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). European Radiology, 31(6), 3786–3796. https://doi.org/10.1007/s00330-020-07684-x

O’Neill, T. J., Xi, Y., Stehel, E., Browning, T., Ng, Y. S., Baker, C., & Peshock, R. M. (2021). Active Reprioritization of the Reading Worklist Using Artificial Intelligence Has a Beneficial Effect on the Turnaround Time for Interpretation of Head CT with Intracranial Hemorrhage. Radiology. Artificial Intelligence, 3(2), e200024. https://doi.org/10.1148/ryai.2020200024

Ooi, S. K. G., Makmur, A., Soon, A. Y. Q., Fook-Chong, S., Liew, C., Sia, S. Y., Ting, Y. H., & Lim, C. Y. (2021). Attitudes toward artificial intelligence in radiology with learner needs assessment within radiology residency programmes: a national multi-programme survey. Singapore Medical Journal, 62(3), 126–134. https://doi.org/10.11622/smedj.2019141

Pan, Y., Shi, D., Wang, H., Chen, T., Cui, D., Cheng, X., & Lu, Y. (2020). Automatic opportunistic osteoporosis screening using low-dose chest computed tomography scans obtained for lung cancer screening. European Radiology, 30(7), 4107–4116. https://doi.org/10.1007/s00330-020-06679-y

Park, H. J., Kim, S. M., La Yun, B., Jang, M., Kim, B., Jang, J. Y., Lee, J. Y., & Lee, S. H. (2019). A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine, 98(3), e14146. https://doi.org/10.1097/MD.0000000000014146

Price, W. N., II. (2019). Medical AI and Contextual Bias. https://papers.ssrn.com/abstract=3347890

Puvanasunthararajah, S., Fontanarosa, D., Wille, M.-L., & Camps, S. M. (2021). The application of metal artifact reduction methods on computed tomography scans for radiotherapy applications: A literature review. Journal of Applied Clinical Medical Physics / American College of Medical Physics, 22(6), 198–223. https://doi.org/10.1002/acm2.13255

Qin, Z. Z., Sander, M. S., Rai, B., Titahong, C. N., Sudrungrot, S., Laah, S. N., Adhikari, L. M., Carter, E. J., Puri, L., Codlin, A. J., & Creswell, J. (2019). Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems. Scientific Reports, 9(1), 15000. https://doi.org/10.1038/s41598-019-51503-3

Ramspek, C. L., Jager, K. J., Dekker, F. W., Zoccali, C., & van Diepen, M. (2021). External validation of prognostic models: what, why, how, when and where?

Rao, B., Zohrabian, V., Cedeno, P., Saha, A., Pahade, J., & Davis, M. A. (2021). Utility of Artificial Intelligence Tool as a Prospective Radiology Peer Reviewer - Detection of Unreported Intracranial Hemorrhage. Academic Radiology, 28(1), 85–93. https://doi.org/10.1016/j.acra.2020.01.035

Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association: JAMIA, 27(3), 491–497. https://doi.org/10.1093/jamia/ocz192

Reddy, S., Rogers, W., Makinen, V.-P., Coiera, E., Brown, P., Wenzel, M., Weicken, E., Ansari, S., Mathur, P., Casey, A., & Kelly, B. (2021). Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100444

Reyes, M., Meier, R., Pereira, S., Silva, C. A., Dahlweid, F.-M., von Tengg-Kobligk, H., Summers, R. M., & Wiest, R. (2020). On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities. Radiology. Artificial Intelligence, 2(3), e190043. https://doi.org/10.1148/ryai.2020190043

Rezazade Mehrizi, M. H., van Ooijen, P., & Homan, M. (2021). Applications of artificial intelligence (AI) in diagnostic radiology: a technography study. European Radiology, 31(4), 1805–1811. https://doi.org/10.1007/s00330-020-07230-9

Richardson, J. P., Smith, C., Curtis, S., Watson, S., Zhu, X., Barry, B., & Sharp, R. R. (2021). Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digital Medicine, 4(1), 140. https://doi.org/10.1038/s41746-021-00509-1

Richardson, M. L., Garwood, E. R., Lee, Y., Li, M. D., Lo, H. S., Nagaraju, A., Nguyen, X. V., Probyn, L., Rajiah, P., Sin, J., Wasnik, A. P., & Xu, K. (2021). Noninterpretive Uses of Artificial Intelligence in Radiology. Academic Radiology, 28(9), 1225–1235. https://doi.org/10.1016/j.acra.2020.01.012

Rockenbach, M. A. B. (2021, June 13). Multimodal AI in healthcare: Closing the gaps. CodeX. https://medium.com/codex/multimodal-ai-in-healthcare-1f5152e83be2

Rodríguez-Ruiz, A., Krupinski, E., Mordang, J.-J., Schilling, K., Heywang-Köbrunner, S. H., Sechopoulos, I., & Mann, R. M. (2019). Detection of Breast Cancer with Mammography: Effect of an Artificial Intelligence Support System. Radiology, 290(2), 305–314. https://doi.org/10.1148/radiol.2018181371

Rodriguez-Ruiz, A., Lång, K., Gubern-Merida, A., Broeders, M., Gennaro, G., Clauser, P., Helbich, T. H., Chevalier, M., Tan, T., Mertelmeier, T., Wallis, M. G., Andersson, I., Zackrisson, S., Mann, R. M., & Sechopoulos, I. (2019). Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists. Journal of the National Cancer Institute, 111(9), 916–922. https://doi.org/10.1093/jnci/djy222

Santomartino, S. M., & Yi, P. H. (2022). Systematic Review of Radiologist and Medical Student Attitudes on the Role and Impact of AI in Radiology. Academic Radiology. https://doi.org/10.1016/j.acra.2021.12.032

Schemmel, A., Lee, M., Hanley, T., Pooler, B. D., Kennedy, T., Field, A., Wiegmann, D., & Yu, J.-P. J. (2016). Radiology Workflow Disruptors: A Detailed Analysis. Journal of the American College of Radiology: JACR, 13(10), 1210–1214. https://doi.org/10.1016/j.jacr.2016.04.009

Schreiber-Zinaman, J., & Rosenkrantz, A. B. (2017). Frequency and reasons for extra sequences in clinical abdominal MRI examinations. Abdominal Radiology (New York), 42(1), 306–311. https://doi.org/10.1007/s00261-016-0877-6

Scott, I. A., Carter, S. M., & Coiera, E. (2021). Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100450

Seah, J. C. Y., Tang, C. H. M., Buchlak, Q. D., Holt, X. G., Wardman, J. B., Aimoldin, A., Esmaili, N., Ahmad, H., Pham, H., Lambert, J. F., Hachey, B., Hogg, S. J. F., Johnston, B. P., Bennett, C., Oakden-Rayner, L., Brotchie, P., & Jones, C. M. (2021). Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. The Lancet. Digital Health, 3(8), e496–e506. https://doi.org/10.1016/S2589-7500(21)00106-0

Sectra Amplifier Marketplace. (2021, July 5). Sectra Medical. https://medical.sectra.com/product/sectra-amplifier-marketplace/

Sermesant, M., Delingette, H., Cochet, H., Jaïs, P., & Ayache, N. (2021). Applications of artificial intelligence in cardiovascular imaging. Nature Reviews. Cardiology, 18(8), 600–609. https://doi.org/10.1038/s41569-021-00527-2

Setio, A. A. A., Traverso, A., de Bel, T., Berens, M. S. N., van den Bogaard, C., Cerello, P., Chen, H., Dou, Q., Fantacci, M. E., Geurts, B., Gugten, R. van der, Heng, P. A., Jansen, B., de Kaste, M. M. J., Kotov, V., Lin, J. Y.-H., Manders, J. T. M. C., Sóñora-Mengana, A., García-Naranjo, J. C., … Jacobs, C. (2017). Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Medical Image Analysis, 42, 1–13. https://doi.org/10.1016/j.media.2017.06.015

Seyyed-Kalantari, L., Zhang, H., McDermott, M. B. A., Chen, I. Y., & Ghassemi, M. (2021). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature Medicine, 27(12), 2176–2182. https://doi.org/10.1038/s41591-021-01595-0

Shan, H., Padole, A., Homayounieh, F., Kruger, U., Khera, R. D., Nitiwarangkul, C., Kalra, M. K., & Wang, G. (2019). Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction. Nature Machine Intelligence, 1(6), 269–276. https://doi.org/10.1038/s42256-019-0057-9

Sharma, K., Rupprecht, C., Caroli, A., Aparicio, M. C., Remuzzi, A., Baust, M., & Navab, N. (2017). Automatic Segmentation of Kidneys using Deep Learning for Total Kidney Volume Quantification in Autosomal Dominant Polycystic Kidney Disease. Scientific Reports, 7(1), 2049. https://doi.org/10.1038/s41598-017-01779-0

Shelmerdine, S. C., Arthurs, O. J., Denniston, A., & Sebire, N. J. (2021). Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100385

Shinagare, A. B., Ip, I. K., Abbett, S. K., Hanson, R., Seltzer, S. E., & Khorasani, R. (2014). Inpatient imaging utilization: trends of the past decade. AJR. American Journal of Roentgenology, 202(3), W277–W283. https://doi.org/10.2214/AJR.13.10986

Shlobin, N. A., Baig, A. A., Waqas, M., Patel, T. R., Dossani, R. H., Wilson, M., Cappuzzo, J. M., Siddiqui, A. H., Tutino, V. M., & Levy, E. I. (2022). Artificial Intelligence for Large-Vessel Occlusion Stroke: A Systematic Review. World Neurosurgery, 159, 207–220.e1. https://doi.org/10.1016/j.wneu.2021.12.004

Silberg, J., & Manyika, J. (2019, June 6). Tackling bias in artificial intelligence (and in humans). McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

Singh, S., Kalra, M. K., Hsieh, J., Licato, P. E., Do, S., Pien, H. H., & Blake, M. A. (2010). Abdominal CT: comparison of adaptive statistical iterative and filtered back projection reconstruction techniques. Radiology, 257(2), 373–383. https://doi.org/10.1148/radiol.10092212

Smith-Bindman, R., Kwan, M. L., Marlow, E. C., Theis, M. K., Bolch, W., Cheng, S. Y., Bowles, E. J. A., Duncan, J. R., Greenlee, R. T., Kushi, L. H., Pole, J. D., Rahm, A. K., Stout, N. K., Weinmann, S., & Miglioretti, D. L. (2019). Trends in Use of Medical Imaging in US Health Care Systems and in Ontario, Canada, 2000-2016. JAMA: The Journal of the American Medical Association, 322(9), 843–856. https://doi.org/10.1001/jama.2019.11456

Sutherland, G., Russell, N., Gibbard, R., & Dobrescu, A. (n.d.). The value of radiology, part II. https://car.ca/wp-content/uploads/2019/07/value-of-radiology-part-2-en.pdf

Tamada, D., Kromrey, M.-L., Ichikawa, S., Onishi, H., & Motosugi, U. (2020). Motion Artifact Reduction Using a Convolutional Neural Network for Dynamic Contrast Enhanced MR Imaging of the Liver. Magnetic Resonance in Medical Sciences: MRMS: An Official Journal of Japan Society of Magnetic Resonance in Medicine, 19(1), 64–76. https://doi.org/10.2463/mrms.mp.2018-0156

The Medical Futurist. (n.d.). The Medical Futurist. Retrieved February 23, 2022, from https://medicalfuturist.com/fda-approved-ai-based-algorithms/

The Nuance AI Marketplace for Diagnostic Imaging. (n.d.). https://www.nuance.com/content/dam/nuance/en_us/collateral/healthcare/data-sheet/ds-ai-marketplace-for-diagnostic-imaging-en-us.pdf

Thodberg, H. H., Kreiborg, S., Juul, A., & Pedersen, K. D. (2009). The BoneXpert method for automated determination of skeletal maturity. IEEE Transactions on Medical Imaging, 28(1), 52–66. https://doi.org/10.1109/TMI.2008.926067

Thomas, K. A., Kidziński, Ł., Halilaj, E., Fleming, S. L., Venkataraman, G. R., Oei, E. H. G., Gold, G. E., & Delp, S. L. (2020). Automated Classification of Radiographic Knee Osteoarthritis Severity Using Deep Neural Networks. Radiology. Artificial Intelligence, 2(2), e190065. https://doi.org/10.1148/ryai.2020190065

Towards trustable machine learning. (2018). Nature Biomedical Engineering, 2(10), 709–710. https://doi.org/10.1038/s41551-018-0315-x

Trinidad, M. G., Platt, J., & Kardia, S. L. R. (2020). The public’s comfort with sharing health data with third-party commercial companies. Humanities and Social Sciences Communications, 7(1), 1–10. https://doi.org/10.1057/s41599-020-00641-5

Trivedi, H., Mesterhazy, J., Laguna, B., Vu, T., & Sohn, J. H. (2018). Automatic Determination of the Need for Intravenous Contrast in Musculoskeletal MRI Examinations Using IBM Watson’s Natural Language Processing Algorithm. Journal of Digital Imaging, 31(2), 245–251. https://doi.org/10.1007/s10278-017-0021-3

Tsao, D. N. (2020, July 27). AI in medical diagnostics 2020-2030: Image recognition, players, clinical applications, forecasts: IDTechEx. https://www.idtechex.com/en/research-report/ai-in-medical-diagnostics-2020-2030-image-recognition-players-clinical-applications-forecasts/766

Tucci, V., Saary, J., & Doyle, T. E. (2022). Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. Journal of Medical Artificial Intelligence, 5, 4–4. https://doi.org/10.21037/jmai-21-25

Ueda, D., Yamamoto, A., Nishimori, M., Shimono, T., Doishita, S., Shimazaki, A., Katayama, Y., Fukumoto, S., Choppin, A., Shimahara, Y., & Miki, Y. (2019). Deep Learning for MR Angiography: Automated Detection of Cerebral Aneurysms. Radiology, 290(1), 187–194. https://doi.org/10.1148/radiol.2018180901

Urakawa, T., Tanaka, Y., Goto, S., Matsuzawa, H., Watanabe, K., & Endo, N. (2019). Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network. Skeletal Radiology, 48(2), 239–244. https://doi.org/10.1007/s00256-018-3016-3

van Leeuwen, K. G., Schalekamp, S., Rutten, M. J. C. M., van Ginneken, B., & de Rooij, M. (2021). Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. European Radiology, 31(6), 3797–3804. https://doi.org/10.1007/s00330-021-07892-z

Vayena, E., & Blasimme, A. (2017). Biomedical Big Data: New Models of Control Over Access, Use and Governance. Journal of Bioethical Inquiry, 14(4), 501–513. https://doi.org/10.1007/s11673-017-9809-6

Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689

Wang, J., Yang, F., Liu, W., Sun, J., Han, Y., Li, D., Gkoutos, G. V., Zhu, Y., & Chen, Y. (2020). Radiomic Analysis of Native T1 Mapping Images Discriminates Between MYH7 and MYBPC3-Related Hypertrophic Cardiomyopathy. Journal of Magnetic Resonance Imaging: JMRI, 52(6), 1714–1721. https://doi.org/10.1002/jmri.27209

Wang, S.-H., Tang, C., Sun, J., Yang, J., Huang, C., Phillips, P., & Zhang, Y.-D. (2018). Multiple Sclerosis Identification by 14-Layer Convolutional Neural Network With Batch Normalization, Dropout, and Stochastic Pooling. Frontiers in Neuroscience, 12, 818. https://doi.org/10.3389/fnins.2018.00818

Watanabe, A. T., Lim, V., Vu, H. X., Chim, R., Weise, E., Liu, J., Bradley, W. G., & Comstock, C. E. (2019). Improved Cancer Detection Using Artificial Intelligence: a Retrospective Evaluation of Missed Cancers on Mammography. Journal of Digital Imaging, 32(4), 625–637. https://doi.org/10.1007/s10278-019-00192-5

Weikert, T., Francone, M., Abbara, S., Baessler, B., Choi, B. W., Gutberlet, M., Hecht, E. M., Loewe, C., Mousseaux, E., Natale, L., Nikolaou, K., Ordovas, K. G., Peebles, C., Prieto, C., Salgado, R., Velthuis, B., Vliegenthart, R., Bremerich, J., & Leiner, T. (2021). Machine learning in cardiovascular radiology: ESCR position statement on design requirements, quality assessment, current applications, opportunities, and challenges. European Radiology, 31(6), 3909–3922. https://doi.org/10.1007/s00330-020-07417-0

Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf

WHO operational handbook on tuberculosis Module 2: Screening – Systematic screening for tuberculosis disease. (n.d.). Retrieved June 19, 2022, from https://www.who.int/publications-detail-redirect/9789240022614

Willemink, M. J., & Noël, P. B. (2019). The evolution of image reconstruction for CT-from filtered back projection to artificial intelligence. European Radiology, 29(5), 2185–2195. https://doi.org/10.1007/s00330-018-5810-7

Winder, M., Owczarek, A. J., Chudek, J., Pilch-Kowalczyk, J., & Baron, J. (2021). Are We Overdoing It? Changes in Diagnostic Imaging Workload during the Years 2010-2020 including the Impact of the SARS-CoV-2 Pandemic. Healthcare (Basel, Switzerland), 9(11). https://doi.org/10.3390/healthcare9111557

Wong, T. T., Kazam, J. K., & Rasiej, M. J. (2019). Effect of Analytics-Driven Worklists on Musculoskeletal MRI Interpretation Times in an Academic Setting. AJR. American Journal of Roentgenology, 1–5. https://doi.org/10.2214/AJR.18.20434

Wu, B., Zhou, Z., Wang, J., & Wang, Y. (2018). Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1109–1113. https://doi.org/10.1109/ISBI.2018.8363765

Wu, E., Wu, K., Daneshjou, R., Ouyang, D., Ho, D. E., & Zou, J. (2021). How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27(4), 582–584. https://doi.org/10.1038/s41591-021-01312-x

Wu, G.-G., Zhou, L.-Q., Xu, J.-W., Wang, J.-Y., Wei, Q., Deng, Y.-B., Cui, X.-W., & Dietrich, C. F. (2019). Artificial intelligence in breast ultrasound. World Journal of Radiology, 11(2), 19–26. https://doi.org/10.4329/wjr.v11.i2.19

Yala, A., Schuster, T., Miles, R., Barzilay, R., & Lehman, C. (2019). A Deep Learning Model to Triage Screening Mammograms: A Simulation Study. Radiology, 293(1), 38–46. https://doi.org/10.1148/radiol.2019182908

Yasaka, K., Akai, H., Kunimatsu, A., Abe, O., & Kiryu, S. (2018). Liver Fibrosis: Deep Convolutional Neural Network for Staging by Using Gadoxetic Acid-enhanced Hepatobiliary Phase MR Images. Radiology, 287(1), 146–155. https://doi.org/10.1148/radiol.2017171928

Yeung, K. (2018). A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework. https://papers.ssrn.com/abstract=3286027

Yoo, Y., Tang, L. Y. W., Li, D. K. B., Metz, L., Kolind, S., Traboulsee, A. L., & Tam, R. C. (2019). Deep learning of brain lesion patterns and user-defined clinical and MRI features for predicting conversion to multiple sclerosis from clinically isolated syndrome. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 7(3), 250–259. https://doi.org/10.1080/21681163.2017.1356750

Yu, A. C., Mohajer, B., & Eng, J. (2022). External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiology. Artificial Intelligence, 4(3), e210064. https://doi.org/10.1148/ryai.210064

Yu, J.-P. J., Kansagra, A. P., & Mongan, J. (2014). The radiologist’s workflow environment: evaluation of disruptors and potential implications. Journal of the American College of Radiology: JACR, 11(6), 589–593. https://doi.org/10.1016/j.jacr.2013.12.026

Yusuf, M., Atal, I., Li, J., Smith, P., Ravaud, P., Fergie, M., Callaghan, M., & Selfe, J. (2020). Reporting quality of studies using machine learning models for medical diagnosis: a systematic review. BMJ Open, 10(3), e034568. https://doi.org/10.1136/bmjopen-2019-034568

Yu, Y., Xie, Y., Thamm, T., Gong, E., Ouyang, J., Christensen, S., Marks, M. P., Lansberg, M. G., Albers, G. W., & Zaharchuk, G. (2021). Tissue at Risk and Ischemic Core Estimation Using Deep Learning in Acute Stroke. AJNR. American Journal of Neuroradiology, 42(6), 1030–1037. https://doi.org/10.3174/ajnr.A7081

Yu, Y., Xie, Y., Thamm, T., Gong, E., Ouyang, J., Huang, C., Christensen, S., Marks, M. P., Lansberg, M. G., Albers, G. W., & Zaharchuk, G. (2020). Use of Deep Learning to Predict Final Ischemic Stroke Lesions From Initial Magnetic Resonance Imaging. JAMA Network Open, 3(3), e200772. https://doi.org/10.1001/jamanetworkopen.2020.0772

Zhang, Y., & Yu, H. (2018). Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography. IEEE Transactions on Medical Imaging, 37(6), 1370–1381. https://doi.org/10.1109/TMI.2018.2823083

Zhao, B., Liu, Z., Ding, S., Liu, G., Cao, C., & Wu, H. (2022). Motion artifact correction for MR images based on convolutional neural network. Optoelectronics Letters, 18(1), 54–58. https://doi.org/10.1007/s11801-022-1084-z

Zhao, J., Huang, Y., Song, Y., Xie, D., Hu, M., Qiu, H., & Chu, J. (2020). Diagnostic accuracy and potential covariates for machine learning to identify IDH mutations in glioma patients: evidence from a meta-analysis. European Radiology, 30(8), 4664–4674. https://doi.org/10.1007/s00330-020-06717-9

Zhou, C., Ding, C., Wang, X., Lu, Z., & Tao, D. (2020). One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society. https://doi.org/10.1109/TIP.2020.2973510

Zicari, R. V., Brodersen, J., Brusseau, J., Düdder, B., Eichhorn, T., Ivanov, T., Kararigas, G., Kringen, P., McCullough, M., Möslein, F., Mushtaq, N., Roig, G., Stürtz, N., Tolle, K., Tithi, J. J., van Halem, I., & Westerlund, M. (2021). Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Transactions on Technology and Society, 1–1. https://doi.org/10.1109/TTS.2021.3066209
 

 

Guide to Artificial Intelligence in Radiology

    Artificial intelligence (AI) is playing a growing role in all our lives and has shown promise in addressing some of the greatest current and upcoming societal challenges we face. The healthcare industry, though notoriously complex and resistant to disruption, potentially has a lot to gain from the use of AI. With an established history of leading digital transformation in healthcare and an urgent need for improved efficiency, radiology has been at the forefront of harnessing AI’s potential.

    This book covers how and why AI can address challenges faced by radiology departments, provides an overview of the fundamental concepts related to AI, and describes some of the most promising use cases for AI in radiology. In addition, the major challenges associated with the adoption of AI into routine radiological practice are discussed. The book also covers some crucial points radiology departments should keep in mind when deciding on which AI-based solutions to purchase. Finally, it provides an outlook on what new and evolving aspects of AI in radiology to expect in the near future.

    The healthcare industry has experienced a number of trends over the past few decades that demand a change in the way certain things are done. These trends are particularly salient in radiology, where the diagnostic quality of imaging scans has improved dramatically while scan times have decreased. As a result, the amount and complexity of medical imaging data acquired have increased substantially over the past few decades (Smith-Bindman et al., 2019; Winder et al., 2021) and are expected to continue to increase (Tsao, 2020). This issue is compounded by a widespread global shortage of radiologists (AAMC Report Reinforces Mounting Physician Shortage, 2021, Clinical Radiology UK Workforce Census 2019 Report, 2019). Healthcare workers, including radiologists, face an increasing workload (Bruls & Kwee, 2020; Levin et al., 2017) that contributes to burnout and medical errors (Harry et al., 2021). Because radiology is an essential service provider to virtually all other hospital departments, staff shortages within radiology have significant effects that spread throughout the hospital and to society as a whole (England & Improvement, 2019; Sutherland et al., n.d.).

    With an ageing global population and a rising burden of chronic illnesses, these issues are expected to pose even more of a challenge to the healthcare industry in the future.

    AI-based medical imaging solutions have the potential to ameliorate these challenges for several reasons. They are particularly suited to handling large, complex datasets (Alzubaidi et al., 2021). Moreover, they are well suited to automate some of the tasks traditionally performed by radiologists and radiographers, potentially freeing up time and making workflows within radiology departments more efficient (Allen et al., 2021; Baltruschat et al., 2021; Kalra et al., 2020; O’Neill et al., 2021; van Leeuwen et al., 2021; Wong et al., 2019). AI is also capable of detecting complex patterns in data that humans cannot necessarily find or quantify (Dance, 2021; Korteling et al., 2021; Kühl et al., 2020).

    The term “artificial intelligence” refers to the use of computer systems to solve specific problems in a way that simulates human reasoning. One fundamental characteristic of AI is that, like humans, these systems can tailor their solutions to changing circumstances. Note that, while these systems are meant to mimic on a fundamental level how humans think, their capacity to do so (e.g. in terms of the amount of data they can handle at one time, the nature and amount of patterns they can find in the data, and the speed at which they do so) often exceeds that of humans.

    AI solutions come in the form of computer algorithms, which are pieces of computer code representing instructions to be followed to solve a specific problem. In its most fundamental form, the algorithm takes data as an input, performs some computation on that data, and returns an output.

    An AI algorithm can be explicitly programmed to solve a specific task, analogous to a step-by-step recipe for baking a cake. On the other hand, the algorithm can be programmed to look for patterns within the data in order to solve the problem. These types of algorithms are known as machine learning algorithms. Thus, all machine learning algorithms are AI, but not all AI is machine learning. The patterns in the data that the algorithm can be explicitly programmed to look for or that it can “discover” by itself are known as features. An important characteristic of machine learning is that such algorithms learn from the data itself, and their performance improves the more data they are given.

    One of the most common uses of machine learning is in classification: assigning a piece of data a particular label. For example, a machine learning algorithm might be used to tell if a photo (the input) shows a dog or a cat (the label). The algorithm can learn to do so in a supervised or unsupervised way.

    Supervised learning

    In supervised learning, the machine learning algorithm is given data that has been labelled with the ground truth, in this example, photos of dogs and cats that have been labelled as such. The process then goes through the following phases:

    1. Training phase: The algorithm learns the features associated with dogs and cats from the labelled data (the training data).
    2. Test phase: The algorithm is then given a new set of photos (the test data), which it labels; the performance of the algorithm on that data is then assessed.

    In some cases, there is a phase in between training and test, known as the validation phase. In this phase, the algorithm is given a new set of photos (not included in either the training or test data), its performance is assessed on this data, and the model is tweaked and retrained on the training data. This is repeated until some predefined performance-based criterion is reached, and the algorithm then enters the test phase.
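    As a concrete illustration, the following minimal Python sketch walks through these phases using scikit-learn on synthetic data; the synthetic features stand in for labelled images, and the model choice and split sizes are arbitrary assumptions made purely for illustration.

```python
# Minimal sketch of a supervised train/validation/test workflow with
# scikit-learn on synthetic data (a stand-in for labelled image features).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic "features" and binary labels (e.g. 0 = cat, 1 = dog)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Split into training (60 %), validation (20 %) and test (20 %) sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Training phase: the model learns feature patterns from the labelled data
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validation phase: assess performance, tweak the model, retrain as needed
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Test phase: previously unseen data gives the final reported performance
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```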

    Unsupervised learning

    In unsupervised learning, the algorithm identifies features within the input data that allow it to assign classes to the individual data points without being told explicitly what those classes are or should be. Such algorithms can identify patterns or group data points together without human intervention and include clustering and dimensionality reduction algorithms. Not all machine learning algorithms perform classification. Some are used to predict a continuous metric (e.g. the temperature in four weeks’ time) instead of a discrete label (e.g. cats vs dogs). These are known as regression algorithms.
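    The following short sketch, again on invented data, contrasts an unsupervised clustering algorithm with a regression algorithm; the number of clusters and the linear relationship are assumptions made only for illustration.

```python
# Minimal sketch contrasting unsupervised clustering with regression,
# using synthetic data rather than real imaging features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Unsupervised learning: group unlabelled data points into two clusters
points = rng.normal(loc=[[0, 0]] * 50 + [[5, 5]] * 50, scale=1.0)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# Regression: predict a continuous value (e.g. a temperature) from inputs
x = rng.uniform(0, 10, size=(100, 1))
y = 2.5 * x.ravel() + rng.normal(scale=0.5, size=100)
reg = LinearRegression().fit(x, y)

print("first cluster labels:", clusters[:10])
print("predicted value at x=4:", reg.predict([[4.0]])[0])
```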

    Neural networks and deep learning

    A neural network is made up of an input layer and an output layer, which are themselves composed of nodes. In simple neural networks, features that are manually derived from a dataset are fed into the input layer, which performs some computations, the results of which are relayed to the output layer. In deep learning, multiple “hidden” layers exist between the input and output layers. Each node of the hidden layers performs calculations using certain weights and relays the output to the next hidden layer until the output layer is reached.

    In the beginning, random values are assigned to the weights and the accuracy of the algorithm is calculated. The values of the weights are then iteratively adjusted until a set of weight values that maximize accuracy is found. This iterative adjustment of the weight values is usually done by moving backwards from the output layer to the input layer, a technique called backpropagation. This entire process is done on the training data.
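    A toy version of this process, written in plain Python/NumPy, is sketched below; the network size, learning rate and synthetic task are arbitrary and serve only to show a forward pass, backpropagation, and the iterative adjustment of the weights.

```python
# Toy fully-connected network with one hidden layer, trained by
# backpropagation in plain NumPy; illustrative only, not a real imaging model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # input features
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]  # non-linear toy target

# Randomly initialised weights for the hidden and output layers
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    # Forward pass: input layer -> hidden layer -> output layer
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backpropagation: push the prediction error backwards through the layers
    d_out = (p - y) / len(X)               # gradient of the (cross-entropy) loss at the output
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_hid = (d_out @ W2.T) * (1 - h ** 2)  # gradient at the hidden layer
    dW1, db1 = X.T @ d_hid, d_hid.sum(0)

    # Iteratively adjust the weights to reduce the error (gradient descent)
    W1 -= learning_rate * dW1; b1 -= learning_rate * db1
    W2 -= learning_rate * dW2; b2 -= learning_rate * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```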

    Performance evaluation

    Understanding how the performance of AI algorithms is assessed is key to interpreting the AI literature. Several performance metrics exist for assessing how well a model performs certain tasks. No single metric is perfect, so a combination of several metrics provides a fuller picture of model performance.

    In regression, the most commonly used metrics include:

    • Mean absolute error (MAE): the average of the absolute differences between the predicted values and the ground truth.
    • Root mean square error (RMSE): the differences between the predicted values and the ground truth are squared and then averaged over the sample. Then the square root of the average is taken. Unlike the MAE, the RMSE thus gives higher weight to larger differences.
    • R2: the proportion of the total variance in the ground truth explained by the variance in the predicted values. It ranges from 0 to 1.
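    These regression metrics can be computed directly, for example with scikit-learn, as in the following short sketch on a toy set of predictions (the values are invented).

```python
# Computing the regression metrics described above for a toy set of predictions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

ground_truth = np.array([3.0, 5.0, 2.5, 7.0])
predictions  = np.array([2.5, 5.0, 4.0, 8.0])

mae  = mean_absolute_error(ground_truth, predictions)
rmse = np.sqrt(mean_squared_error(ground_truth, predictions))  # square root of the mean squared error
r2   = r2_score(ground_truth, predictions)

print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R2={r2:.2f}")
```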

    The following metrics are commonly used in classification tasks:

    • Accuracy: the proportion of all predictions that are correct. It ranges from 0 to 1.
    • Sensitivity: also known as the true positive rate (TPR) or recall, this is the proportion of actual positives that are correctly identified as such. It ranges from 0 to 1.
    • Specificity: also known as the true negative rate (TNR), this is the proportion of actual negatives that are correctly identified as such. It ranges from 0 to 1.
    • Precision: also known as the positive predictive value (PPV), this is the proportion of positive predictions that are truly positive. It ranges from 0 to 1.

    An inherent trade-off exists between sensitivity and specificity. The relative importance of each, as well as their interpretation, highly depends on the specific research question and classification task.

    Importantly, although classification models are meant to reach a binary conclusion, they are inherently probability-based. This means that these models will output a probability that a data point belongs to one class or another. In order to reach a conclusion on the most likely class, a threshold is used. Metrics such as accuracy, sensitivity, specificity and precision refer to the performance of the algorithm based on a certain threshold. The area under the receiver operating characteristic curve (AUC) is a threshold-independent performance metric. The AUC can be interpreted as the probability that a random positive example is ranked higher by the algorithm than a random negative example.
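    The following sketch illustrates, on an invented set of predicted probabilities, how a threshold converts probabilities into class labels, how the threshold-dependent metrics are then computed, and how the AUC is computed from the probabilities themselves.

```python
# Threshold-dependent classification metrics and the threshold-independent AUC,
# computed with scikit-learn on a toy set of predicted probabilities.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, precision_score, roc_auc_score

y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_proba = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.6, 0.55])

# Applying a threshold turns probabilities into binary class predictions
y_pred = (y_proba >= 0.5).astype(int)

sensitivity = recall_score(y_true, y_pred)               # true positive rate
specificity = recall_score(y_true, y_pred, pos_label=0)  # true negative rate
print("accuracy:   ", accuracy_score(y_true, y_pred))
print("sensitivity:", sensitivity)
print("specificity:", specificity)
print("precision:  ", precision_score(y_true, y_pred))

# The AUC is computed from the probabilities themselves, without any threshold
print("AUC:        ", roc_auc_score(y_true, y_proba))
```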

    In image segmentation tasks, which are a type of classification task, the following metrics are commonly used:

    • Dice similarity coefficient: a measure of overlap between two sets (e.g. two images) that is calculated as two times the number of elements common to the sets divided by the sum of the number of elements in each set. It ranges from 0 (no overlap) to 1 (perfect overlap).
    • Hausdorff distance: a measure of how far apart two sets (e.g. two images) within a space are from each other. It is essentially the largest distance from any point in one set to the closest point in the other set.
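    Both segmentation metrics are straightforward to compute, for example with NumPy and SciPy, as in the following sketch on two toy binary masks (the mask shapes are invented).

```python
# Dice similarity coefficient and Hausdorff distance for two toy binary
# segmentation masks, using NumPy and SciPy.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

mask_a = np.zeros((64, 64), dtype=bool)
mask_b = np.zeros((64, 64), dtype=bool)
mask_a[20:40, 20:40] = True   # "ground truth" segmentation
mask_b[22:42, 22:42] = True   # "predicted" segmentation

# Dice: 2 x overlap / (size of A + size of B)
dice = 2 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())

# Hausdorff: the symmetric version takes the maximum of both directed distances
pts_a = np.argwhere(mask_a)
pts_b = np.argwhere(mask_b)
hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                directed_hausdorff(pts_b, pts_a)[0])

print(f"Dice={dice:.3f}  Hausdorff={hausdorff:.2f} pixels")
```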

    Internal and external validity

    Internally valid models perform well in their task on the data being used to train and validate them. The degree to which they are internally valid is assessed using the performance metrics outlined above and depends on the characteristics of the model itself and the quality of the data that the model was trained and validated on.

    Externally valid models perform well in their tasks on new data (Ramspek et al., 2021). The better the model performs on data that differs from the data the models were trained and validated on, the higher the external validity. In practice, this often requires the performance of the models to be tested on data from hospitals or geographical areas that were not part of the model’s training and validation datasets.

    Guidelines for evaluating AI research

    Several guidelines have been developed to assess the evidence behind AI-based interventions in healthcare (X. Liu et al., 2020; Mongan et al., 2020; Shelmerdine et al., 2021; Weikert et al., 2021). These provide a template for those doing AI research in healthcare and ensure that relevant information is reported transparently and comprehensively, but can also be used by other stakeholders to assess the quality of published research. This helps ensure that AI-based solutions with substantial potential or actual limitations, particularly those caused by poor reporting (Bozkurt et al., 2020; D. W. Kim et al., 2019; X. Liu et al., 2019; Nagendran et al., 2020; Yusuf et al., 2020), are not prematurely adopted (CONSORT-AI and SPIRIT-AI Steering Group, 2019). Guidelines have also been proposed for evaluating the trustworthiness of AI-based solutions in terms of transparency, confidentiality, security, and accountability (Buruk et al., 2020; Lekadir et al., 2021; Zicari et al., 2021).

    Over the past few years, AI has shown great potential in addressing a broad range of tasks within a medical imaging department, including many that happen before the patient is scanned. Implementations of AI to improve the efficiency of radiology workflows prior to patient scanning are sometimes referred to as “upstream AI” (Kapoor et al., 2020; M. L. Richardson et al., 2021).

    Scheduling

    One promising upstream AI application is predicting which patients are likely to miss their scan appointments. Missed appointments are associated with significantly increased workload and costs (Dantas et al., 2018). Using a Gradient Boosting approach, Nelson et al. predicted missed hospital magnetic resonance imaging (MRI) appointments in the United Kingdom’s National Health Service (NHS) with high accuracy (Nelson et al., 2019). Their simulations also suggested that acting on the predictions of this model by targeting patients who are likely to miss their appointments would potentially yield a net benefit of several pounds per appointment across a range of model thresholds and missed appointment rates (Nelson et al., 2019). Similar results were recently found in a study of a single hospital in Singapore: in the six months following deployment of the predictive tool, the no-show rate fell significantly from 19.3 % to 15.9 %, which translated into a potential economic benefit of $180,000 (Chong et al., 2020).
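    In the spirit of these studies, a no-show predictor can be sketched as a gradient-boosting classifier over simple appointment features; the feature names and records below are hypothetical and are not those used by Nelson et al. or Chong et al.

```python
# Illustrative no-show prediction with gradient boosting on hypothetical data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical appointment records (real models would use far more data)
data = pd.DataFrame({
    "days_until_appointment": [3, 30, 7, 60, 1, 14, 45, 2],
    "previous_no_shows":      [0, 2, 0, 3, 0, 1, 2, 0],
    "age":                    [65, 34, 50, 28, 72, 41, 23, 58],
    "reminder_sent":          [1, 0, 1, 0, 1, 1, 0, 1],
    "missed":                 [0, 1, 0, 1, 0, 0, 1, 0],   # label: missed appointment
})

X = data.drop(columns="missed")
y = data["missed"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]   # probability of a missed appointment
print("AUC:", roc_auc_score(y_test, risk))
```

    Patients with the highest predicted risk could then be targeted with reminders or double-booked slots, which is essentially the intervention simulated in the cited studies.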

    Scheduling scans in a radiology department is a challenging endeavour because, although it is largely an administrative task, it depends heavily on medical information. The task of assigning patients to specific appointments therefore often requires the input of someone with domain knowledge, meaning that either the person making the appointments must be a radiologist or radiology technician, or such specialists will have to provide input regularly. In either scenario, the process is somewhat inefficient and can potentially be streamlined using AI-based algorithms that check scan indications and contraindications and provide the people scheduling the scans with information about scan urgency (Letourneau-Guillon et al., 2020).

    Protocolling

    Depending on hospital or clinic policy, the decision on what exact scan protocol a patient receives is usually made based on the information on the referring physician’s scan request and the judgement of the radiologist. This is often supplemented by direct communication between the referring physician and radiologist and the radiologist’s review of the patient’s medical information. This process improves patient care (Boland et al., 2014) but can be time-consuming and inefficient, particularly with modalities like MRI, where a large number of protocol permutations exist. In one study, protocolling alone accounted for about 6 % of the radiologist’s working time (Schemmel et al., 2016). Radiologists are also often interrupted by tasks such as protocolling when interpreting images, despite the fact that the latter is considered a radiologist’s primary responsibility (Balint et al., 2014; J.-P. J. Yu et al., 2014).

    Interpretation of the narrative text of the referring physician’s scan request has been attempted using natural language classifiers, the same technology used in chatbots and virtual assistants. Natural language classifiers based on deep learning have shown promise in assigning patients to either a contrast-enhanced or non-enhanced MRI protocol for musculoskeletal MRI, with an accuracy of 83 % (Trivedi et al., 2018) and 94 % (Y. H. Lee, 2018). Similar algorithms have shown an accuracy of 95 % for predicting the appropriate brain MRI protocol using a combination of up to 41 different MRI sequences (Brown & Marotta, 2018). Across a wide range of body regions, a deep-learning-based natural language classifier decided based on the narrative text of the scan requests whether to automatically assign a specific computed tomography (CT) or MRI protocol (which it did with 95 % accuracy) or, in more difficult cases, recommend a list of three most appropriate protocols to the radiologist (which it did with 92 % accuracy) (Kalra et al., 2020).
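    A minimal sketch of such a natural language classifier is shown below; the example requests, protocol labels, and the simple TF-IDF/logistic-regression pipeline are assumptions for illustration only, whereas the published solutions cited above used more sophisticated natural-language and deep learning models.

```python
# Sketch of a natural-language classifier that suggests an MRI protocol from
# the free text of a scan request; requests and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = [
    "knee pain after fall, rule out meniscal tear",
    "suspected osteomyelitis of the foot, diabetic patient",
    "follow-up of known ACL reconstruction",
    "soft tissue mass of the thigh, rule out sarcoma",
]
protocols = ["non-contrast", "contrast", "non-contrast", "contrast"]

# Turn the narrative text into features and fit a simple classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(requests, protocols)

new_request = ["possible septic arthritis of the shoulder"]
print("suggested protocol:", clf.predict(new_request)[0])
```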

    AI has also been used to decide whether already protocolled scans need to be extended, a decision which has to be made in real-time while the patient is inside the scanner. One such example is in prostate MRI, where a decision on whether to administer a contrast agent is often made after the non-contrast sequences. Hötker et al. found that a convolutional neural network (CNN) assigned 78 % of patients to the appropriate prostate MRI protocol (Hötker et al., 2021). The sensitivity of the CNN for the need for contrast was 94.4 % with a specificity of 68.8 %, and only 2 % of patients in their study would have had to be called back for a contrast-enhanced scan (Hötker et al., 2021).

    Image quality improvement and monitoring

    Many AI-based solutions that work in the background of radiology workflows to improve image quality have recently been established. These include solutions for monitoring image quality, reducing image artefacts, improving spatial resolution, and speeding up scans.

    Such solutions are entering the radiology mainstream, particularly for computed tomography, which for decades used established but artefact-prone methods for reconstructing interpretable images from the raw sensor data (Deák et al., 2013; Singh et al., 2010).

    These are gradually being replaced by deep-learning-based reconstruction methods, which improve image quality while maintaining low radiation doses (Akagi et al., 2019; H. Chen et al., 2017; Choe et al., 2019; Shan et al., 2019). This reconstruction is performed on dedicated high-performance hardware, either on the CT scanner itself or in the cloud. The balance between radiation dose and image quality can be adjusted on a protocol-specific basis to tailor scans to individual patients and clinical scenarios (McLeavy et al., 2021; Willemink & Noël, 2019). Such approaches have found particular use when scanning children, pregnant women, and obese patients, as well as in CT scans of the urinary tract and heart (McLeavy et al., 2021).

    AI-based solutions have also been used to speed up scans while maintaining diagnostic quality. Scan time reduction not only improves overall efficiency but also contributes to a better patient experience and better compliance with imaging examinations. A multi-centre study of spine MRI showed that a deep-learning-based image reconstruction algorithm that enhanced images using filtering and detail-preserving noise reduction reduced scan times by 40 % (Bash, Johnson, et al., 2021). For T1-weighted MRI scans of the brain, a similar algorithm that improves image sharpness and reduces image noise reduced scan times by 60 % while maintaining the accuracy of brain region volumetry compared to standard scans (Bash, Wang, et al., 2021).

    In routine radiological practice, images often contain artefacts that reduce their interpretability. These artefacts are the result of characteristics of the specific imaging modality or protocol used or factors intrinsic to the patient being scanned, such as the presence of foreign bodies or the patient moving during the scan. Particularly with MRI, imaging protocols that demand fast scanning often introduce certain artefacts to the reconstructed image. In one study, a deep-learning-based algorithm reduced banding artefacts associated with balanced steady-state free precession MRI sequences of the brain and knee (K. H. Kim & Park, 2017). For real-time imaging of the heart using MRI, another study found that the aliasing artefacts introduced by the data undersampling were reduced by using a deep-learning-based approach (Hauptmann et al., 2019). The presence of metallic foreign bodies such as dental, orthopaedic or vascular implants is a common patient-related factor causing image artefacts in both CT and MRI (Boas & Fleischmann, 2012; Hargreaves et al., 2011). Although not yet well established, several deep-learning-based approaches for reducing these artefacts have been investigated (Ghani & Clem Karl, 2019; Puvanasunthararajah et al., 2021; Zhang & Yu, 2018). Similar approaches are being tested for reducing motion-related artefacts in MRI (Tamada et al., 2020; B. Zhao et al., 2022).
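    Conceptually, many of these artefact-reduction and denoising methods are trained on pairs of degraded and clean images. The following PyTorch sketch shows a tiny residual convolutional network trained on synthetic noisy images; real solutions are far larger and are trained on paired clinical data.

```python
# Minimal residual CNN for image denoising / artefact reduction in PyTorch,
# trained here on synthetic noisy images for illustration only.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the artefact/noise component and subtract it (residual learning)
        return x - self.net(x)

clean = torch.rand(8, 1, 64, 64)              # stand-in for artefact-free images
noisy = clean + 0.1 * torch.randn_like(clean)  # synthetic degraded versions

model = DenoiseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)   # compare the restored image to the clean target
    loss.backward()
    optimizer.step()

print("final reconstruction MSE:", loss.item())
```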

    AI-based solutions for monitoring image quality potentially reduce the need to call patients back to repeat imaging examinations, which is a common problem (Schreiber-Zinaman & Rosenkrantz, 2017). A deep-learning-based algorithm that identifies the radiographic view acquired and extracts quality-related metrics from ankle radiographs was able to predict image quality with about 94 % accuracy (Mairhöfer et al., 2021). Another deep-learning-based approach was capable of predicting nondiagnostic liver MRI scans with a negative predictive value of between 86 % and 94 % (Esses et al., 2018). This real-time automated quality control potentially allows radiology technicians to rerun scans or run additional scans with greater diagnostic value.

    Scan reading prioritization

    With staff shortages and increasing scan numbers, radiologists face long reading lists. To optimize efficiency and patient care, AI-based solutions have been suggested as a way to prioritize which scans radiologists read and report first, usually by screening acquired images for findings that require urgent intervention (O’Connor & Bhalla, 2021). This has been most extensively studied in neuroradiology, where moving CT scans that were found to have intracranial haemorrhage by an AI-based tool to the top of the reading list reduced the time it took radiologists to view the scans by several minutes (O’Neill et al., 2021). Another study found that the time-to-diagnosis (which includes the time from image acquisition to viewing by the radiologist and the time to read and report the scans) was reduced from 512 to 19 minutes in an outpatient setting when such worklist prioritization was used (Arbabshirani et al., 2018). A simulation study using AI-based worklist prioritization based on identifying urgent findings on chest radiographs (such as pneumothorax, pleural effusions, and foreign bodies) also found a substantial reduction in the time it took to view and report the scans compared to standard workflow prioritization (Baltruschat et al., 2021).
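    The workflow change itself is simple to express in code: each incoming study receives an AI-derived urgency score and the reading list is re-sorted accordingly. The study identifiers, scores, and tie-breaking rule below are invented for illustration.

```python
# Toy worklist prioritisation: studies flagged by an AI tool as containing
# urgent findings are moved to the top of the reading list.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    ai_urgency_score: float          # e.g. probability of intracranial haemorrhage
    minutes_since_acquisition: int

worklist = [
    Study("CT-1021", 0.02, 45),
    Study("CT-1022", 0.91, 12),
    Study("CT-1023", 0.10, 80),
    Study("CT-1024", 0.76, 5),
]

# Read high-urgency scans first; break ties by how long the scan has waited
prioritised = sorted(worklist,
                     key=lambda s: (-s.ai_urgency_score, -s.minutes_since_acquisition))

for study in prioritised:
    print(study.accession, study.ai_urgency_score)
```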

    Image interpretation

    Currently, the majority of commercially available AI-based solutions in medical imaging focus on some aspect of analyzing and interpreting images (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021). This includes segmenting parts of the image (for surgical or radiation therapy targeting, for example), bringing suspicious areas to radiologists’ attention, extracting imaging biomarkers (radiomics), comparing images across time, and reaching specific imaging diagnoses.

    Neurology

    • 29–38 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Most commercially available AI-based solutions targeted at neuroimaging data aim to detect and characterize ischemic stroke, intracranial haemorrhage, dementia, and multiple sclerosis (Olthof et al., 2020). Several studies have shown excellent accuracy of AI-based methods for the detection and classification of intraparenchymal, subarachnoid, and subdural haemorrhage on head CT (Flanders et al., 2020; Ker et al., 2019; Kuo et al., 2019). Subsequent studies showed that, compared to radiologists, some AI-based solutions have substantially lower false positive and negative rates (Ginat, 2020; Rao et al., 2021). In ischemic stroke, AI-based solutions have largely focused on the quantification of the infarct core (Goebel et al., 2018; Maegerlein et al., 2019), the detection of large vessel occlusion (Matsoukas et al., 2022; Morey et al., 2021; Murray et al., 2020; Shlobin et al., 2022), and the prediction of stroke outcomes (Bacchi et al., 2020; Nielsen et al., 2018; Y. Yu et al., 2020, 2021).

    In multiple sclerosis, AI has been used to identify and segment lesions (Nair et al., 2020; S.-H. Wang et al., 2018), which can be particularly helpful for the longitudinal follow-up of patients. It has also been used to extract imaging features associated with progressive disease and conversion from clinically isolated syndrome to definite multiple sclerosis (Narayana et al., 2020; Yoo et al., 2019). Other applications of AI in neuroradiology include the detection of intracranial aneurysms (Faron et al., 2020; Nakao et al., 2018; Ueda et al., 2019) and the segmentation of brain tumours (Kao et al., 2019; Mlynarski et al., 2019; Zhou et al., 2020), as well as the prediction of brain tumour genetic markers from imaging data (Choi et al., 2019; J. Zhao et al., 2020).

    Chest

    • 24–31 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    When interpreting chest radiographs, radiologists detected substantially more critical and urgent findings when aided by a deep-learning-based algorithm, and did so much faster than without the algorithm (Nam et al., 2021). Deep-learning-based image interpretation algorithms have also been found to improve radiology residents’ sensitivity for detecting urgent findings on chest radiographs from 66 % to 73 % (E. J. Hwang, Nam, et al., 2019). Another study which focused on a broader range of findings on chest radiographs also found that radiologists aided by a deep-learning-based algorithm had higher diagnostic accuracy than radiologists who read the radiographs without assistance (Seah et al., 2021). The uses of AI in chest radiology also extend to cross-sectional imaging like CT. A deep learning algorithm was found to detect pulmonary embolism on CT scans with high accuracy (AUC = 0.85) (Huang, Kothari, et al., 2020). Moreover, a deep learning algorithm was 90 % accurate in detecting aortic dissection on non-contrast-enhanced CT scans, similar to the performance of radiologists (Hata et al., 2021).

    Outside the emergency setting, AI-based solutions have been widely tested and implemented for tuberculosis screening on chest radiographs (E. J. Hwang, Park, et al., 2019; S. Hwang et al., 2016; Khan et al., 2020; Qin et al., 2019; WHO Operational Handbook on Tuberculosis Module 2: Screening – Systematic Screening for Tuberculosis Disease, n.d.). In addition, they have been useful for lung cancer screening both in terms of detecting lung nodules on CT (Setio et al., 2017) and chest radiographs (Li et al., 2020) and by classifying whether nodules are likely to be malignant or benign (Ardila et al., 2019; Bonavita et al., 2020; Ciompi et al., 2017; B. Wu et al., 2018). AI-based solutions also show great promise for the diagnosis of pneumonia, chronic obstructive pulmonary disease, and interstitial lung disease (F. Liu et al., 2021).

    Breast

    • 11 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    So far, many of the AI-based algorithms targeting breast imaging aim to reduce the workload of radiologists reading mammograms. Ways to do this have included using AI-based algorithms to triage out negative mammograms, which in one study was associated with a reduction in radiologists’ workload by almost one-fifth (Yala et al., 2019). Other studies that have replaced second readers of mammograms with AI-based algorithms have shown that this leads to fewer false positives and false negatives as well as reduces the workload of the second reader by 88 % (McKinney et al., 2020).

    AI-based solutions for mammography have also been found to increase the diagnostic accuracy of radiologists (McKinney et al., 2020; Rodríguez-Ruiz et al., 2019; Watanabe et al., 2019), and some have been found to be highly accurate in independently detecting and classifying breast lesions (Agnes et al., 2019; Al-Antari et al., 2020; Rodriguez-Ruiz et al., 2019). Despite this, a recent systematic review of 36 AI-based algorithms found that these studies were of poor methodological quality and that all algorithms were less accurate than the consensus of two or more radiologists (Freeman et al., 2021). AI-based algorithms have nonetheless shown potential for extracting cancer-predictive features from mammograms beyond mammographic breast density (Arefan et al., 2020; Dembrower et al., 2020; Hinton et al., 2019). Beyond mammography, AI-based solutions have been developed for detecting and classifying breast lesions on ultrasound (Akkus et al., 2019; Park et al., 2019; G.-G. Wu et al., 2019) and MRI (Herent et al., 2019).

    Cardiac

    • 11 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Cardiac radiology has always been particularly challenging because of the difficulties inherent in acquiring images of a constantly moving organ. Because of this, it has benefited immensely from advances in imaging technology and seems set to benefit greatly from AI as well (Sermesant et al., 2021). Most of the AI-based applications of the cardiovascular system use MRI, CT or ultrasound data (Weikert et al., 2021). Prominent examples include the automated calculation of ejection fraction on echocardiography, quantification of coronary artery calcification on cardiac CT, determination of right ventricular volume on CT pulmonary angiography, and determination of heart chamber size and thickness on cardiac MRI (Medical AI Evaluation, n.d., The Medical Futurist, n.d.). AI-based solutions for the prediction of patients likely to respond favourably to cardiac interventions, such as cardiac resynchronization therapy, based on imaging and clinical parameters have also shown great promise (Cikes et al., 2019; Hu et al., 2019). Changes in cardiac MRI not readily visible to human readers but potentially useful for differentiating different types of cardiomyopathies can also be detected using AI through texture analysis (Neisius et al., 2019; J. Wang et al., 2020) and other radiomic approaches (Mancio et al., 2022).

    Musculoskeletal

    • 7–11 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Promising applications of AI in the assessment of muscles, bones and joints include applications where human readers generally show poor between- and within-rater reliability, such as the determination of skeletal age based on bone radiographs (Halabi et al., 2019; Thodberg et al., 2009) and screening for osteoporosis on radiographs (Kathirvelu et al., 2019; J.-S. Lee et al., 2019) and CT (Pan et al., 2020). AI-based solutions have also shown promise for detecting fractures on radiographs and CT (Lindsey et al., 2018; Olczak et al., 2017; Urakawa et al., 2019). One systematic review of AI-based solutions for fracture detection in several different body parts showed AUCs ranging from 0.94 to 1.00 and accuracies of 77 % to 98 % (Langerhuizen et al., 2019). AI-based solutions have also achieved accuracies similar to radiologists for classification of the severity of degenerative changes of the spine (Jamaludin et al., 2017) and extremity joints (F. Liu et al., 2018; Thomas et al., 2020). AI-based solutions have also been developed to determine the origin of skeletal metastases (Lang et al., 2019) and the classification of primary bone tumours (Do et al., 2017).

    Abdomen and pelvis

    • 4 % of commercially available AI-based applications in radiology (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021).

    Much of the efforts in using AI in abdominal imaging have thus far concentrated on the automated segmentation of organs such as the liver (Dou et al., 2017), spleen (Moon et al., 2019), pancreas (Oktay et al., 2018), and kidneys (Sharma et al., 2017). In addition, a systematic review of 11 studies using deep learning for the detection of malignant liver masses showed accuracies of up to 97 % and AUCs of up to 0.92 (Azer, 2019).

    Other applications of AI in abdominal radiology include the detection of liver fibrosis (He et al., 2019; Yasaka et al., 2018), fatty liver disease, hepatic iron content, the detection of free abdominal gas on CT, and automated volumetry and segmentation of the prostate (AI for Radiology, n.d.).

    Despite the great potential of AI in medical imaging, it has yet to find widespread implementation and impact in routine clinical practice. This research-to-clinic translation is being hindered by several complex and interrelated issues that directly or indirectly lower the likelihood of AI-based solutions being adopted. One major way they do so is by creating a lack of trust in AI-based solutions by key stakeholders such as regulators, healthcare professionals and patients (Cadario et al., 2021; Esmaeilzadeh, 2020; J. P. Richardson et al., 2021; Tucci et al., 2022).

    Generalizability

    One major challenge is to develop AI-based solutions that continue to perform well in new, real-world scenarios. In a large systematic review, almost half of the studied AI-based medical imaging algorithms reported a greater than 0.05 decrease in the AUC when tested on new data (A. C. Yu et al., 2022). This lack of generalizability can lead to adverse effects on how well the model performs in a real-world scenario.

    If a solution performs poorly when tested on a dataset with a similar or identical distribution to the training dataset, it is said to lack narrow generalizability and is often a consequence of overfitting (Eche et al., 2021). Potential solutions for overfitting are using larger training datasets and reducing the model’s complexity. If a solution performs poorly when tested on a dataset with a different distribution to the training dataset (e.g. a different distribution of patient ethnicities), it is said to lack broad generalizability (Eche et al., 2021). Solutions to poor broad generalizability include stress-testing the model on datasets with different distributions from the training dataset (Eche et al., 2021).
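    One way to picture poor broad generalizability is a model that learns a “shortcut” feature that is only predictive at the development site. The following sketch, run entirely on synthetic data, shows how the AUC can drop when the same model is stress-tested on data from a site where that shortcut no longer carries information; the cohorts, feature names and effect sizes are invented for illustration.

```python
# Synthetic stress test for broad generalizability: a "shortcut" feature
# (e.g. a scanner-specific artefact) predicts the label only at the
# development site, so performance drops on external data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shortcut_is_predictive):
    signal = rng.normal(size=n)                               # genuine finding
    y = (signal + rng.normal(scale=1.0, size=n) > 0).astype(int)
    if shortcut_is_predictive:
        shortcut = y + rng.normal(scale=0.3, size=n)          # leaks the label
    else:
        shortcut = rng.normal(size=n)                         # pure noise elsewhere
    return np.column_stack([signal, shortcut]), y

X_train, y_train = make_cohort(2000, shortcut_is_predictive=True)     # development site
X_internal, y_internal = make_cohort(500, shortcut_is_predictive=True)
X_external, y_external = make_cohort(500, shortcut_is_predictive=False)

model = LogisticRegression().fit(X_train, y_train)
print("internal AUC:", roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1]))
print("external AUC:", roc_auc_score(y_external, model.predict_proba(X_external)[:, 1]))
```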

    AI solutions are often developed in high-resource environments such as large technology companies and academic medical centres in wealthy countries. It is likely that findings and performance in these high-resource contexts will fail to generalize to lower-resource contexts such as smaller hospitals, rural areas or poorer countries (Price & Nicholson, 2019), which complicates the issue further.

    Risk of bias

    Biases can arise in AI-based solutions due to data or human factors. The former occurs when the data used to train the AI solution does not adequately represent the target population. Datasets can be unrepresentative when they are too small or have been collected in a way that misrepresents a certain population category. AI solutions trained on unrepresentative data perpetuate biases and perform poorly in the population categories underrepresented or misrepresented in the training data. The presence of such biases has been empirically shown in many AI-based medical imaging studies (Larrazabal et al., 2020; Seyyed-Kalantari et al., 2021).

    AI-based solutions are prone to several subjective and sometimes implicitly or explicitly prejudiced decisions during their development by humans. These human factors include how the training data is selected, how it is labelled, and how the decision is made to focus on the specific problem the AI-based solution intends to solve (Norori et al., 2021). Some recommendations and tools are available to help minimize the risk of bias in AI research (AIF360: A Comprehensive Set of Fairness Metrics for Datasets and Machine Learning Models, Explanations for These Metrics, and Algorithms to Mitigate Bias in Datasets and Models, n.d., IBM Watson Studio - Model Risk Management, n.d.; Silberg & Manyika, 2019).

    Data quantity, quality and variety

    Problems such as bias and lack of generalizability can be mitigated by ensuring that training data is of sufficient quantity, quality and variety. However, this is difficult to do because patients are often reluctant to share their data for commercial purposes (Aggarwal, Farag, et al., 2021; Ghafur et al., 2020; Trinidad et al., 2020), hospitals and clinics are usually not equipped to make this data available in a useable and secure manner, and organizing and labelling the data is time-consuming and expensive.

    Many datasets can be used for a number of different purposes, and sharing data between companies can help make the process of data collection and organization more efficient, as well as increase the amount of data available for each application. However, developers are often reluctant to share data with each other, or even reveal the exact source of their data, to stay competitive.

    Data protection and privacy

    The development and implementation of AI-based solutions require that patients are explicitly informed about, and give their consent to, the use of their data for a particular purpose and by certain people. This data also has to be adequately protected from data breaches and misuse. Failure to ensure this greatly undermines the public’s trust in AI-based solutions and hinders their adoption. While regulations governing health data privacy state that the collection of fully anonymized data does not require explicit patient consent (General Data Protection Regulation (GDPR) – Official Legal Text, 2016; Office for Civil Rights (OCR), 2012) and in theory protects from the data being misused, whether or not imaging data can be fully anonymized is controversial (Lotan et al., 2020; Murdoch, 2021). Whether consent can be truly informed considering the complexity of the data being acquired, and the resulting myriad of potential future uses of the data, is also disputed (Vayena & Blasimme, 2017).

    IT infrastructure

    Among hospital departments, radiology has always been at the forefront of digitalization. AI-based solutions that focus on image processing and interpretation are likely to find the prerequisite infrastructure in most radiology departments, for example for linking imaging equipment to computers for analysis and for archiving images and other outputs. However, most radiology departments are likely to require significant infrastructure upgrades for other applications of AI, particularly those requiring the integration of information from multiple sources and having complex outputs. Moreover, it is important to keep in mind that the distribution of necessary infrastructure is highly unequal across and within countries (Health Ethics & Governance, 2021).

    In terms of computing power, radiology departments will either have to invest resources into the hardware and personnel necessary to run these AI-based solutions or opt for cloud-based solutions. The former comes with an extra cost but allows data processing within the confines of the hospital or clinic’s local network. Cloud-based solutions for computing (known as “infrastructure as a service” or “IaaS”) are often considered the less secure and less trustworthy option, but this depends on a number of factors and is thus not always true (Baccianella & Gough, n.d.). Guidelines on what to consider when procuring cloud-based solutions in healthcare are available (Cloud Security for Healthcare Services, 2021).

    Lack of standardization, interoperability, and integrability

    The problem of infrastructure becomes even more complicated when considering how fragmented the AI medical imaging market currently is (Alexander et al., 2020). It is therefore likely that in the near future a single department will have several dozen AI-based solutions from different vendors running simultaneously. Having a separate self-contained infrastructure (e.g. a workstation or server) for each of these would be incredibly complicated and difficult to manage. Suggested solutions for this have included AI solution “marketplaces”, similar to app stores (Advanced AI Solutions for Radiology, n.d., Curated Marketplace, 2018, Imaging AI Marketplace - Overview, n.d., Sectra Amplifier Marketplace, 2021, The Nuance AI Marketplace for Diagnostic Imaging, n.d.), and development of an overarching vendor-neutral infrastructure (Leiner et al., 2021). The successful implementation of such solutions requires close partnerships between AI solution developers, imaging vendors and information technology companies.

    Interpretability

    It is often impossible to understand exactly how AI-based solutions come to their conclusions, particularly with complex approaches like deep learning. This reduces how transparent the decision-making process for procuring and approving these solutions can be, makes the identification of biases difficult, and makes it harder for clinicians to explain the outputs of these solutions to their patients and to determine whether a solution is working properly or has malfunctioned (Char et al., 2018; Reddy et al., 2020; Vayena et al., 2018; Whittlestone et al., 2019). Some have suggested that techniques that help humans understand how AI-based algorithms made certain decisions or predictions (“interpretable” or “explainable” AI) might help mitigate these challenges. However, others have argued that currently available techniques are unsuitable for understanding individual decisions of an algorithm and have warned against relying on them for ensuring that algorithms work in a safe and reliable way (Ghassemi et al., 2021).

    Liability

    In healthcare systems, a framework of accountability ensures that healthcare workers and medical institutions can be held responsible for adverse effects resulting from their actions. The question of who should be held accountable for the failures of an AI-based solution is complicated. For pharmaceuticals, for example, the accountability for inherent failures in the product or its use often lies with either the manufacturer or the prescriber. One key difference is that AI-based systems are continuously evolving and learning, and so inherently work in a way that is independent of what their developers could have foreseen (Yeung, 2018). To the end-user such as the healthcare worker, the AI-based solution may be opaque, and so they may not be able to tell when the solution is malfunctioning or inaccurate (Habli et al., 2020; Yeung, 2018).

    Brittleness

    Despite substantial progress in their development over the past few years, deep learning algorithms are still surprisingly brittle. This means that, when the algorithm faces a scenario that differs substantially from what it faced during training, it cannot contextualize and often produces nonsensical or inaccurate results. This happens because, unlike humans, most algorithms learn to perceive things within the confines of certain assumptions, but fail to generalize outside these assumptions. As an example of how this can be abused with malicious intent, subtle changes to medical images, imperceptible by humans, can render the results of disease-classifying algorithms inaccurate (Finlayson et al., 2018). The lack of interpretability of many AI-based solutions compounds this problem because it makes it difficult to troubleshoot how they reached the wrong conclusion.

    So far, more than 100 AI-based products have gained conformité européenne (CE) marking or Food and Drug Administration (FDA) clearance. These products can be found in continuously updated and searchable online databases curated by the FDA (Center for Devices & Radiological Health, n.d.), the American College of Radiology (Assess-AI, n.d.), and others (AI for Radiology, n.d., The Medical Futurist, n.d.; E. Wu et al., 2021). The increasing number of available products, the inherent complexity of many of these solutions, and the fact that many people who usually make purchasing decisions in hospitals are not familiar with evaluating such products make it important to think carefully when deciding on which product to purchase. Such decisions will need to be made after incorporating input from healthcare workers, information technology (IT) professionals, as well as management, finance, legal, and human resources professionals within hospitals.

    Deciding on whether to purchase an AI-based solution in radiology, as well as which of the increasing number of commercially available solutions to purchase, includes considerations of quality, safety, and finances. Over the past few years, several guidelines have emerged to help potential buyers make these decisions (A Buyer’s Guide to AI in Health and Care, 2020; Omoumi et al., 2021; Reddy et al., 2021), and these guidelines are likely to evolve in the future with changing expectations from customers, regulatory bodies, and stakeholders involved in reimbursement decisions.

    First of all, it has to be clear to the potential buyer what the problem is and whether AI is the appropriate approach to it, or whether alternatives exist that are more advantageous on balance. If AI is the appropriate approach, buyers should know exactly what the scope of a potential AI-based solution is, i.e. what specific problem it is designed to solve and under what specific circumstances. This includes whether the solution is intended for screening, diagnosis, monitoring, treatment recommendation, or another application. It also includes the intended users of the solution and what specific qualifications or training they are expected to have in order to operate the solution and interpret its outputs. It needs to be clear to buyers whether the solution is intended to replace certain tasks that would normally be performed by the end-user, act as a double reader, act as a triaging mechanism, or serve other tasks such as quality control. Buyers should also understand whether the solution is intended to provide “new” information (i.e. information that would otherwise be unavailable to the user), to improve the performance of an existing task beyond that of a human or other non-AI-based solution, or to save time or other resources.

    Buyers should also have access to information that allows them to assess the potential benefits of the AI solution, backed by published scientific evidence for its efficacy and cost-efficiency. How this is done will depend largely on the solution itself and the context in which it is expected to be deployed, but guidelines for this are available (National Institute for Health and Care Excellence (NICE), n.d.). Some questions to ask here would be: How much of an influence will the solution have on patient management? Will it improve diagnostic performance? Will it save time and money? Will it affect patients’ quality of life? It should also be clear to the buyer who exactly is expected to benefit from the use of this solution (Radiologists? Clinicians? Patients? The healthcare system or society as a whole?).

    As with any healthcare intervention, all AI-based solutions come with potential risks, and these should be made clear to the buyer. Some of these risks might have legal consequences, such as the potential for misdiagnosis. These risks should be quantified, and potential buyers should have a framework for dealing with them, including identifying a framework for accountability within the organizations implementing these solutions. Buyers should also ensure they clearly understand the potential negative effects on radiologists’ training and the potential disruption to radiologists’ workflows associated with the use of these solutions.

    Specifics of the AI solution’s design are also relevant to the decision on whether or not to purchase it. These include how robust the solution is to differences between vendors and scanning parameters, the circumstances under which the algorithm was trained (including potential confounding factors), and the way that performance was assessed. It should also be clear to buyers if and how potential sources of bias were accounted for during development. Because a core characteristic of AI-based solutions is their ability to continuously learn from new data, whether and how exactly this retraining is incorporated into the solution with time should also be clear to the buyer, including whether or not new regulatory approval is needed with each iteration. This also includes whether or not retraining is required, for example, due to changes in imaging equipment at the buyer’s institution.

    The main selling points of many AI-based solutions are ease of use and improved workflows. Therefore, potential buyers should carefully scrutinize how these solutions are to be integrated into existing workflows, including interoperability with PACS and electronic medical record systems. Whether the solution requires extra hardware (e.g. graphics processing units) or software (e.g. for visualization of the solution’s outputs), or whether it can readily be integrated into the buyer’s existing information technology infrastructure, influences the overall cost of the solution and is therefore also a critical consideration. In addition, the degree of manual interaction required, both under normal circumstances and for troubleshooting, should be known to the buyer. All potential users of the AI solution should be involved in the purchasing process to ensure that they are familiar with it and that it meets their professional ethical standards and suits their needs.
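    As a rough illustration of what lightweight workflow integration can look like in practice, the sketch below uses the pydicom library to inspect a study’s DICOM header and decide whether it should be forwarded to a hypothetical chest-radiograph AI endpoint. The routing rule and the notion of an "AI endpoint" are assumptions for illustration only; real integrations typically rely on PACS- or vendor-specific routing mechanisms.

```python
import pydicom

def should_route_to_ai(dicom_path: str) -> bool:
    """Read only the DICOM header and apply a simple, illustrative routing rule."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)   # skip pixel data for speed
    body_part = str(ds.get("BodyPartExamined", "")).upper()
    # Hypothetical rule: forward computed radiography chest studies to the AI tool.
    return ds.get("Modality") == "CR" and "CHEST" in body_part
```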

    From a regulatory perspective, it should be clear to the buyer whether the solution complies with medical device and data protection regulations. Has the solution been approved in the buyer’s country? If so, under which risk classification? Buyers should also consider creating data flow maps that display how the data flows in the operation of the AI-based solution, including who has access to the data.

    Finally, there are other factors to consider which are not necessarily unique to AI-based solutions and which buyers might be familiar with from purchasing other types of solutions. This includes the licensing model of the solution, how users are to be trained on using the solution, how the solution is maintained, how failures in the solution are dealt with, and whether additional costs are to be expected when scaling up the solution’s implementation (e.g. using the solution for more imaging equipment or more users). This allows the potential buyer to anticipate the current and future costs of purchasing the solution.

    The past decade of increasing interest and progress in AI-based solutions for medical imaging has set the stage for a number of trends that are likely to appear or intensify in the near future.

    Firstly, there is an increasing sentiment that, although AI holds a great deal of promise for interpretive applications (such as the detection of pathology), non-interpretive AI-based solutions might hold the most potential in terms of instilling efficiency into radiology workflows and improving patient experiences. This trend towards involving AI earlier in the patient management process is likely to extend to AI increasingly acting as a clinical decision support system to guide when and which imaging scans are performed.

    For this to happen, AI needs to be integrated into existing clinical information systems, and the specific algorithms used need to be able to handle more varied data. This will likely pave the way for the development of algorithms that are capable of integrating demographic, clinical, and laboratory patient data to make recommendations about patient management (Huang, Pareek, et al., 2020; Rockenbach, 2021). The previously mentioned natural language processing algorithms that have been used to interpret scan requests may be useful candidates for this.
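    As an illustration of what such data fusion can look like architecturally, the following minimal PyTorch sketch concatenates an imaging feature vector (e.g. from a CNN) with encoded clinical and demographic variables before classification, a simple "late fusion" design in the spirit of the approaches reviewed by Huang, Pareek, et al. (2020). All layer sizes are illustrative assumptions rather than a recommended configuration.

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Toy late-fusion classifier combining imaging features with tabular clinical data."""
    def __init__(self, image_features=512, clinical_features=16, classes=2):
        super().__init__()
        self.clinical_encoder = nn.Sequential(nn.Linear(clinical_features, 32), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(image_features + 32, 64), nn.ReLU(), nn.Linear(64, classes)
        )

    def forward(self, image_vec, clinical_vec):
        # Concatenate imaging features with the encoded clinical/demographic variables.
        fused = torch.cat([image_vec, self.clinical_encoder(clinical_vec)], dim=1)
        return self.classifier(fused)
```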

    In addition, we are likely to see AI algorithms that can interpret multiple different types of imaging data from the same patient. Currently, less than 5 % of commercially available AI-based solutions in medical imaging work with more than one imaging modality (Rezazade Mehrizi et al., 2021; van Leeuwen et al., 2021) despite the fact that the typical patient in a hospital receives multiple imaging scans during their stay (Shinagare et al., 2014). With this, it is also likely that more AI-based solutions will be developed that target hitherto neglected modalities such as nuclear imaging techniques and ultrasound.

    The current market for AI-based solutions in radiology is spread across a relatively large number of companies (Alexander et al., 2020). Potential users are likely to expect a streamlined integration of these products in their workflows, which can be challenging in such a fragmented market. Improved integration can be achieved in several different ways, including with vendor-neutral marketplaces or by the gradual consolidation of providers of AI-based solutions.

    With the expanding use of AI, the issue of trust between AI developers, healthcare professionals, regulators, and patients will become more relevant, and efforts to strengthen that trust are therefore likely to intensify. This will potentially include raising the expected standards of evidence for AI-based solutions (Aggarwal, Sounderajah, et al., 2021; X. Liu et al., 2019; van Leeuwen et al., 2021; Yusuf et al., 2020), making them more transparent through the use and improvement of interpretable AI techniques (Holzinger et al., 2017; Reyes et al., 2020; “Towards Trustable Machine Learning,” 2018), and enhancing techniques for maintaining patient data privacy (G. Kaissis et al., 2021; G. A. Kaissis et al., 2020).
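    One widely discussed privacy-preserving strategy is federated learning, in which model parameters rather than patient images are exchanged between institutions (G. A. Kaissis et al., 2020). The sketch below shows federated averaging in its simplest form, purely as an illustration of the principle; production systems add secure aggregation and other safeguards.

```python
import copy
import torch

def federated_average(local_models):
    """Average the parameters of models trained separately at each institution."""
    averaged = copy.deepcopy(local_models[0].state_dict())
    for key, value in averaged.items():
        if value.is_floating_point():   # average weights; leave integer buffers untouched
            averaged[key] = torch.stack(
                [m.state_dict()[key] for m in local_models]
            ).mean(dim=0)
    global_model = copy.deepcopy(local_models[0])
    global_model.load_state_dict(averaged)   # the shared model sees parameters, never patient data
    return global_model
```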

    Furthermore, while most existing regulations stipulate that AI-based algorithms cannot be modified after regulatory approval, this is likely to change in the future. The potential for these algorithms to learn from data acquired after approval and adapt to changing circumstances is a major advantage of AI. Still, frameworks for doing so have thus far been lacking in the healthcare sector. However, promising ideas have recently emerged, including adapting existing hospital quality assurance and improvement frameworks to monitor AI-based algorithms’ performance and the data they are trained on and update the algorithms accordingly (Feng et al., 2022). This will likely require the development of multidisciplinary teams within hospitals consisting of clinicians, IT professionals, and biostatisticians who closely collaborate with model developers and regulators (Feng et al., 2022).
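    As a rough illustration of the kind of continual performance monitoring such frameworks envisage (Feng et al., 2022), the sketch below computes the area under the ROC curve over a rolling window of recently confirmed cases and flags the algorithm for review when it drops below a locally agreed threshold. The window size and threshold are illustrative assumptions.

```python
from sklearn.metrics import roc_auc_score

def needs_review(labels, scores, window=500, threshold=0.85):
    """Flag the model if its AUC over the most recent confirmed cases falls below a threshold."""
    recent_labels, recent_scores = labels[-window:], scores[-window:]
    if len(set(recent_labels)) < 2:   # AUC is undefined when only one class is present
        return False
    return roc_auc_score(recent_labels, recent_scores) < threshold
```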

    While the obstacles discussed in previous sections might slow down the adoption of AI in radiology somewhat, the fear of AI potentially replacing radiologists is unlikely to be one of them. A recent survey from Europe showed that most radiologists did not perceive a reduction in their clinical workload after adopting AI-based solutions (European Society of Radiology (ESR), 2022), likely because, at the same time, demand for radiologists’ services has been continuously rising. Studies from around the world have shown that radiology professionals, particularly those with AI exposure and experience, are generally optimistic about the role of AI in their practice (Y. Chen et al., 2021; Huisman et al., 2021; Ooi et al., 2021; Santomartino & Yi, 2022; Scott et al., 2021).

    AI has shown promise in positively impacting virtually every facet of a radiology department’s work - from scheduling and protocolling patient scans to interpreting images and reaching diagnoses. Promising research on AI-based tools in radiology has not yet been widely translated to adoption in routine practice, however, because of a number of complex, partially intertwined issues. Potential solutions exist for many of these challenges, but many of these solutions require further refinement and testing. In the meantime, guidelines are emerging to help potential users of AI-based solutions in radiology navigate the increasing number of commercial products. This encourages their adoption in real-world scenarios, thus allowing their true potential to be uncovered, as well as their weaknesses to be identified and addressed in a safe and effective way. As these incremental improvements are made, these tools will likely evolve to handle more varied data, become integrated into consolidated workflows, become more transparent, and ultimately more useful for increasing efficiency and improving patient care.

    AAMC Report Reinforces Mounting Physician Shortage. (2021). AAMC. https://www.aamc.org/news-insights/press-releases/aamc-report-reinforces-mounting-physician-shortage

    A buyer’s guide to AI in health and care. (2020). NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/explore-all-resources/adopt-ai/a-buyers-guide-to-ai-in-health-and-care/

    Advanced AI solutions for radiology. (n.d.). Calantic Website. Retrieved July 3, 2022, from https://aivisions.calantic.com/

    Aggarwal, R., Farag, S., Martin, G., Ashrafian, H., & Darzi, A. (2021). Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. Journal of Medical Internet Research, 23(8), e26162. https://doi.org/10.2196/26162

    Aggarwal, R., Sounderajah, V., Martin, G., Ting, D. S. W., Karthikesalingam, A., King, D., Ashrafian, H., & Darzi, A. (2021). Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digital Medicine, 4(1), 65. https://doi.org/10.1038/s41746-021-00438-z

    Agnes, S. A., Anitha, J., Pandian, S. I. A., & Peter, J. D. (2019). Classification of Mammogram Images Using Multiscale all Convolutional Neural Network (MA-CNN). Journal of Medical Systems, 44(1), 30. https://doi.org/10.1007/s10916-019-1494-z

    AIF360: A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. (n.d.). Github. Retrieved June 11, 2022, from https://github.com/Trusted-AI/AIF360

    AI for radiology. (n.d.). Retrieved June 26, 2022, from https://grand-challenge.org/aiforradiology/?subspeciality=Abdomen&modality=All&ce_under=All&ce_class=All&fda_class=All&sort_by=last%20modified&search=

    Akagi, M., Nakamura, Y., Higaki, T., Narita, K., Honda, Y., Zhou, J., Yu, Z., Akino, N., & Awai, K. (2019). Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT. European Radiology, 29(11), 6163–6171. https://doi.org/10.1007/s00330-019-06170-3

    Akkus, Z., Cai, J., Boonrod, A., Zeinoddini, A., Weston, A. D., Philbrick, K. A., & Erickson, B. J. (2019). A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving Clinical Workflow. Journal of the American College of Radiology: JACR, 16(9 Pt B), 1318–1328. https://doi.org/10.1016/j.jacr.2019.06.004

    Al-Antari, M. A., Al-Masni, M. A., & Kim, T.-S. (2020). Deep Learning Computer-Aided Diagnosis for Breast Lesion in Digital Mammogram. Advances in Experimental Medicine and Biology, 1213, 59–72. https://doi.org/10.1007/978-3-030-33128-3_4

    Alexander, A., Jiang, A., Ferreira, C., & Zurkiya, D. (2020). An Intelligent Future for Medical Imaging: A Market Outlook on Artificial Intelligence for Medical Imaging. Journal of the American College of Radiology: JACR, 17(1 Pt B), 165–170. https:// doi.org/10.1016/j.jacr.2019.07.019

    Allen, B., Agarwal, S., Coombs, L., Wald, C., & Dreyer, K. (2021). 2020 ACR Data Science Institute Artificial Intelligence Survey. Journal of the American College of Radiology: JACR.

    Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., & Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1), 53.

    Arbabshirani, M. R., Fornwalt, B. K., Mongelluzzo, G. J., Suever, J. D., Geise, B. D., Patel, A. A., & Moore, G. J. (2018). Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digital Medicine, 1, 9. https://doi.org/10.1038/s41746-017-0015-z

    Ardila, D., Kiraly, A. P., Bharadwaj, S., Choi, B., Reicher, J. J., Peng, L., Tse, D., Etemadi, M., Ye, W., Corrado, G., Naidich, D. P., & Shetty, S. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954–961. https://doi. org/10.1038/s41591-019-0447-x

    Arefan, D., Mohamed, A. A., Berg, W. A., Zuley, M. L., Sumkin, J. H., & Wu, S. (2020). Deep learning modeling using normal mammograms for predicting breast cancer risk. Medical Physics, 47(1), 110–118. https://doi.org/10.1002/mp.13886

    Assess-AI. (n.d.). Retrieved July 2, 2022, from https://www.acrdsi.org/DSI-Services/Assess-AI

    Azer, S. A. (2019). Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review. World Journal of Gastrointestinal Oncology, 11(12), 1218–1230. https://doi.org/10.4251/wjgo.v11. i12.1218

    Bacchi, S., Zerner, T., Oakden-Rayner, L., Kleinig, T., Patel, S., & Jannes, J. (2020). Deep Learning in the Prediction of Ischaemic Stroke Thrombolysis Functional Outcomes: A Pilot Study. Academic Radiology, 27(2), e19–e23. https://doi. org/10.1016/j.acra.2019.03.015

    Baccianella, S., & Gough, T. (n.d.). Why cloud computing is the best option for hospitals adopting AI. Retrieved June 11, 2022, from https://www.aidence.com/articles/cloud-best-option-imaging-ai/

    Balint, B. J., Steenburg, S. D., Lin, H., Shen, C., Steele, J. L., & Gunderman, R. B. (2014). Do telephone call interruptions have an impact on radiology resident diagnostic accuracy? Academic Radiology, 21(12), 1623–1628. https://doi.org/10.1016/j. acra.2014.08.001

    Baltruschat, I., Steinmeister, L., Nickisch, H., Saalbach, A., Grass, M., Adam, G., Knopp, T., & Ittrich, H. (2021). Smart chest X-ray worklist prioritization using artificial intelligence: a clinical workflow simulation. European Radiology, 31(6), 3837– 3845. https://doi.org/10.1007/s00330-020-07480-7

    Bash, S., Johnson, B., Gibbs, W., Zhang, T., Shankaranarayanan, A., & Tanenbaum, L. N. (2021). Deep Learning Image Processing Enables 40 % Faster Spinal MR Scans Which Match or Exceed Quality of Standard of Care: A Prospective Multicenter Multireader Study. Clinical Neuroradiology. https://doi.org/10.1007/s00062-021-01121-2

    Bash, S., Wang, L., Airriess, C., Zaharchuk, G., Gong, E., Shankaranarayanan, A., & Tanenbaum, L. N. (2021). Deep Learning Enables 60 % Accelerated Volumetric Brain MRI While Preserving Quantitative Performance: A Prospective, Multicenter, Multireader Trial. AJNR. American Journal of Neuroradiology, 42(12), 2130–2137. https://doi.org/10.3174/ajnr.A7358

    Boas, F. E., & Fleischmann, D. (2012). CT artifacts: causes and reduction techniques. Imaging in Medicine, 4(2), 229–240. https://doi.org/10.2217/iim.12.13

    Boland, G. W., Duszak, R., Jr, & Kalra, M. (2014). Protocol design and optimization. Journal of the American College of Radiology: JACR, 11(5), 440–441. https://doi.org/10.1016/j. jacr.2014.01.021

    Bonavita, I., Rafael-Palou, X., Ceresa, M., Piella, G., Ribas, V., & González Ballester, M. A. (2020). Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline. Computer Methods and Programs in Biomedicine, 185, 105172. https://doi.org/10.1016/j.cmpb.2019.105172

    Bozkurt, S., Cahan, E. M., Seneviratne, M. G., Sun, R., Lossio-Ventura, J. A., Ioannidis, J. P. A., & Hernandez-Boussard, T. (2020). Reporting of demographic data and representativeness in machine learning models using electronic health records. Journal of the American Medical Informatics Association: JAMIA, 27(12), 1878–1884. https://doi.org/10.1093/jamia/ocaa164

    Brown, A. D., & Marotta, T. R. (2018). Using machine learning for sequence-level automated MRI protocol selection in neuroradiology. Journal of the American Medical Informatics Association: JAMIA, 25(5), 568–571. https://doi.org/10.1093/ jamia/ocx125

    Bruls, R. J. M., & Kwee, R. M. (2020). Workload for radiologists during on-call hours: dramatic increase in the past 15 years. Insights into Imaging, 11(1), 121. https://doi.org/10.1186/s13244-020-00925-z

    Buruk, B., Ekmekci, P. E., & Arda, B. (2020). A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Medicine, Health Care, and Philosophy, 23(3), 387–399. https://doi.org/10.1007/s11019-020-09948-1

    Cadario, R., Longoni, C., & Morewedge, C. K. (2021). Understanding, explaining, and utilizing medical artificial intelligence. Nature Human Behaviour, 5(12), 1636–1642. https://doi.org/10.1038/s41562-021-01146-0

    Center for Devices, & Radiological Health. (n.d.). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. U.S. Food and Drug Administration; FDA. Retrieved July 2, 2022, from https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices

    Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing Machine Learning in Health Care - Addressing Ethical Challenges. The New England Journal of Medicine, 378(11), 981–983. https://doi.org/10.1056/NEJMp1714229

    Chen, H., Zhang, Y., Kalra, M. K., Lin, F., Chen, Y., Liao, P., Zhou, J., & Wang, G. (2017). Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network. IEEE Transactions on Medical Imaging, 36(12), 2524–2535. https://doi.org/10.1109/TMI.2017.2715284

    Chen, Y., Stavropoulou, C., Narasinkan, R., Baker, A., & Scarbrough, H. (2021). Professionals’ responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study. BMC Health Services Research, 21(1), 813. https://doi.org/10.1186/ s12913-021-06861-y

    Choe, J., Lee, S. M., Do, K.-H., Lee, G., Lee, J.-G., Lee, S. M., & Seo, J. B. (2019). Deep Learning-based Image Conversion of CT Reconstruction Kernels Improves Radiomics Reproducibility for Pulmonary Nodules or Masses. Radiology, 292(2), 365–373. https://doi.org/10.1148/radiol.2019181960

    Choi, K. S., Choi, S. H., & Jeong, B. (2019). Prediction of IDH genotype in gliomas with dynamic susceptibility contrast perfusion MR imaging using an explainable recurrent neural network. Neuro-Oncology, 21(9), 1197–1209. https://doi.org/10.1093/neuonc/noz095

    Chong, L. R., Tsai, K. T., Lee, L. L., Foo, S. G., & Chang, P. C. (2020). Artificial Intelligence Predictive Analytics in the Management of Outpatient MRI Appointment No-Shows. AJR. American Journal of Roentgenology, 215(5), 1155–1162. https://doi.org/10.2214/AJR.19.22594

    Cikes, M., Sanchez-Martinez, S., Claggett, B., Duchateau, N., Piella, G., Butakoff, C., Pouleur, A. C., Knappe, D., Biering- Sørensen, T., Kutyifa, V., Moss, A., Stein, K., Solomon, S. D., & Bijnens, B. (2019). Machine learning-based phenogrouping in heart failure to identify responders to cardiac resynchronization therapy. European Journal of Heart Failure, 21(1), 74–85. https://doi.org/10.1002/ejhf.1333

    Ciompi, F., Chung, K., van Riel, S. J., Setio, A. A. A., Gerke, P. K., Jacobs, C., Scholten, E. T., Schaefer-Prokop, C., Wille, M. M. W., Marchianò, A., Pastorino, U., Prokop, M., & van Ginneken, B. (2017). Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Scientific Reports, 7, 46479. https://doi.org/10.1038/srep46479

    Clinical radiology UK workforce census 2019 report. (2019). https://www.rcr.ac.uk/publication/clinical-radiology-uk-workforce-census-2019-report

    Cloud security for healthcare services. (2021, January 14). ENISA. https://www.enisa.europa.eu/publications/cloud-security-for-healthcare-services/

    CONSORT-AI and SPIRIT-AI Steering Group. (2019). Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature Medicine, 25(10), 1467–1468. https://doi.org/10.1038/s41591-019-0603-3

    Curated marketplace. (2018, May 22). Blackford. https://www.blackfordanalysis.com/applications/

    Dance, A. (2021). AI spots cell structures that humans can’t. Nature, 592(7852), 154–155.

    Dantas, L. F., Fleck, J. L., Cyrino Oliveira, F. L., & Hamacher, S. (2018). No-shows in appointment scheduling - a systematic literature review. Health Policy, 122(4), 412–421. https://doi.org/10.1016/j.healthpol.2018.02.002

    Deák, Z., Grimm, J. M., Treitl, M., Geyer, L. L., Linsenmaier, U., Körner, M., Reiser, M. F., & Wirth, S. (2013). Filtered back projection, adaptive statistical iterative reconstruction, and a model-based iterative reconstruction in abdominal CT: an experimental clinical study. Radiology, 266(1), 197–206. https://doi.org/10.1148/radiol.12112707

    Dembrower, K., Liu, Y., Azizpour, H., Eklund, M., Smith, K., Lindholm, P., & Strand, F. (2020). Comparison of a Deep Learning Risk Score and Standard Mammographic Density Score for Breast Cancer Risk Prediction. Radiology, 294(2), 265–272. https://doi.org/10.1148/radiol.2019190872

    Do, B. H., Langlotz, C., & Beaulieu, C. F. (2017). Bone Tumor Diagnosis Using a Naïve Bayesian Model of Demographic and Radiographic Features. Journal of Digital Imaging, 30(5), 640–647. https://doi.org/10.1007/s10278-017-0001-7

    Dou, Q., Yu, L., Chen, H., Jin, Y., Yang, X., Qin, J., & Heng, P.-A. (2017). 3D deeply supervised network for automated segmentation of volumetric medical images. Medical Image Analysis, 41, 40–54. https://doi.org/10.1016/j. media.2017.05.001

    Eche, T., Schwartz, L. H., Mokrane, F.-Z., & Dercle, L. (2021). Toward Generalizability in the Deployment of Artificial Intelligence in Radiology: Role of Computation Stress Testing to Overcome Underspecification. Radiology. Artificial Intelligence, 3(6), e210097. https://doi.org/10.1148/ryai.2021210097

    England, N. H. S., & Improvement, N. H. S. (2019). NHS diagnostic waiting times and activity data. NHS. https://www.england.nhs.uk/statistics/wp-content/uploads/sites/2/2021/12/DWTA-Report-October-2021_M43D4.pdf

    Esmaeilzadeh, P. (2020). Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives. BMC Medical Informatics and Decision Making, 20(1), 170. https://doi. org/10.1186/s12911-020-01191-1

    Esses, S. J., Lu, X., Zhao, T., Shanbhogue, K., Dane, B., Bruno, M., & Chandarana, H. (2018). Automated image quality evaluation of T2 -weighted liver MRI utilizing deep learning architecture. Journal of Magnetic Resonance Imaging: JMRI, 47(3), 723–728. https://doi.org/10.1002/jmri.25779

    European Society of Radiology (ESR). (2022). Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology. Insights into Imaging, 13(1), 107. https://doi.org/10.1186/s13244-022-01247-y

    Faron, A., Sichtermann, T., Teichert, N., Luetkens, J. A., Keulers, A., Nikoubashman, O., Freiherr, J., Mpotsaris, A., & Wiesmann, M. (2020). Performance of a Deep-Learning Neural Network to Detect Intracranial Aneurysms from 3D TOF-MRA Compared to Human Readers. Clinical Neuroradiology, 30(3), 591–598. https://doi.org/10.1007/s00062-019-00809-w

    Feng, J., Phillips, R. V., Malenica, I., Bishara, A., Hubbard, A. E., Celi, L. A., & Pirracchio, R. (2022). Clinical artificial intelligence quality improvement: towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digital Medicine, 5(1), 66. https://doi.org/10.1038/s41746-022- 00611-y

    Finlayson, S. G., Chung, H. W., Kohane, I. S., & Beam, A. L. (2018). Adversarial Attacks Against Medical Deep Learning Systems. In arXiv [cs.CR]. arXiv.

    Flanders, A. E., Prevedello, L. M., Shih, G., Halabi, S. S., Kalpathy-Cramer, J., Ball, R., Mongan, J. T., Stein, A., Kitamura, F. C., Lungren, M. P., Choudhary, G., Cala, L., Coelho, L., Mogensen, M., Morón, F., Miller, E., Ikuta, I., Zohrabian, V., McDonnell, O., … RSNA-ASNR 2019 Brain Hemorrhage CT Annotators. (2020). Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge. Radiology. Artificial Intelligence, 2(3), e190211. https://doi.org/10.1148/ryai.2020190211

    Freeman, K., Geppert, J., Stinton, C., Todkill, D., Johnson, S., Clarke, A., & Taylor-Phillips, S. (2021). Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ , 374, n1872. https://doi.org/10.1136/bmj.n1872

    General Data Protection Regulation (GDPR) – Official Legal Text. (2016, July 13). General Data Protection Regulation (GDPR). https://gdpr-info.eu/

    Ghafur, S., Van Dael, J., Leis, M., Darzi, A., & Sheikh, A. (2020). Public perceptions on data sharing: key insights from the UK and the USA. The Lancet. Digital Health, 2(9), e444–e446. https://doi.org/10.1016/S2589-7500(20)30161-8

    Ghani, M. U., & Clem Karl, W. (2019). Fast Enhanced CT Metal Artifact Reduction using Data Domain Deep Learning. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1904.04691

    Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet. Digital Health, 3(11), e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9

    Ginat, D. T. (2020). Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage. Neuroradiology, 62(3), 335–340. https://doi.org/10.1007/ s00234-019-02330-w

    Goebel, J., Stenzel, E., Guberina, N., Wanke, I., Koehrmann, M., Kleinschnitz, C., Umutlu, L., Forsting, M., Moenninghoff, C., & Radbruch, A. (2018). Automated ASPECT rating: comparison between the Frontier ASPECT Score software and the Brainomix software. Neuroradiology, 60(12), 1267–1272. https://doi.org/10.1007/s00234-018-2098-x

    Habli, I., Lawton, T., & Porter, Z. (2020). Artificial intelligence in health care: accountability and safety. Bulletin of the World Health Organization, 98(4), 251–256. https://doi.org/10.2471/ BLT.19.237487

    Halabi, S. S., Prevedello, L. M., Kalpathy-Cramer, J., Mamonov, A. B., Bilbily, A., Cicero, M., Pan, I., Pereira, L. A., Sousa, R. T., Abdala, N., Kitamura, F. C., Thodberg, H. H., Chen, L., Shih, G., Andriole, K., Kohli, M. D., Erickson, B. J., & Flanders, A. E. (2019). The RSNA Pediatric Bone Age Machine Learning Challenge. Radiology, 290(2), 498–503. https://doi. org/10.1148/radiol.2018180736

    Hargreaves, B. A., Worters, P. W., Pauly, K. B., Pauly, J. M., Koch, K. M., & Gold, G. E. (2011). Metal-induced artifacts in MRI. AJR. American Journal of Roentgenology, 197(3), 547–555. https://doi.org/10.2214/AJR.11.7364

    Harry, E., Sinsky, C., Dyrbye, L. N., Makowski, M. S., Trockel, M., Tutty, M., Carlasare, L. E., West, C. P., & Shanafelt, T. D. (2021). Physician Task Load and the Risk of Burnout Among US Physicians in a National Survey. Joint Commission Journal on Quality and Patient Safety / Joint Commission Resources, 47(2), 76–85. https://doi.org/10.1016/j.jcjq.2020.09.011

    Hata, A., Yanagawa, M., Yamagata, K., Suzuki, Y., Kido, S., Kawata, A., Doi, S., Yoshida, Y., Miyata, T., Tsubamoto, M., Kikuchi, N., & Tomiyama, N. (2021). Deep learning algorithm for detection of aortic dissection on non-contrast-enhanced CT. European Radiology, 31(2), 1151–1159. https://doi.org/10.1007/ s00330-020-07213-w

    Hauptmann, A., Arridge, S., Lucka, F., Muthurangu, V., & Steeden, J. A. (2019). Real-time cardiovascular MR with spatio-temporal artifact suppression using deep learning-proof of concept in congenital heart disease. Magnetic Resonance in Medicine: Official Journal of the Society of Magnetic Resonance in Medicine / Society of Magnetic Resonance in Medicine, 81(2), 1143–1156. https://doi.org/10.1002/mrm.27480

    Health Ethics & Governance. (2021, June 28). Ethics and governance of artificial intelligence for health. World Health Organization. https://www.who.int/publications/i/item/9789240029200

    He, L., Li, H., Dudley, J. A., Maloney, T. C., Brady, S. L., Somasundaram, E., Trout, A. T., & Dillman, J. R. (2019). Machine Learning Prediction of Liver Stiffness Using Clinical and T2-Weighted MRI Radiomic Data. AJR. American Journal of Roentgenology, 213(3), 592–601. https://doi.org/10.2214/ AJR.19.21082

    Herent, P., Schmauch, B., Jehanno, P., Dehaene, O., Saillard, C., Balleyguier, C., Arfi-Rouche, J., & Jégou, S. (2019). Detection and characterization of MRI breast lesions using deep learning. Diagnostic and Interventional Imaging, 100(4), 219–225. https://doi.org/10.1016/j.diii.2019.02.008

    Hinton, B., Ma, L., Mahmoudzadeh, A. P., Malkov, S., Fan, B., Greenwood, H., Joe, B., Lee, V., Kerlikowske, K., & Shepherd, J. (2019). Deep learning networks find unique mammographic differences in previous negative mammograms between interval and screen-detected cancers: a case-case study. Cancer Imaging: The Official Publication of the International Cancer Imaging Society, 19(1), 41. https://doi.org/10.1186/s40644-019-0227-3

    Holzinger, A., Biemann, C., Pattichis, C. S., & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain? In arXiv [cs.AI]. arXiv. http://arxiv.org/abs/1712.09923

    Hötker, A. M., Da Mutten, R., Tiessen, A., Konukoglu, E., & Donati, O. F. (2021). Improving workflow in prostate MRI: AI-based decision-making on biparametric or multiparametric MRI. Insights into Imaging, 12(1), 112. https://doi.org/10.1186/s13244-021-01058-7

    Huang, S.-C., Kothari, T., Banerjee, I., Chute, C., Ball, R. L., Borus, N., Huang, A., Patel, B. N., Rajpurkar, P., Irvin, J., Dunnmon, J., Bledsoe, J., Shpanskaya, K., Dhaliwal, A., Zamanian, R., Ng, A. Y., & Lungren, M. P. (2020). PENet-a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging. NPJ Digital Medicine, 3, 61. https://doi.org/10.1038/s41746-020-0266-y

    Huang, S.-C., Pareek, A., Seyyedi, S., Banerjee, I., & Lungren, M. P. (2020). Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. NPJ Digital Medicine, 3, 136. https:// doi.org/10.1038/s41746-020-00341-z

    Huisman, M., Ranschaert, E., Parker, W., Mastrodicasa, D., Koci, M., Pinto de Santos, D., Coppola, F., Morozov, S., Zins, M., Bohyn, C., Koç, U., Wu, J., Veean, S., Fleischmann, D., Leiner, T., & Willemink, M. J. (2021). An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. European Radiology, 31(9), 7058–7066. https://doi.org/10.1007/s00330-021-07781-5

    Hu, S.-Y., Santus, E., Forsyth, A. W., Malhotra, D., Haimson, J., Chatterjee, N. A., Kramer, D. B., Barzilay, R., Tulsky, J. A., & Lindvall, C. (2019). Can machine learning improve patient selection for cardiac resynchronization therapy? PloS One, 14(10), e0222397. https://doi.org/10.1371/journal.pone.0222397

    Hwang, E. J., Nam, J. G., Lim, W. H., Park, S. J., Jeong, Y. S., Kang, J. H., Hong, E. K., Kim, T. M., Goo, J. M., Park, S., Kim, K. H., & Park, C. M. (2019). Deep Learning for Chest Radiograph Diagnosis in the Emergency Department. Radiology, 293(3), 573–580. https://doi.org/10.1148/radiol.2019191225

    Hwang, E. J., Park, S., Jin, K.-N., Kim, J. I., Choi, S. Y., Lee, J. H., Goo, J. M., Aum, J., Yim, J.-J., Park, C. M., & Deep Learning- Based Automatic Detection Algorithm Development and Evaluation Group. (2019). Development and Validation of a Deep Learning-based Automatic Detection Algorithm for Active Pulmonary Tuberculosis on Chest Radiographs. Clinical Infectious Diseases: An Official Publication of the Infectious Diseases Society of America, 69(5), 739–747. https://doi.org/10.1093/cid/ciy967

    Hwang, S., Kim, H.-E., Jeong, J., & Kim, H.-J. (2016). A novel approach for tuberculosis screening based on deep convolutional neural networks. In G. D. Tourassi & S. G. Armato (Eds.), Medical Imaging 2016: Computer-Aided Diagnosis. SPIE. https://doi.org/10.1117/12.2216198

    IBM Watson Studio - Model Risk Management. (n.d.). Retrieved June 11, 2022, from https://www.ibm.com/cloud/watson-studio/model-risk-management

    Imaging AI Marketplace - overview. (n.d.). Retrieved June 11, 2022, from https://www.ibm.com/products/imaging-ai-marketplace

    Jamaludin, A., Lootus, M., Kadir, T., Zisserman, A., Urban, J., Battié, M. C., Fairbank, J., McCall, I., & Genodisc Consortium. (2017). ISSLS PRIZE IN BIOENGINEERING SCIENCE 2017: Automation of reading of radiological features from magnetic resonance images (MRIs) of the lumbar spine without human intervention is comparable with an expert radiologist. European Spine Journal: Official Publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society, 26(5), 1374–1383. https://doi.org/10.1007/s00586-017-4956-3

    Kaissis, G. A., Makowski, M. R., Rückert, D., & Braren, R. F. (2020). Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2(6), 305–311. https://doi.org/10.1038/s42256-020-0186-1

    Kaissis, G., Ziller, A., Passerat-Palmbach, J., Ryffel, T., Usynin, D., Trask, A., Lima, I., Mancuso, J., Jungmann, F., Steinborn, M.-M., Saleh, A., Makowski, M., Rueckert, D., & Braren, R. (2021). End-to-end privacy preserving deep learning on multi-institutional medical imaging. Nature Machine Intelligence, 3(6), 473–484. https://doi.org/10.1038/s42256-021-00337-8

    Kalra, A., Chakraborty, A., Fine, B., & Reicher, J. (2020). Machine Learning for Automation of Radiology Protocols for Quality and Efficiency Improvement. Journal of the American College of Radiology: JACR, 17(9), 1149–1158. https://doi.org/10.1016/j.jacr.2020.03.012

    Kao, P.-Y., Chen, J. W., & Manjunath, B. S. (2019). Improving 3D U-Net for Brain Tumor Segmentation by Utilizing Lesion Prior. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1907.00281

    Kapoor, N., Lacson, R., & Khorasani, R. (2020). Workflow Applications of Artificial Intelligence in Radiology and an Overview of Available Tools. Journal of the American College of Radiology: JACR, 17(11), 1363–1370. https://doi.org/10.1016/j. jacr.2020.08.016

    Kathirvelu, D., Vinupritha, P., & Kalpana, V. (2019). A computer aided diagnosis system for measurement of mandibular cortical thickness on dental panoramic radiographs in prediction of women with low bone mineral density. Journal of Medical Systems, 43(6), 148. https://doi.org/10.1007/s10916-019-1268-7

    Ker, J., Singh, S. P., Bai, Y., Rao, J., Lim, T., & Wang, L. (2019). Image Thresholding Improves 3-Dimensional Convolutional Neural Network Diagnosis of Different Acute Brain Hemorrhages on Computed Tomography Scans. Sensors, 19(9). https://doi.org/10.3390/s19092167

    Khan, F. A., Majidulla, A., Tavaziva, G., Nazish, A., Abidi, S. K., Benedetti, A., Menzies, D., Johnston, J. C., Khan, A. J., & Saeed, S. (2020). Chest x-ray analysis with deep learning- based software as a triage test for pulmonary tuberculosis: a prospective study of diagnostic accuracy for culture-confirmed disease. The Lancet. Digital Health, 2(11), e573–e581. https://doi.org/10.1016/S2589-7500(20)30221-1

    Kim, D. W., Jang, H. Y., Kim, K. W., Shin, Y., & Park, S. H. (2019). Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers. Korean Journal of Radiology: Official Journal of the Korean Radiological Society, 20(3), 405–410. https://doi.org/10.3348/ kjr.2019.0025

    Kim, K. H., & Park, S.-H. (2017). Artificial neural network for suppression of banding artifacts in balanced steady-state free precession MRI. Magnetic Resonance Imaging, 37, 139–146. https://doi.org/10.1016/j.mri.2016.11.020

    Korteling, J. E. H., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus Artificial Intelligence. Frontiers in Artificial Intelligence, 4, 622364. https://doi.org/10.3389/frai.2021.622364

    Kühl, N., Goutier, M., Baier, L., Wolff, C., & Martin, D. (2020). Human vs. supervised machine learning: Who learns patterns faster? In arXiv [cs.AI] arXiv. http://arxiv.org/abs/2012.03661

    Kuo, W., Häne, C., Mukherjee, P., Malik, J., & Yuh, E. L. (2019). Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning. Proceedings of the National Academy of Sciences of the United States of America, 116(45), 22737–22745. https://doi.org/10.1073/ pnas.1908021116

    Langerhuizen, D. W. G., Janssen, S. J., Mallee, W. H., van den Bekerom, M. P. J., Ring, D., Kerkhoffs, G. M. M. J., Jaarsma, R. L., & Doornberg, J. N. (2019). What Are the Applications and Limitations of Artificial Intelligence for Fracture Detection and Classification in Orthopaedic Trauma Imaging? A Systematic Review. Clinical Orthopaedics and Related Research, 477(11), 2482–2491. https://doi.org/10.1097/CORR.0000000000000848

    Lang, N., Zhang, Y., Zhang, E., Zhang, J., Chow, D., Chang, P., Yu, H. J., Yuan, H., & Su, M.-Y. (2019). Differentiation of spinal metastases originated from lung and other cancers using radiomics and deep learning based on DCE-MRI. Magnetic Resonance Imaging, 64, 4–12. https://doi.org/10.1016/j. mri.2019.02.013

    Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H., & Ferrante, E. (2020). Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proceedings of the National Academy of Sciences of the United States of America, 117(23), 12592–12594. https://doi.org/10.1073/pnas.1919012117

    Lee, J.-S., Adhikari, S., Liu, L., Jeong, H.-G., Kim, H., & Yoon, S.-J. (2019). Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer- assisted diagnosis system: a preliminary study. Dento Maxillo Facial Radiology, 48(1), 20170344. https://doi.org/10.1259/ dmfr.20170344

    Lee, Y. H. (2018). Efficiency Improvement in a Busy Radiology Practice: Determination of Musculoskeletal Magnetic Resonance Imaging Protocol Using Deep-Learning Convolutional Neural Networks. Journal of Digital Imaging, 31(5), 604–610. https://doi.org/10.1007/s10278-018-0066-y

    Leiner, T., Bennink, E., Mol, C. P., Kuijf, H. J., & Veldhuis, W. B. (2021). Bringing AI to the clinic: blueprint for a vendor-neutral AI deployment infrastructure. Insights into Imaging, 12(1), 11. https://doi.org/10.1186/s13244-020-00931-1

    Lekadir, K., Osuala, R., Gallin, C., Lazrak, N., Kushibar, K., Tsakou, G., Aussó, S., Alberich, L. C., Marias, K., Tsiknakis, M., Colantonio, S., Papanikolaou, N., Salahuddin, Z., Woodruff, H. C., Lambin, P., & Martí-Bonmatí, L. (2021). FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/2109.09658

    Letourneau-Guillon, L., Camirand, D., Guilbert, F., & Forghani, R. (2020). Artificial Intelligence Applications for Workflow, Process Optimization and Predictive Analytics. Neuroimaging Clinics of North America, 30(4), e1–e15. https://doi.org/10.1016/j.nic.2020.08.008

    Levin, D. C., Parker, L., & Rao, V. M. (2017). Recent Trends in Imaging Use in Hospital Settings: Implications for Future Planning. Journal of the American College of Radiology: JACR, 14(3), 331–336. https://doi.org/10.1016/j.jacr.2016.08.025

    Lindsey, R., Daluiski, A., Chopra, S., Lachapelle, A., Mozer, M., Sicular, S., Hanel, D., Gardner, M., Gupta, A., Hotchkiss, R., & Potter, H. (2018). Deep neural network improves fracture detection by clinicians. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11591–11596. https://doi.org/10.1073/pnas.1806905115

    Liu, F., Tang, J., Ma, J., Wang, C., Ha, Q., Yu, Y., & Zhou, Z. (2021). The application of artificial intelligence to chest medical image analysis. Intelligent Medicine, 1(3), 104–117. https://doi.org/10.1016/j.imed.2021.06.004

    Liu, F., Zhou, Z., Samsonov, A., Blankenbaker, D., Larison, W., Kanarek, A., Lian, K., Kambhampati, S., & Kijowski, R. (2018). Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection. Radiology, 289(1), 160–169. https://doi.org/10.1148/ radiol.2018172986

    Liu, X., Cruz Rivera, S., Moher, D., Calvert, M. J., Denniston, A. K., & SPIRIT-AI and CONSORT-AI Working Group. (2020). Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nature Medicine, 26(9), 1364–1374. https://doi.org/10.1038/ s41591-020-1034-x

    Liu, X., Faes, L., Kale, A. U., Wagner, S. K., Fu, D. J., Bruynseels, A., Mahendiran, T., Moraes, G., Shamdas, M., Kern, C., Ledsam, J. R., Schmid, M. K., Balaskas, K., Topol, E. J., Bachmann, L. M., Keane, P. A., & Denniston, A. K. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet. Digital Health, 1(6), e271–e297. https://doi.org/10.1016/S2589- 7500(19)30123-2

    Li, X., Shen, L., Xie, X., Huang, S., Xie, Z., Hong, X., & Yu, J. (2020). Multi-resolution convolutional networks for chest X-ray radiograph based lung nodule detection. Artificial Intelligence in Medicine, 103, 101744. https://doi.org/10.1016/j. artmed.2019.101744

    Lotan, E., Tschider, C., Sodickson, D. K., Caplan, A. L., Bruno, M., Zhang, B., & Lui, Y. W. (2020). Medical Imaging and Privacy in the Era of Artificial Intelligence: Myth, Fallacy, and the Future. Journal of the American College of Radiology: JACR, 17(9), 1159–1162. https://doi.org/10.1016/j.jacr.2020.04.007

    Maegerlein, C., Fischer, J., Mönch, S., Berndt, M., Wunderlich, S., Seifert, C. L., Lehm, M., Boeckh-Behrens, T., Zimmer, C., & Friedrich, B. (2019). Automated Calculation of the Alberta Stroke Program Early CT Score: Feasibility and Reliability. Radiology, 291(1), 141–148. https://doi.org/10.1148/radiol.2019181228

    Mairhöfer, D., Laufer, M., Simon, P. M., Sieren, M., Bischof, A., Käster, T., Barth, E., Barkhausen, J., & Martinetz, T. (2021). An AI-based Framework for Diagnostic Quality Assessment of Ankle Radiographs. https://openreview.net/pdf?id=bj04hJss_xZ

    Mancio, J., Pashakhanloo, F., El-Rewaidy, H., Jang, J., Joshi, G., Csecs, I., Ngo, L., Rowin, E., Manning, W., Maron, M., & Nezafat, R. (2022). Machine learning phenotyping of scarred myocardium from cine in hypertrophic cardiomyopathy. European Heart Journal Cardiovascular Imaging, 23(4), 532–542. https://doi.org/10.1093/ehjci/jeab056

    Matsoukas, S., Morey, J., Lock, G., Chada, D., Shigematsu, T., Marayati, N. F., Delman, B. N., Doshi, A., Majidi, S., De Leacy, R., Kellner, C. P., & Fifi, J. T. (2022). AI software detection of large vessel occlusion stroke on CT angiography: a real-world prospective diagnostic test accuracy study. Journal of Neurointerventional Surgery. https://doi.org/10.1136/neurintsurg-2021-018391

    McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., … Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6

    McLeavy, C. M., Chunara, M. H., Gravell, R. J., Rauf, A., Cushnie, A., Staley Talbot, C., & Hawkins, R. M. (2021). The future of CT: deep learning reconstruction. Clinical Radiology, 76(6), 407–415. https://doi.org/10.1016/j.crad.2021.01.010

    Medical AI evaluation. (n.d.). Retrieved June 26, 2022, from https://ericwu09.github.io/medical-ai-evaluation/

    Mlynarski, P., Delingette, H., Criminisi, A., & Ayache, N. (2019). Deep learning with mixed supervision for brain tumor segmentation. Journal of Medical Imaging (Bellingham, Wash.), 6(3), 034002. https://doi.org/10.1117/1.JMI.6.3.034002

    Mongan, J., Moy, L., & Kahn, C. E., Jr. (2020). Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiology. Artificial Intelligence, 2(2), e200029. https://doi.org/10.1148/ryai.2020200029

    Moon, H., Huo, Y., Abramson, R. G., Peters, R. A., Assad, A., Moyo, T. K., Savona, M. R., & Landman, B. A. (2019). Acceleration of spleen segmentation with end-to-end deep learning method and automated pipeline. Computers in Biology and Medicine, 107, 109–117. https://doi.org/10.1016/j.compbiomed.2019.01.018

    Morey, J. R., Zhang, X., Yaeger, K. A., Fiano, E., Marayati, N. F., Kellner, C. P., De Leacy, R. A., Doshi, A., Tuhrim, S., & Fifi, J. T. (2021). Real-World Experience with Artificial Intelligence- Based Triage in Transferred Large Vessel Occlusion Stroke Patients. Cerebrovascular Diseases, 50(4), 450–455. https://doi. org/10.1159/000515320

    Murdoch, B. (2021). Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), 122. https://doi.org/10.1186/s12910-021-00687-3

    Murray, N. M., Unberath, M., Hager, G. D., & Hui, F. K. (2020). Artificial intelligence to diagnose ischemic stroke and identify large vessel occlusions: a systematic review. Journal of Neurointerventional Surgery, 12(2), 156–164. https://doi.org/10.1136/neurintsurg-2019-015135

    Nagendran, M., Chen, Y., Lovejoy, C. A., Gordon, A. C., Komorowski, M., Harvey, H., Topol, E. J., Ioannidis, J. P. A., Collins, G. S., & Maruthappu, M. (2020). Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 368. https://doi.org/10.1136/bmj.m689

    Nair, T., Precup, D., Arnold, D. L., & Arbel, T. (2020). Exploring uncertainty measures in deep networks for Multiple sclerosis lesion detection and segmentation. Medical Image Analysis, 59, 101557. https://doi.org/10.1016/j.media.2019.101557

    Nakao, T., Hanaoka, S., Nomura, Y., Sato, I., Nemoto, M., Miki, S., Maeda, E., Yoshikawa, T., Hayashi, N., & Abe, O. (2018). Deep neural network-based computer-assisted detection of cerebral aneurysms in MR angiography. Journal of Magnetic Resonance Imaging: JMRI, 47(4), 948–953. https://doi.org/10.1002/jmri.25842

    Nam, J. G., Kim, M., Park, J., Hwang, E. J., Lee, J. H., Hong, J. H., Goo, J. M., & Park, C. M. (2021). Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs. The European Respiratory Journal: Official Journal of the European Society for Clinical Respiratory Physiology, 57(5). https://doi.org/10.1183/13993003.03061-2020

    Narayana, P. A., Coronado, I., Sujit, S. J., Wolinsky, J. S., Lublin, F. D., & Gabr, R. E. (2020). Deep Learning for Predicting Enhancing Lesions in Multiple Sclerosis from Noncontrast MRI. Radiology, 294(2), 398–404. https://doi.org/10.1148/radiol.2019191061

    National Institute for Health and Care Excellence (NICE). (n.d.). Evidence standards framework for digital health technologies. Retrieved June 10, 2022, from https://www.nice.org.uk/corporate/ecd7

    Neisius, U., El-Rewaidy, H., Nakamori, S., Rodriguez, J., Manning, W. J., & Nezafat, R. (2019). Radiomic Analysis of Myocardial Native T1 Imaging Discriminates Between Hypertensive Heart Disease and Hypertrophic Cardiomyopathy. JACC. Cardiovascular Imaging, 12(10), 1946–1954. https://doi. org/10.1016/j.jcmg.2018.11.024

    Nelson, A., Herron, D., Rees, G., & Nachev, P. (2019). Predicting scheduled hospital attendance with artificial intelligence. Npj Digital Medicine, 2(1), 26. https://doi.org/10.1038/s41746-019-0103-3

    Nielsen, A., Hansen, M. B., Tietze, A., & Mouridsen, K. (2018). Prediction of Tissue Outcome and Assessment of Treatment Effect in Acute Ischemic Stroke Using Deep Learning. Stroke; a Journal of Cerebral Circulation, STROKEAHA.117.019740. https://doi.org/10.1161/STROKEAHA.117.019740

    Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns (New York, N.Y.), 2(10), 100347. https://doi.org/10.1016/j.patter.2021.100347

    O’Connor, S. D., & Bhalla, M. (2021). Should Artificial Intelligence Tell Radiologists Which Study to Read Next? [Review of Should Artificial Intelligence Tell Radiologists Which Study to Read Next?]. Radiology. Artificial Intelligence, 3(2), e210009. https://doi.org/10.1148/ryai.2021210009

    Office for Civil Rights (OCR). (2012, September 7). Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. HHS.gov; US Department of Health and Human Services. https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html

    Oktay, O., Schlemper, J., Le Folgoc, L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N. Y., Kainz, B., Glocker, B., & Rueckert, D. (2018). Attention U-Net: Learning Where to Look for the Pancreas. In arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1804.03999

    Olczak, J., Fahlberg, N., Maki, A., Razavian, A. S., Jilert, A., Stark, A., Sköldenberg, O., & Gordon, M. (2017). Artificial intelligence for analyzing orthopedic trauma radiographs. Acta Orthopaedica, 88(6), 581–586. https://doi.org/10.1080/17453674.2017.1344459

    Olthof, A. W., van Ooijen, P. M. A., & Rezazade Mehrizi, M. H. (2020). Promises of artificial intelligence in neuroradiology: a systematic technographic review. Neuroradiology, 62(10), 1265–1278. https://doi.org/10.1007/s00234-020-02424-w

    Omoumi, P., Ducarouge, A., Tournier, A., Harvey, H., Kahn, C. E., Jr, Louvet-de Verchère, F., Pinto Dos Santos, D., Kober, T., & Richiardi, J. (2021). To buy or not to buy-evaluating commercial AI solutions in radiology (the ECLAIR guidelines). European Radiology, 31(6), 3786–3796. https://doi.org/10.1007/ s00330-020-07684-x

    O’Neill, T. J., Xi, Y., Stehel, E., Browning, T., Ng, Y. S., Baker, C., & Peshock, R. M. (2021). Active Reprioritization of the Reading Worklist Using Artificial Intelligence Has a Beneficial Effect on the Turnaround Time for Interpretation of Head CT with Intracranial Hemorrhage. Radiology. Artificial Intelligence, 3(2), e200024. https://doi.org/10.1148/ryai.2020200024

    Ooi, S. K. G., Makmur, A., Soon, A. Y. Q., Fook-Chong, S., Liew, C., Sia, S. Y., Ting, Y. H., & Lim, C. Y. (2021). Attitudes toward artificial intelligence in radiology with learner needs assessment within radiology residency programmes: a national multi-programme survey. Singapore Medical Journal, 62(3), 126–134. https://doi.org/10.11622/smedj.2019141

    Pan, Y., Shi, D., Wang, H., Chen, T., Cui, D., Cheng, X., & Lu, Y. (2020). Automatic opportunistic osteoporosis screening using low-dose chest computed tomography scans obtained for lung cancer screening. European Radiology, 30(7), 4107–4116. https://doi.org/10.1007/s00330-020-06679-y

    Park, H. J., Kim, S. M., La Yun, B., Jang, M., Kim, B., Jang, J. Y., Lee, J. Y., & Lee, S. H. (2019). A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine, 98(3), e14146. https://doi.org/10.1097/MD.0000000000014146

    Price, W. N., II. (2019). Medical AI and Contextual Bias. https://papers.ssrn.com/abstract=3347890

    Puvanasunthararajah, S., Fontanarosa, D., Wille, M.-L., & Camps, S. M. (2021). The application of metal artifact reduction methods on computed tomography scans for radiotherapy applications: A literature review. Journal of Applied Clinical Medical Physics / American College of Medical Physics, 22(6), 198–223. https://doi.org/10.1002/acm2.13255

    Qin, Z. Z., Sander, M. S., Rai, B., Titahong, C. N., Sudrungrot, S., Laah, S. N., Adhikari, L. M., Carter, E. J., Puri, L., Codlin, A. J., & Creswell, J. (2019). Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems. Scientific Reports, 9(1), 15000. https://doi.org/10.1038/ s41598-019-51503-3

    Ramspek, C. L., Jager, K. J., Dekker, F. W., Zoccali, C., & van Diepen, M. (2021). External validation of prognostic models: what, why, how, when and where? Clinical Kidney Journal, 14(1), 49–58.

    Rao, B., Zohrabian, V., Cedeno, P., Saha, A., Pahade, J., & Davis, M. A. (2021). Utility of Artificial Intelligence Tool as a Prospective Radiology Peer Reviewer - Detection of Unreported Intracranial Hemorrhage. Academic Radiology, 28(1), 85–93. https://doi.org/10.1016/j.acra.2020.01.035

    Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association: JAMIA, 27(3), 491–497. https://doi.org/10.1093/jamia/ocz192

    Reddy, S., Rogers, W., Makinen, V.-P., Coiera, E., Brown, P., Wenzel, M., Weicken, E., Ansari, S., Mathur, P., Casey, A., & Kelly, B. (2021). Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100444

    Reyes, M., Meier, R., Pereira, S., Silva, C. A., Dahlweid, F.-M., von Tengg-Kobligk, H., Summers, R. M., & Wiest, R. (2020). On the Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities. Radiology. Artificial Intelligence, 2(3), e190043. https://doi.org/10.1148/ryai.2020190043

    Rezazade Mehrizi, M. H., van Ooijen, P., & Homan, M. (2021). Applications of artificial intelligence (AI) in diagnostic radiology: a technography study. European Radiology, 31(4), 1805–1811. https://doi.org/10.1007/s00330-020-07230-9

    Richardson, J. P., Smith, C., Curtis, S., Watson, S., Zhu, X., Barry, B., & Sharp, R. R. (2021). Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digital Medicine, 4(1), 140. https://doi.org/10.1038/s41746-021-00509-1

    Richardson, M. L., Garwood, E. R., Lee, Y., Li, M. D., Lo, H. S., Nagaraju, A., Nguyen, X. V., Probyn, L., Rajiah, P., Sin, J., Wasnik, A. P., & Xu, K. (2021). Noninterpretive Uses of Artificial Intelligence in Radiology. Academic Radiology, 28(9), 1225–1235. https://doi.org/10.1016/j.acra.2020.01.012

    Rockenbach, M. A. B. (2021, June 13). Multimodal AI in healthcare: Closing the gaps. CodeX. https://medium.com/codex/multimodal-ai-in-healthcare-1f5152e83be2

    Rodríguez-Ruiz, A., Krupinski, E., Mordang, J.-J., Schilling, K., Heywang-Köbrunner, S. H., Sechopoulos, I., & Mann, R. M. (2019). Detection of Breast Cancer with Mammography: Effect of an Artificial Intelligence Support System. Radiology, 290(2), 305–314. https://doi.org/10.1148/radiol.2018181371

    Rodriguez-Ruiz, A., Lång, K., Gubern-Merida, A., Broeders, M., Gennaro, G., Clauser, P., Helbich, T. H., Chevalier, M., Tan, T., Mertelmeier, T., Wallis, M. G., Andersson, I., Zackrisson, S., Mann, R. M., & Sechopoulos, I. (2019). Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists. Journal of the National Cancer Institute, 111(9), 916–922. https://doi.org/10.1093/jnci/djy222

    Santomartino, S. M., & Yi, P. H. (2022). Systematic Review of Radiologist and Medical Student Attitudes on the Role and Impact of AI in Radiology. Academic Radiology. https://doi.org/10.1016/j.acra.2021.12.032

    Schemmel, A., Lee, M., Hanley, T., Pooler, B. D., Kennedy, T., Field, A., Wiegmann, D., & Yu, J.-P. J. (2016). Radiology Workflow Disruptors: A Detailed Analysis. Journal of the American College of Radiology: JACR, 13(10), 1210–1214. https://doi.org/10.1016/j.jacr.2016.04.009

    Schreiber-Zinaman, J., & Rosenkrantz, A. B. (2017). Frequency and reasons for extra sequences in clinical abdominal MRI examinations. Abdominal Radiology (New York), 42(1), 306–311. https://doi.org/10.1007/s00261-016-0877-6

    Scott, I. A., Carter, S. M., & Coiera, E. (2021). Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100450

    Seah, J. C. Y., Tang, C. H. M., Buchlak, Q. D., Holt, X. G., Wardman, J. B., Aimoldin, A., Esmaili, N., Ahmad, H., Pham, H., Lambert, J. F., Hachey, B., Hogg, S. J. F., Johnston, B. P., Bennett, C., Oakden-Rayner, L., Brotchie, P., & Jones, C. M. (2021). Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multireader multicase study. The Lancet. Digital Health, 3(8), e496–e506. https://doi.org/10.1016/S2589-7500(21)00106-0

    Sectra Amplifier Marketplace. (2021, July 5). Sectra Medical. https://medical.sectra.com/product/sectra-amplifier-marketplace/

    Sermesant, M., Delingette, H., Cochet, H., Jaïs, P., & Ayache, N. (2021). Applications of artificial intelligence in cardiovascular imaging. Nature Reviews. Cardiology, 18(8), 600–609. https://doi.org/10.1038/s41569-021-00527-2

    Setio, A. A. A., Traverso, A., de Bel, T., Berens, M. S. N., van den Bogaard, C., Cerello, P., Chen, H., Dou, Q., Fantacci, M. E., Geurts, B., Gugten, R. van der, Heng, P. A., Jansen, B., de Kaste, M. M. J., Kotov, V., Lin, J. Y.-H., Manders, J. T. M. C., Sóñora-Mengana, A., García-Naranjo, J. C., … Jacobs, C. (2017). Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Medical Image Analysis, 42, 1–13. https://doi.org/10.1016/j.media.2017.06.015

    Seyyed-Kalantari, L., Zhang, H., McDermott, M. B. A., Chen, I. Y., & Ghassemi, M. (2021). Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature Medicine, 27(12), 2176–2182. https://doi.org/10.1038/s41591-021-01595-0

    Shan, H., Padole, A., Homayounieh, F., Kruger, U., Khera, R. D., Nitiwarangkul, C., Kalra, M. K., & Wang, G. (2019). Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction. Nature Machine Intelligence, 1(6), 269–276. https://doi.org/10.1038/s42256-019-0057-9

    Sharma, K., Rupprecht, C., Caroli, A., Aparicio, M. C., Remuzzi, A., Baust, M., & Navab, N. (2017). Automatic Segmentation of Kidneys using Deep Learning for Total Kidney Volume Quantification in Autosomal Dominant Polycystic Kidney Disease. Scientific Reports, 7(1), 2049. https://doi.org/10.1038/s41598-017-01779-0

    Shelmerdine, S. C., Arthurs, O. J., Denniston, A., & Sebire, N. J. (2021). Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health & Care Informatics, 28(1). https://doi.org/10.1136/bmjhci-2021-100385

    Shinagare, A. B., Ip, I. K., Abbett, S. K., Hanson, R., Seltzer, S. E., & Khorasani, R. (2014). Inpatient imaging utilization: trends of the past decade. AJR. American Journal of Roentgenology, 202(3), W277–W283. https://doi.org/10.2214/AJR.13.10986

    Shlobin, N. A., Baig, A. A., Waqas, M., Patel, T. R., Dossani, R. H., Wilson, M., Cappuzzo, J. M., Siddiqui, A. H., Tutino, V. M., & Levy, E. I. (2022). Artificial Intelligence for Large-Vessel Occlusion Stroke: A Systematic Review. World Neurosurgery, 159, 207–220.e1. https://doi.org/10.1016/j.wneu.2021.12.004

    Silberg, J., & Manyika, J. (2019, June 6). Tackling bias in artificial intelligence (and in humans). McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

    Singh, S., Kalra, M. K., Hsieh, J., Licato, P. E., Do, S., Pien, H. H., & Blake, M. A. (2010). Abdominal CT: comparison of adaptive statistical iterative and filtered back projection reconstruction techniques. Radiology, 257(2), 373–383. https://doi.org/10.1148/radiol.10092212

    Smith-Bindman, R., Kwan, M. L., Marlow, E. C., Theis, M. K., Bolch, W., Cheng, S. Y., Bowles, E. J. A., Duncan, J. R., Greenlee, R. T., Kushi, L. H., Pole, J. D., Rahm, A. K., Stout, N. K., Weinmann, S., & Miglioretti, D. L. (2019). Trends in Use of Medical Imaging in US Health Care Systems and in Ontario, Canada, 2000-2016. JAMA: The Journal of the American Medical Association, 322(9), 843–856. https://doi.org/10.1001/jama.2019.11456

    Sutherland, G., Russell, N., Gibbard, R., & Dobrescu, A. (n.d.). The value of radiology, part II. https://car.ca/wp-content/uploads/2019/07/value-of-radiology-part-2-en.pdf

    Tamada, D., Kromrey, M.-L., Ichikawa, S., Onishi, H., & Motosugi, U. (2020). Motion Artifact Reduction Using a Convolutional Neural Network for Dynamic Contrast Enhanced MR Imaging of the Liver. Magnetic Resonance in Medical Sciences: MRMS: An Official Journal of Japan Society of Magnetic Resonance in Medicine, 19(1), 64–76. https://doi.org/10.2463/mrms.mp.2018-0156

    The Medical Futurist. (n.d.). The Medical Futurist. Retrieved February 23, 2022, from https://medicalfuturist.com/fda-approved-ai-based-algorithms/

    The Nuance AI Marketplace for Diagnostic Imaging. (n.d.). https://www.nuance.com/content/dam/nuance/en_us/collateral/healthcare/data-sheet/ds-ai-marketplace-for-diagnostic-imaging-en-us.pdf

    Thodberg, H. H., Kreiborg, S., Juul, A., & Pedersen, K. D. (2009). The BoneXpert method for automated determination of skeletal maturity. IEEE Transactions on Medical Imaging, 28(1), 52–66. https://doi.org/10.1109/TMI.2008.926067

    Thomas, K. A., Kidziński, Ł., Halilaj, E., Fleming, S. L., Venkataraman, G. R., Oei, E. H. G., Gold, G. E., & Delp, S. L. (2020). Automated Classification of Radiographic Knee Osteoarthritis Severity Using Deep Neural Networks. Radiology. Artificial Intelligence, 2(2), e190065. https://doi.org/10.1148/ryai.2020190065

    Towards trustable machine learning. (2018). Nature Biomedical Engineering, 2(10), 709–710. https://doi.org/10.1038/s41551-018-0315-x

    Trinidad, M. G., Platt, J., & Kardia, S. L. R. (2020). The public’s comfort with sharing health data with third-party commercial companies. Humanities and Social Sciences Communications, 7(1), 1–10. https://doi.org/10.1057/s41599-020-00641-5

    Trivedi, H., Mesterhazy, J., Laguna, B., Vu, T., & Sohn, J. H. (2018). Automatic Determination of the Need for Intravenous Contrast in Musculoskeletal MRI Examinations Using IBM Watson’s Natural Language Processing Algorithm. Journal of Digital Imaging, 31(2), 245–251. https://doi.org/10.1007/s10278-017-0021-3

    Tsao, D. N. (2020, July 27). AI in medical diagnostics 2020-2030: Image recognition, players, clinical applications, forecasts: IDTechEx. https://www.idtechex.com/en/research-report/ai-in-medical-diagnostics-2020-2030-image-recognition-players-clinical-applications-forecasts/766

    Tucci, V., Saary, J., & Doyle, T. E. (2022). Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. Journal of Medical Artificial Intelligence, 5, 4–4. https://doi.org/10.21037/jmai-21-25

    Ueda, D., Yamamoto, A., Nishimori, M., Shimono, T., Doishita, S., Shimazaki, A., Katayama, Y., Fukumoto, S., Choppin, A., Shimahara, Y., & Miki, Y. (2019). Deep Learning for MR Angiography: Automated Detection of Cerebral Aneurysms. Radiology, 290(1), 187–194. https://doi.org/10.1148/radiol.2018180901

    Urakawa, T., Tanaka, Y., Goto, S., Matsuzawa, H., Watanabe, K., & Endo, N. (2019). Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network. Skeletal Radiology, 48(2), 239–244. https://doi.org/10.1007/s00256-018-3016-3

    van Leeuwen, K. G., Schalekamp, S., Rutten, M. J. C. M., van Ginneken, B., & de Rooij, M. (2021). Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. European Radiology, 31(6), 3797–3804. https://doi.org/10.1007/s00330-021-07892-z

    Vayena, E., & Blasimme, A. (2017). Biomedical Big Data: New Models of Control Over Access, Use and Governance. Journal of Bioethical Inquiry, 14(4), 501–513. https://doi.org/10.1007/s11673-017-9809-6

    Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689

    Wang, J., Yang, F., Liu, W., Sun, J., Han, Y., Li, D., Gkoutos, G. V., Zhu, Y., & Chen, Y. (2020). Radiomic Analysis of Native T1 Mapping Images Discriminates Between MYH7 and MYBPC3-Related Hypertrophic Cardiomyopathy. Journal of Magnetic Resonance Imaging: JMRI, 52(6), 1714–1721. https://doi.org/10.1002/jmri.27209

    Wang, S.-H., Tang, C., Sun, J., Yang, J., Huang, C., Phillips, P., & Zhang, Y.-D. (2018). Multiple Sclerosis Identification by 14-Layer Convolutional Neural Network With Batch Normalization, Dropout, and Stochastic Pooling. Frontiers in Neuroscience, 12, 818. https://doi.org/10.3389/fnins.2018.00818

    Watanabe, A. T., Lim, V., Vu, H. X., Chim, R., Weise, E., Liu, J., Bradley, W. G., & Comstock, C. E. (2019). Improved Cancer Detection Using Artificial Intelligence: a Retrospective Evaluation of Missed Cancers on Mammography. Journal of Digital Imaging, 32(4), 625–637. https://doi.org/10.1007/s10278-019-00192-5

    Weikert, T., Francone, M., Abbara, S., Baessler, B., Choi, B. W., Gutberlet, M., Hecht, E. M., Loewe, C., Mousseaux, E., Natale, L., Nikolaou, K., Ordovas, K. G., Peebles, C., Prieto, C., Salgado, R., Velthuis, B., Vliegenthart, R., Bremerich, J., & Leiner, T. (2021). Machine learning in cardiovascular radiology: ESCR position statement on design requirements, quality assessment, current applications, opportunities, and challenges. European Radiology, 31(6), 3909–3922. https://doi.org/10.1007/s00330-020-07417-0

    Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf

    WHO operational handbook on tuberculosis. Module 2: Screening – Systematic screening for tuberculosis disease. (n.d.). Retrieved June 19, 2022, from https://www.who.int/publications-detail-redirect/9789240022614

    Willemink, M. J., & Noël, P. B. (2019). The evolution of image reconstruction for CT-from filtered back projection to artificial intelligence. European Radiology, 29(5), 2185–2195. https://doi.org/10.1007/s00330-018-5810-7

    Winder, M., Owczarek, A. J., Chudek, J., Pilch-Kowalczyk, J., & Baron, J. (2021). Are We Overdoing It? Changes in Diagnostic Imaging Workload during the Years 2010-2020 including the Impact of the SARS-CoV-2 Pandemic. Healthcare (Basel, Switzerland), 9(11). https://doi.org/10.3390/healthcare9111557

    Wong, T. T., Kazam, J. K., & Rasiej, M. J. (2019). Effect of Analytics-Driven Worklists on Musculoskeletal MRI Interpretation Times in an Academic Setting. AJR. American Journal of Roentgenology, 1–5. https://doi.org/10.2214/AJR.18.20434

    Wu, B., Zhou, Z., Wang, J., & Wang, Y. (2018). Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1109–1113. https://doi.org/10.1109/ISBI.2018.8363765

    Wu, E., Wu, K., Daneshjou, R., Ouyang, D., Ho, D. E., & Zou, J. (2021). How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27(4), 582–584. https://doi.org/10.1038/s41591-021-01312-x

    Wu, G.-G., Zhou, L.-Q., Xu, J.-W., Wang, J.-Y., Wei, Q., Deng, Y.-B., Cui, X.-W., & Dietrich, C. F. (2019). Artificial intelligence in breast ultrasound. World Journal of Radiology, 11(2), 19–26. https://doi.org/10.4329/wjr.v11.i2.19

    Yala, A., Schuster, T., Miles, R., Barzilay, R., & Lehman, C. (2019). A Deep Learning Model to Triage Screening Mammograms: A Simulation Study. Radiology, 293(1), 38–46. https://doi.org/10.1148/radiol.2019182908

    Yasaka, K., Akai, H., Kunimatsu, A., Abe, O., & Kiryu, S. (2018). Liver Fibrosis: Deep Convolutional Neural Network for Staging by Using Gadoxetic Acid-enhanced Hepatobiliary Phase MR Images. Radiology, 287(1), 146–155. https://doi.org/10.1148/radiol.2017171928

    Yeung, K. (2018). A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework. https://papers.ssrn.com/abstract=3286027

    Yoo, Y., Tang, L. Y. W., Li, D. K. B., Metz, L., Kolind, S., Traboulsee, A. L., & Tam, R. C. (2019). Deep learning of brain lesion patterns and user-defined clinical and MRI features for predicting conversion to multiple sclerosis from clinically isolated syndrome. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 7(3), 250–259. https://doi.org/10.1080/21681163.2017.1356750

    Yu, A. C., Mohajer, B., & Eng, J. (2022). External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiology. Artificial Intelligence, 4(3), e210064. https://doi.org/10.1148/ryai.210064

    Yu, J.-P. J., Kansagra, A. P., & Mongan, J. (2014). The radiologist’s workflow environment: evaluation of disruptors and potential implications. Journal of the American College of Radiology: JACR, 11(6), 589–593. https://doi.org/10.1016/j.jacr.2013.12.026

    Yusuf, M., Atal, I., Li, J., Smith, P., Ravaud, P., Fergie, M., Callaghan, M., & Selfe, J. (2020). Reporting quality of studies using machine learning models for medical diagnosis: a systematic review. BMJ Open, 10(3), e034568. https://doi.org/10.1136/bmjopen-2019-034568

    Yu, Y., Xie, Y., Thamm, T., Gong, E., Ouyang, J., Christensen, S., Marks, M. P., Lansberg, M. G., Albers, G. W., & Zaharchuk, G. (2021). Tissue at Risk and Ischemic Core Estimation Using Deep Learning in Acute Stroke. AJNR. American Journal of Neuroradiology, 42(6), 1030–1037. https://doi.org/10.3174/ajnr.A7081

    Yu, Y., Xie, Y., Thamm, T., Gong, E., Ouyang, J., Huang, C., Christensen, S., Marks, M. P., Lansberg, M. G., Albers, G. W., & Zaharchuk, G. (2020). Use of Deep Learning to Predict Final Ischemic Stroke Lesions From Initial Magnetic Resonance Imaging. JAMA Network Open, 3(3), e200772. https://doi.org/10.1001/jamanetworkopen.2020.0772

    Zhang, Y., & Yu, H. (2018). Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography. IEEE Transactions on Medical Imaging, 37(6), 1370–1381. https://doi.org/10.1109/TMI.2018.2823083

    Zhao, B., Liu, Z., Ding, S., Liu, G., Cao, C., & Wu, H. (2022). Motion artifact correction for MR images based on convolutional neural network. Optoelectronics Letters, 18(1), 54–58. https://doi.org/10.1007/s11801-022-1084-z

    Zhao, J., Huang, Y., Song, Y., Xie, D., Hu, M., Qiu, H., & Chu, J. (2020). Diagnostic accuracy and potential covariates for machine learning to identify IDH mutations in glioma patients: evidence from a meta-analysis. European Radiology, 30(8), 4664–4674. https://doi.org/10.1007/s00330-020-06717-9

    Zhou, C., Ding, C., Wang, X., Lu, Z., & Tao, D. (2020). One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society. https://doi.org/10.1109/TIP.2020.2973510

    Zicari, R. V., Brodersen, J., Brusseau, J., Düdder, B., Eichhorn, T., Ivanov, T., Kararigas, G., Kringen, P., McCullough, M., Möslein, F., Mushtaq, N., Roig, G., Stürtz, N., Tolle, K., Tithi, J. J., van Halem, I., & Westerlund, M. (2021). Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Transactions on Technology and Society, 1–1. https://doi.org/10.1109/TTS.2021.3066209