Deliverable 3 – Evaluate Research and Data

Scenario
You are a first-year graduate student taking a graduate course on research and writing. In this assignment, your professor has asked you to evaluate the research and data in two studies related to a research question you are interested in.

Instructions
In a paper for your professor, do the following:

  • Find two scholarly research articles in the Rasmussen library (see uploaded) related to a research question you are interested in. Indicate the research question. Be sure to provide APA citations and provide the library permalinks for the two articles.
  • Evaluate how data was used in these studies:
  • Is the data credible and reliable? Support your answer.
  • Is the data well documented in the paper? Support your answer.
  • Evaluate the data analysis and interpretation. Does the data support the hypothesis and help answer the research question? Support your answer.
  • Discuss the ethical issues that may arise as you conduct your research study. How will you address those issues?
  • Resources
    For writing assistance, please visit the Rasmussen University Writing Guide.
    For help with APA, visit the Rasmussen University APA Guide.
    Library databases such as the following are great resources for this project:
    Health Sciences:
      • CINAHL Plus
      • Health Sciences and Nursing via ProQuest
      • Medline
    Business:
      • Business Source Complete
      • Business via ProQuest
    Another database that you may be interested in knowing about is ProQuest Dissertations & Theses Global, where you can view original research, research designs, data-gathering techniques, etc.

European Society of Radiology (ESR)
Insights into Imaging (2022) 13:107
https://doi.org/10.1186/s13244-022-01247-y

STATEMENT

Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology
European Society of Radiology (ESR)*

Abstract
A survey was conducted among the members of the European Society of Radiology (ESR) regarding radiologists' current practical clinical experience with Artificial Intelligence (AI)-powered tools. A total of 690 radiologists completed the survey. Among these were 276 radiologists from 229 institutions in 32 countries who had practical clinical experience with an AI-based algorithm and formed the basis of this study. The respondents with clinical AI experience included 143 radiologists (52%) from academic institutions, 102 (37%) from regional hospitals, and 31 (11%) from private practice. The use case scenarios of the AI algorithms were mainly related to diagnostic interpretation, image post-processing, and prioritisation of workflow. Technical difficulties with integration of AI-based tools into the workflow were experienced by only 49 respondents (17.8%). Of the 185 radiologists who used AI-based algorithms for diagnostic purposes, 140 (75.7%) considered the results of the algorithms generally reliable. The use of a diagnostic algorithm was mentioned in the report by 64 respondents (34.6%) and disclosed to patients by 32 (17.3%). Only 42 (22.7%) experienced a significant reduction of their workload, whereas 129 (69.8%) found that there was no such effect. Of the 111 respondents who used AI-based algorithms for clinical workflow prioritisation, 26 (23.4%) considered the algorithms very helpful for reducing the workload of the medical staff, whereas the others found them only moderately helpful (62.2%) or not helpful at all (14.4%). Only 92 (13.3%) of the total 690 respondents indicated that they intended to acquire AI tools. In summary, although the assistance of AI algorithms was found to be reliable for different use case scenarios, the majority of radiologists experienced no reduction of practical clinical workload.

Keywords: Professional issues, Artificial intelligence in imaging, Artificial intelligence and workload, Artificial intelligence in radiology

© The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Key points

• Artificial Intelligence (AI) algorithms are being used
for a large spectrum of use case scenarios in clinical
radiology in Europe, including assistance with inter-
pretive tasks, image post-processing, and prioritisa-
tion in the workflow.

• Most users considered AI algorithms generally reli-
able and experienced no major problems with techni-
cal integration in their daily practice.

• Only a minority of users experienced a reduction of
the workload of the radiological medical staff due to
the AI algorithms.

Background and objectives
Digital imaging is naturally predisposed to benefit from the rapid and exciting progress in data science. The increase of imaging examinations and the associated diagnostic data volume have resulted in a mismatch
*Correspondence: communications@myesr.org
European Society of Radiology (ESR), Am Gestade 1, 1010 Vienna, Austria

between the radiologic workforce and workload in many European countries. In an opinion survey conducted in 2018 among the members of the European Society of Radiology (ESR), many respondents expected that algorithms based on artificial intelligence (AI), and particularly machine learning, could reduce radiologists' workload [1]. Although a growing number of AI-based algorithms has become available for many radiological use case scenarios, most published studies indicate that only very few of these tools help to reduce radiologists' workload, whereas the majority result in an increased or unchanged workload [2]. Furthermore, a recent analysis of the literature found that the available scientific evidence of the clinical efficacy of 100 commercially available CE-marked products was quite limited, leading to the conclusion that AI in radiology is still in its infancy [3]. The purpose of the present survey was to obtain an impression of the current practical clinical experience of radiologists from different European countries with AI-powered tools.

Methods
A survey was created by the members of the ESR eHealth and Informatics Subcommittee and was intentionally kept brief so that it could be completed in a few minutes. A few demographic questions covered the country, the type of institution (i.e. academic department, regional hospital, or private practice), and the main field of radiological practice, as summarised in Tables 1, 2 and 3. For the more specific questions about the use of AI-based algorithms, it was clearly stated that the answers were intended to reflect experience from clinical routine rather than research and testing purposes. The questions related to the use of AI addressed the respondents' working experience with certified AI-based algorithms, possible difficulties in integrating these algorithms into the IT system, and the different use case scenarios for which AI-based algorithms were used in clinical routine, mainly distinguishing tools aimed at facilitating the diagnostic interpretation process itself (questions shown in Fig. 1) from those aimed at facilitating the prioritisation of examinations in the workflow. Specific questions addressed the technical integration of the algorithms (question in Table 4); radiologists' confidence in the diagnostic performance (question in Table 5); quality control mechanisms to evaluate diagnostic accuracy (questions in Tables 6, 7 and 8); communication of the use of diagnosis-related algorithms to patients or in the radiology reports (questions in Tables 9 and 10); and the usefulness of algorithms for reducing radiologists' workload (questions in Tables 11 and 12). Respondents also had the opportunity to offer free-text remarks regarding their use of AI-based tools. Respondents who did not use AI-based algorithms in clinical practice were asked to skip all the questions related to clinical AI use and to proceed directly to the last question about the acquisition of AI-based algorithms, so that the opinions of all participating radiologists were taken into consideration for the final question about their intentions regarding acquisition of such tools (question in Fig. 2).

The survey was created through the ESR central office using the SurveyMonkey platform (SurveyMonkey Inc., San Mateo, CA, USA), and 27,700 radiologist members of the ESR were invited by e-mail in January 2022 to participate. The survey was closed after a second reminder in March 2022. The answers of the respondents were collected and analysed using Excel (Microsoft, Redmond, WA, USA).
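The tabulation described above, answer counts per question expressed as percentages, is straightforward to reproduce. A minimal sketch in Python, using made-up answer data in the survey's "yes/no/skipped" shape rather than the actual export:

```python
from collections import Counter

def tabulate(responses, total=None):
    """Count each answer and express it as a percentage of `total`
    (defaults to the number of responses given)."""
    counts = Counter(responses)
    total = total or len(responses)
    return {answer: (n, round(100 * n / total, 1)) for answer, n in counts.items()}

# Hypothetical answers to one yes/no question (not the actual ESR data)
answers = ["yes"] * 140 + ["no"] * 31 + ["skipped"] * 14
table = tabulate(answers)
print(table["yes"])                  # → (140, 75.7)

# The overall response rate is computed the same way
print(round(100 * 690 / 27700, 1))   # → 2.5
```

The same pattern, grouped by country, yields the per-country proportions reported in Table 1.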

Results
A total of 690 ESR radiologist members from 44 countries responded to the survey, for a response rate of 2.5%. The distribution per country and the proportion of respondents with practical clinical experience with AI-based algorithms per country are given in Table 1.

The 276 respondents with practical clinical experience with AI-based algorithms were affiliated to 229 institutions in 32 countries; their answers formed the main basis of this study. Table 2 shows that 143 (52%) of the respondents with practical clinical experience with AI algorithms were affiliated to academic institutions, whereas 102 (37%) worked in regional hospitals and 31 (11%) in private practice.

Table 3 characterises the same group of respondents as in Table 2 regarding their main field of activity, showing that a wide range of subspecialties was represented in the survey and that abdominal radiology, neuroradiology, general radiology, and emergency radiology together accounted for half of the respondents. A detailed analysis of the results according to subspecialty was beyond the scope of the study because of the relatively small number of resulting groups.

The experience regarding technical integration of the software algorithms into the IT system or workflow is summarised in Table 4, showing that only 17.8% of respondents reported difficulties with integration of these tools, whereas 44.5% observed no such difficulties; 37.7% of respondents did not answer this question.

Algorithms were used in clinical practice either for assistance in interpretation or for prioritisation of workflow. An overview of the scenarios for which AI-powered algorithms were used by the respondents is given in Fig. 1.


Table 1 Distribution of all 690 respondents by country and proportion of radiologists with practical clinical experience with AI algorithms

Country              Respondents   With clinical AI experience (n)   With clinical AI experience (%)
Italy                71            23                                32
Spain                64            19                                30
UK                   60            23                                38
Germany              50            23                                46
Netherlands          50            35                                70
Sweden               29            14                                48
Denmark              27            15                                56
Turkey               27            3                                 11
Norway               26            12                                46
Switzerland          27            14                                54
France               25            12                                48
Belgium              23            13                                57
Austria              21            12                                57
Greece               21            5                                 24
Portugal             17            5                                 29
Romania              16            4                                 25
Ukraine              13            3                                 23
Croatia              11            4                                 36
Russian Fed.         11            4                                 36
Bulgaria             10            0                                 0
Poland               10            4                                 40
Finland              7             4                                 57
Hungary              7             3                                 43
Serbia               7             1                                 14
Slovenia             7             3                                 43
Slovakia             6             5                                 83
Ireland              5             2                                 40
Lithuania            5             2                                 40
Bos. & Herzegovina   4             0                                 0
Czech Republic       4             3                                 75
Israel               4             2                                 50
Latvia               4             0                                 0
Armenia              3             0                                 0
Albania              2             0                                 0
Azerbaijan           2             0                                 0
Belarus              2             0                                 0
Estonia              2             2                                 100
Georgia              2             0                                 0
Kazakhstan           2             0                                 0
Luxembourg           2             1                                 50
Cyprus               1             0                                 0
Iceland              1             0                                 0
Kosovo               1             1                                 100
Uzbekistan           1             0                                 0
Total                690           276


Use of algorithms for assistance in diagnostic interpretation
Among the 276 respondents who shared their practical experience with AI-based tools, a total of 185 (67%) reported clinical experience with one or more integrated algorithms for routine diagnostic tasks. As seen in Fig. 1 there were different use case scenarios, the commonest being the detection or marking of specific findings. The free-text remarks of the respondents covered a large range of pathologies in practically all clinical fields and with almost all imaging modalities. Typical examples of pathologies were pulmonary emboli and parenchymal nodules, cerebral haemorrhage and reduced cerebrovascular blood flow, or colonic polyps on CT. Other tasks included the detection of traumatic lesions, e.g. the presence of bone fractures on conventional radiographs, or the calculation of bone age. The second most common diagnostic scenario was assistance with post-processing (e.g. using AI-based tools for image reconstruction or quantitative evaluation of structural or functional abnormalities), followed by primary interpretation (i.e. potentially replacing the radiologist), assistance with differential

Table 2 Respondents with practical clinical experience with AI-based algorithms: distribution of origin by country and type of institution

Country          Respondents   Institutions   Academic departments   Private practice   Regional hospitals
Netherlands      35            20             16                     0                  19
Germany          23            21             14                     3                  6
Italy            23            21             13                     0                  10
UK               23            22             7                      2                  14
Spain            19            16             14                     1                  4
Denmark          15            7              11                     1                  3
Switzerland      14            13             6                      6                  2
Sweden           14            14             7                      1                  6
Belgium          13            9              5                      1                  7
Austria          12            11             7                      1                  4
France           12            11             5                      5                  2
Norway           12            9              6                      0                  6
Greece           5             5              2                      2                  1
Portugal         5             4              0                      4                  1
Slovakia         5             5              2                      2                  1
Croatia          4             4              1                      1                  2
Finland          4             3              3                      0                  1
Poland           4             3              3                      0                  1
Romania          4             2              2                      0                  2
Russian Fed.     4             4              3                      0                  1
Czech Republic   3             3              1                      0                  2
Hungary          3             3              2                      0                  1
Slovenia         3             3              2                      0                  1
Turkey           3             3              3                      0                  0
Ukraine          3             2              2                      1                  0
Estonia          2             2              1                      0                  1
Ireland          2             2              1                      0                  1
Israel           2             2              2                      0                  0
Lithuania        2             2              0                      0                  2
Kosovo           1             1              1                      0                  0
Luxembourg       1             1              0                      0                  1
Serbia           1             1              1                      0                  0
Total            276           229            143 (52%)              31 (11%)           102 (37%)


diagnosis, e.g. by facilitation of literature search, and quality control.

Although a detailed analysis of all the different diagnostic use case scenarios was beyond the scope of this survey, the respondents' answers to the specific survey questions are shown in Tables 5, 6, 7, 8, 9, 10 and 11. Because some respondents skipped or incompletely answered some questions, the number of yes/no answers per question was not complete. As shown in Table 5, most respondents (75.7%) found the results provided by the algorithms generally reliable.

A significant number of respondents declared that they used quality assurance mechanisms regarding the diagnostic performance of the algorithms. These included keeping records of diagnostic discrepancies between the radiologist and the algorithms (44.4%), establishing receiver-operator characteristic (ROC) curves of diagnostic accuracy based on the radiologist's diagnosis (34.1%), and/or ROC curves based on the final medical record (30.3%) (Tables 6, 7 and 8).
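A quality-assurance mechanism of the kind just described, a ROC curve built from the algorithm's confidence scores against the radiologist's diagnosis as the reference, can be sketched in a few lines of Python. The scores and labels below are invented for illustration; a real monitoring setup would read them from the recorded discrepancy logs:

```python
def roc_points(scores, labels):
    """ROC curve as (FPR, TPR) pairs, one per distinct score threshold.
    labels: 1 = finding confirmed by the reference diagnosis, 0 = not."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def auc(pts):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Hypothetical algorithm confidence scores and reference labels for six findings
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(roc_points(scores, labels)), 2))  # → 0.89
```

In practice a library such as scikit-learn would be used instead, but the bookkeeping is the same: sweep a threshold over the scores and accumulate true- and false-positive rates.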

The use of a diagnostic algorithm was disclosed to patients by 17.3% of the respondents but mentioned in the report by 34.6% (Tables 9 and 10).

Only a minority of 22.7% of respondents who used AI-based algorithms for diagnostic purposes experienced a reduction of their workload, whereas 69.8% reported

Table 3 Respondents with practical clinical experience with AI-based algorithms: main field of activity/subspecialty

Field of practice           Respondents (n)   %
Abdominal radiology         45                16.3
Neuroradiology              45                16.3
General radiology           39                14.1
Chest radiology             32                11.6
Cardiovascular radiology    24                8.7
Musculoskeletal radiology   23                8.3
Oncologic imaging           23                8.3
Breast radiology            17                6.2
Emergency radiology         10                3.6
Paediatric radiology        8                 2.9
Urogenital radiology        6                 2.2
Head and neck radiology     4                 1.5
Total                       276               100

Fig. 1 Which type of scenario (use case) was addressed by the AI algorithm(s) used in clinical routine? The answers of all 276 respondents with practical clinical AI experience are shown, including the number of respondents using one or more algorithms for assistance in diagnostic interpretation and/or workflow prioritisation:

  • Assistance during interpretation (e.g. detecting/marking of specific findings like nodules, emboli etc.): 142 (51.5%)
  • Assistance for post-processing (e.g. image reconstruction, quantitative evaluation): 79 (28.6%)
  • Primary interpretation (= replacing the radiologist): 19 (7%)
  • Assistance during interpretation (e.g. access to literature, facilitating differential diagnosis etc.): 14 (5%)
  • Quality control: 11 (4%)
  • Workflow prioritisation: 111 (40%)

Table 4 Respondents with practical clinical experience with AI-based algorithms: Have there been any major problems with integration of AI-based algorithms into your IT system/workflow?

Answer    Respondents (n)   %
Yes       49                17.8
No        123               44.5
Skipped   104               37.7
Total     276               100

Table 5 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were the findings of the algorithm(s) considered to be reliable?

Answer    Respondents (n)   %
Yes       140               75.7
No        31                16.8
Skipped   14                7.5
Total     185               100


that there was no reduction effect on their workload
(Table 11).

Use of algorithms for prioritisation of workflow
Among the 276 respondents who had practical experience with AI-based tools, 111 (40%) reported experience with algorithms for prioritisation of image sets in their clinical workflow. As shown in Table 12, the prioritisation algorithms were considered very helpful for reducing the workload of the medical staff by 23.4% of the respondents who used them, whereas the other users found them only moderately helpful (62.2%) or not helpful at all (14.4%).

Intentions of all respondents regarding the acquisition of an AI-based algorithm
All participants of the survey, regardless of their practical clinical experience, were given the opportunity to answer the question whether they intended to acquire a certified AI-based software. Of the 690 participants, 92 (13.3%) answered "yes", 363 (52.6%) answered "no", and 235 (34.1%) did not answer this question. Figure 2

Table 6 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were discrepancies between the software and the radiologist recorded?

Answer    Respondents (n)   %
Yes       82                44.4
No        89                48.1
Skipped   14                7.5
Total     185               100

Table 7 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the diagnostic accuracy (ROC curves) supervised on a regular basis in comparison with the radiologist's diagnosis?

Answer    Respondents (n)   %
Yes       63                34.1
No        108               58.4
Skipped   14                7.5
Total     185               100

Table 8 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the diagnostic accuracy (ROC curves) supervised on a regular basis in comparison with the final diagnosis in the medical record?

Answer    Respondents (n)   %
Yes       56                30.3
No        115               62.2
Skipped   14                7.5
Total     185               100

Table 9 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Were patients informed that an AI software was used to reach the diagnosis?

Answer    Respondents (n)   %
Yes       32                17.3
No        139               75.2
Skipped   14                7.5
Total     185               100

Table 10 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Was the use of an AI software to reach the diagnosis mentioned in the report?

Answer    Respondents (n)   %
Yes       64                34.6
No        107               57.9
Skipped   14                7.5
Total     185               100

Table 11 Experience of 185 respondents with AI-based algorithms for clinical diagnostic interpretive tasks: Has (have) the algorithm(s) used for diagnostic assistance proven to be helpful in reducing the workload for the medical staff?

Answer    Respondents (n)   %
Yes       42                22.7
No        129               69.8
Skipped   14                7.5
Total     185               100

Table 12 Experience of 111 respondents with AI-based algorithms for clinical workflow prioritisation: Has the algorithm proven to be helpful in reducing the workload for the medical staff?

Answer               Respondents (n)   %
Not at all helpful   16                14.4
Moderately helpful   69                62.2
Very helpful         26                23.4
Total                111               100


summarises the reasons given by participants who did not intend to acquire AI-based algorithms for their clinical use.

Discussion
While the previous survey on AI [1] addressed the expectations of ESR members regarding the impact of AI on radiology, the present survey was intended to obtain an overview of current practical clinical experience with AI-based algorithms. Although the respondents with practical clinical experience in this survey represent only 1% of the ESR membership, their proportion among all respondents varied greatly among countries. The geographical distribution of the 276 radiologists who shared their experience with such tools in clinical practice shows that the majority was affiliated to institutions in Western and Central Europe or in Scandinavia. Half of all respondents with practical clinical experience with AI tools were affiliated to academic institutions, whereas the other half practised radiology in regional hospitals or in private services. Since it is likely that the respondents in this survey were radiologists with a special interest in AI-based algorithms, it cannot be assumed that this survey reflects the true proportion of radiologists in the European region with practical clinical experience with AI-based tools.

Most of the respondents of this brief survey did not encounter major problems related to the integration of the AI-based software tools into the local IT systems; less than 18% did have such issues. However, it must be taken into consideration that radiologists are not always directly involved in the technical process of software integration; this fact may also explain the relatively high number of respondents who did not reply to this specific question.

Today, AI-based tools for diagnostic purposes may address a large range of use case scenarios. Although this was reflected in the free-text answers of the respondents, the present survey mainly distinguished between algorithms for diagnostic purposes and those for the prioritisation of workflow, whereas a detailed analysis of all the different individual use case scenarios was beyond its scope. Since diagnostic tools are usually quite specific and related to particular organs and pathologies, even radiologists working in the same institution but in different subspecialties may have different experiences with the different algorithms related to their respective fields.

In a recent survey among the members of the American College of Radiology (ACR), the most common applications for AI were intracranial haemorrhage, pulmonary embolism, and mammographic abnormalities, although it was noted that in the case of mammography, AI-based tools must not be confused with the more traditional software for computer-aided diagnosis (CAD) [4]. It was estimated that AI was used by approximately 30% of radiologists, but concerns over inconsistent performance and a potential decrease in productivity were considered barriers limiting the use of such tools. Over 90% of respondents would not trust these tools for autonomous use. It was concluded that, despite initial predictions, the impact of AI on clinical practice was modest [4].

Quality assurance of algorithms that are based on machine learning may be quite time-consuming and requires considerable resources. Effective supervision of the sensitivity and specificity of a device that adapts itself over time may be done by recording differences between the diagnosis of the radiologist and that of the algorithm, but ideally combines this with regular monitoring against a final diagnosis as a gold standard, the so-called "ground truth". Despite the enthusiasm about AI-based tools, there are some barriers to be addressed when implementing this new technology in clinical practice. These include

Fig. 2 Reasons given by 363 of the 690 participants of the survey (regardless of their experience with AI-based algorithms in clinical workflow) for not intending to acquire a certified AI-based algorithm for their clinical practice:

  • No additional value: 161 (44.4%)
  • Does not perform as well as advertised: 96 (26.4%)
  • Adds too much workload: 83 (22.9%)
  • No reason given: 23 (6.3%)


the large amount of annotated image data required for supervised learning, as well as validation and quality assurance for each use case scenario of these algorithms, and, last but not least, regulatory aspects including certification [5, 6]. A recent overview of commercially available CE-marked AI products for radiological use found that scientific evidence of potential efficacy of level 3 or higher was documented in only 18 of 100 products from 54 vendors and that for most of these products evidence of clinical impact was lacking [3].

Nonetheless, as a general impression, most of the respondents of this ESR survey who used AI-based algorithms in their clinical practice considered the diagnostic findings of these algorithms reliable for the spectrum of scenarios for which they were used. It is noteworthy that 44% of the respondents recorded discrepancies occurring between the radiologists' and the algorithms' findings, and that approximately one-third indicated that they generated ROC curves based on the radiological report or the clinical record in order to calculate the performance of the algorithms in clinical practice. Details regarding the methodologies, e.g. the degree of automation used for establishing these data, were neither asked from nor provided by the respondents. However, since over one-half of the respondents worked in academic institutions, it is possible that some of the algorithms were evaluated not only in the context of clinical routine but also in the context of scientific research studies, thus explaining the relatively high level of quality supervision of the algorithms. Only a small minority of the radiologists participating in this survey informed patients about the use of AI for the diagnosis, and about one-third mentioned it in their reports. This may be understandable as long as the radiologist, and not the algorithm, makes the final diagnosis.
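The monitoring of sensitivity and specificity against a ground truth discussed above reduces to simple bookkeeping over a discrepancy log. A minimal sketch, using hypothetical case data (binary "finding present/absent" labels, not the survey's actual records):

```python
def sensitivity_specificity(algo, truth):
    """Compare the algorithm's binary findings against the final (ground-truth)
    diagnosis. Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for a, t in zip(algo, truth) if a == 1 and t == 1)
    tn = sum(1 for a, t in zip(algo, truth) if a == 0 and t == 0)
    fn = sum(1 for a, t in zip(algo, truth) if a == 0 and t == 1)
    fp = sum(1 for a, t in zip(algo, truth) if a == 1 and t == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical log: algorithm output vs. final diagnosis for eight cases
algo  = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 0]
print(sensitivity_specificity(algo, truth))  # → (0.75, 0.75)
```

Recomputing these two numbers on a rolling window of cases is one simple way to detect drift in an algorithm that adapts itself over time.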

However, the important question remains to what extent AI-powered tools can reduce radiologists' workload. In the previous ESR survey conducted in 2018, 51% of respondents expected that the use of AI tools would lead to a reduced reporting workload [1]. The actual contributions of AI to the workload of diagnostic radiologists were assessed in a recent analysis based on a large number of published studies. It concluded that although there was often added value to patient care, workload was decreased in only 4% of institutions, increased in 48%, and remained unchanged in 46% [2]. The results of the present survey are somewhat more optimistic, since almost 23% of respondents experienced a reduction of their workload when using algorithms for diagnostic assistance in clinical practice, whereas almost 70% did not. Observations with algorithms aiming at workflow prioritisation were comparable. In view of the wide range of use case scenarios for which AI-based tools can be applied, additional studies are needed to determine for which specific tasks and questions, and in which subspecialties, AI-based algorithms could help reduce radiologists' workload. Typically, this could be the case in scenarios that involve the detection of relatively simple diagnostic findings in a high volume of cases.

The previous ESR survey from 2018 included 675 participants, of whom 20% were already using AI-powered tools and 30% planned to do so [1]. The present ESR survey included 690 participants, of whom 276 (40%) had experience with such tools in clinical practice. However, when all the participants of the present survey were asked whether they intended to acquire a certified AI-based algorithm, only a minority (13.3%) answered yes, whereas the majority either answered no (52.6%) or did not answer the question (34.1%). Reasons given for the negative answers included doubts about the added value or the advertised performance, or concerns regarding added workload. We must consider, however, that the answers to this particular question included not only the opinions of the respondents who had experience with practical clinical use but also of those who used these algorithms in the context of scientific projects, including non-commercial, home-grown AI-based tools.

The results of the present ESR survey are difficult to compare with the recent ACR survey [4], not only because the questions were not identical, but also because of the existing diversity among European countries. Nonetheless, both surveys conclude that, compared with initial predictions and expectations, the overall impact of AI-based algorithms on current radiological practice is modest.

Several limitations of this brief survey need to be mentioned. Firstly, the survey data cannot reflect the true proportion of European radiologists using AI. Secondly, the answers to several questions can only provide a general overview, although some of the issues addressed by this survey would deserve a more detailed analysis. This is true, for example, of the differentiation of use case scenarios as well as the methodologies used for the verification of their results. Thirdly, the observations are based on the situation in 2022, and results and opinions may change rapidly in this evolving field.

In summary, this survey suggests that, compared with initial expectations, the use of AI-powered algorithms in practical clinical radiology today is limited, most importantly because the impact of these tools on the reduction of radiologists' workload remains unproven. As more experience with AI-powered algorithms for specific scenarios is gained, and some of the barriers to their use may be mitigated, a follow-up to this initial survey could provide further insights into the usefulness of these tools.


Abbreviations
ACR : American College of Radiology; AI: Artificial intelligence; CAD: Computer-
aided diagnosis; CE: Conformité Européenne: a self-declaration mark used
by manufacturers, intended to prove compliance with EU health and safety
regulations; ESR: European Society of Radiology; ROC: Receiver operating
characteristic.

Acknowledgements
The authors would like to thank Megan McFadden, Bettina Leimberger, and
Danijel Lepir of the ESR office for their contributions with the acquisition and
analysis of the data of this survey. This report and survey were prepared by
Christoph D. Becker and Elmar Kotter on behalf of the ESR e-health and infor-
matics Subcommittee with contributions by Laure Fournier and Luis Martí-
Bonmatí. It was approved by the ESR Executive Council in May 2022.
European Society of Radiology (ESR): Christoph D. Becker, Elmar Kotter, Laure
Fournier and Luis Martí-Bonmatí.

Author contributions
All authors have read and approved the final manuscript.

Funding
This work has not received any funding.

Availability of data and materials
The datasets generated during and/or analysed during the current study are
available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate
Not applicable.

Consent for publication
The authors consent to the publication of this work.

Competing interests
Luis Martí-Bonmatí is the Editor in Chief of Insights into Imaging. He has not
taken part in the review or selection process of this article. All other authors
declare no conflict of interest.

Received: 31 May 2022 Accepted: 7 June 2022

References
1. European Society of Radiology (ESR) (2019) Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10:105
2. Kwee TC, Kwee RM (2021) Workload of diagnostic radiologists in the foreseeable future based on recent scientific advances: growth expectations and role of artificial intelligence. Insights Imaging 12:88
3. van Leeuwen KG, Schalekamp S, Rutten MJCM, van Ginneken B, de Rooij M (2021) Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur Radiol 31:3797–3804
4. Allen B, Agarwal S, Coombs L, Wald C, Dreyer K (2021) 2020 ACR Data Science Institute artificial intelligence survey. J Am Coll Radiol 18:1153–1159
5. European Society of Radiology (ESR) (2019) What the radiologist should know about artificial intelligence—an ESR white paper. Insights Imaging 10:44
6. Kotter E, Ranschaert E (2021) Challenges and solutions for introducing artificial intelligence (AI) in clinical workflow. Eur Radiol 31:5–7

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© The Author(s) 2022. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).


Mulryan et al. Insights into Imaging (2022) 13:79
https://doi.org/10.1186/s13244-022-01209-4

O R I G I N A L A R T I C L E

An evaluation of information online
on artificial intelligence in medical imaging
Philip Mulryan1, Naomi Ni Chleirigh2, Alexander T. O’Mahony2* , Claire Crowley1, David Ryan3,
Patrick McLaughlin4, Mark McEntee2, Michael Maher1,2 and Owen J. O’Connor1,2

Abstract
Background: Opinions seem somewhat divided when considering the effect of artificial intelligence (AI) on medi-
cal imaging. The aim of this study was to characterise viewpoints presented online relating to the impact of AI on the
field of radiology and to assess who is engaging in this discourse.

Methods: Two search methods were used to identify online information relating to AI and radiology. Firstly, 34 terms
were searched using Google and the first two pages of results for each term were evaluated. Secondly, a Rich Site
Summary (RSS) feed evaluated incidental information over 3 weeks. Webpages were evaluated and categorized as having a
positive, negative, balanced, or neutral viewpoint based on study criteria.

Results: Of the 680 webpages identified using the Google search engine, 248 were deemed relevant and accessible.
43.2% had a positive viewpoint, 38.3% a balanced viewpoint, 15.3% a neutral viewpoint, and 3.2% a negative view-
point. Peer-reviewed journals represented the most common webpage source (48%), followed by media (29%), com-
mercial sources (12%), and educational sources (8%). Commercial webpages had the highest proportion of positive
viewpoints (66%). Radiologists were identified as the most common author group (38.9%). The RSS feed identified 177
posts that were relevant and accessible. 86% of posts were of media origin, and positive viewpoints predominated (64%).

Conclusion: The overall opinion of the impact of AI on radiology presented online is a positive one. Consistency
across a range of sources and author groups exists. Radiologists were significant contributors to this online discussion
and the results may impact future recruitment.

Keywords: Artificial intelligence in radiology, Perspectives on evolution of radiology, Future impact on the
radiologist, Radiology recruitment, Radiology efficiency

© The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the
original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or
other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this
licence, visit http://creativecommons.org/licenses/by/4.0/.

Key points

• Consensus? An overall positive opinion exists online towards AI on the future of radiology.
• Radiologists? A high proportion of radiologists believe there will be a positive impact.

Background
Artificial intelligence (AI) involves the use of computer
algorithms to perform tasks typically associated with
human intelligence [1]. The role of AI in medical imag-
ing has progressed to various stages of development,
application and refinement over the past 10–15  years.
Consequentially, publications on AI in medical imaging
have exponentially increased from about 100–150 per
year in 2007–2008 to 700–800 per year in 2016–2017
[2]. Several studies pertaining to dermatology, pathol-
ogy, and ophthalmology have shown the potential and
clinical utility of AI algorithms. For example, skin cancer,
the most diagnosed malignancy worldwide, is primarily


*Correspondence: aomahony@ucc.ie
2 University College Cork, Cork, Ireland
Full list of author information is available at the end of the article



diagnosed visually. Deep neural networks (DNN) have
demonstrated equivalence with consultant dermatologist
diagnostic ability [3]. Hence the early evolution of AI has
leaned towards the visual sciences, and its application to
radiology is an extension of this.

Medical imaging interpretation requires accuracy, pre-
cision, and fidelity. At its essence it is a visual science
whereby the interpreter translates either a single or series
of images into a succinct report to answer a clinical ques-
tion and guide evidence-based management. Studies
report that on average a radiologist must interpret one
image every 3–4  s in an 8-h workday to meet workload
demands [4] and with the compound annual growth rate
(CAGR) of diagnostic imaging estimated to be 5.4% until
2027 [5] increasing workloads are expected. Burnout
has been ubiquitously reported among medical special-
ties (~ 25–60%) [6, 7] with limited solutions being pro-
posed and implemented; thus many key advantages may
be conferred by the incorporation of AI into radiologi-
cal practice. The applications of AI in radiology can be
broadly divided into diagnostic and logistic. Computer-
aided diagnostics (CAD) may facilitate earlier detection
of abnormalities, improve patient outcomes, reduce med-
ico-legal exposure, and decrease radiologist workload.
Logistic improvements would include optimization of
workflow, prompt communication of critical findings and
more efficient vetting and triage systems.

Historically, apprehension has existed concerning
recruitment within the medical and radiological commu-
nity as a result of AI. Focused assessment of individual
stakeholder groups in relation to AI in radiology dem-
onstrated a wide spectrum of opinion. Studies of medi-
cal student perspectives in North America, Europe and
the United Kingdom conveyed heterogeneous opinions
on the potential implications of AI on radiology possibly
with geographical variation [8–10]. A recurrent theme
in early studies is the large educational gap in medical
schools regarding the capability, utility, and limitations
of AI. A European multi-centre study of both radiolo-
gists in training and consultants performed in France
[11] demonstrated an overall positive perspective; how-
ever, a majority expressed concerns regarding insufficient
information on AI and its potential implications. Ten
years ago, the end of radiology as a career was being her-
alded. Hence radiology residency applications reduced in
response to concerns about the future of radiology as a
career [12]. Ten years ago, perception probably reflected
local concerns in the absence of experience. It has been
shown that more positive opinions have been expressed
by those medical students with exposure to AI and radi-
ology [9]. The transition from discourse about the poten-
tial of AI to its integration and use should have modified
opinions based on practice and experience.

Therefore, this paper aimed to quantify the proportion
of positive, negative, balanced, and neutral viewpoints
presented on the internet in relation to the impact of AI
on radiology. The purpose of this was to determine the
global and regional perception of AI in radiology, and
thus, conclude as to where the future of radiology may
lie.

Methods
Data collection
Two search methods were used to evaluate information
online relating to artificial intelligence in medical imag-
ing. The first search method screened existing data on
AI in radiology at the time of search. The second method
identified a live stream of articles relating to AI as they
were released on the web. Searches were carried out
independently by two of the investigators.

Thirty-four key search phrases were established (Addi-
tional file  1: Appendix  1). Phrases were generated with
input from a medical student, healthcare professional,
non-consultant hospital doctor and prospective radiol-
ogy trainee. These phrases were then validated by two
consultant radiologists (M.M.M., O.J.O’C). The phrases
were chosen to reflect a broad range of search terms
encompassing a multidisciplinary opinion to the impact
of AI on the radiology service.

Data identification
Existing data
This search was performed on ‘All’ content in the Google
search engine and was conducted over the period 25th
January 2021–7th February 2021. The Google search
engine has over 90% of the market share and thus was
felt to be reasonably representative of the population on
a global scale [13]. The Google search was performed for
the 34 key phrases in an identical manner. Results were
limited to the English language and open access aca-
demia or where no financial stipulation was required
to access the article. We reviewed the first two pages of
Google results for these searches, as numerous studies on
user behaviour have indicated that 95% of users choose
websites listed on the first page of results, leaving only
5% reviewing results on any subsequent page [14–18].
While date of publication was not a selection criterion,
all included articles from the Google search were created
within the past 5 years.
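A minimal sketch of this screening step (de-duplication, then restriction to relevant, accessible pages) is shown below; the record fields and helper names are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Page:
    url: str          # address of the search result
    relevant: bool    # discusses AI in medical imaging
    accessible: bool  # open access / no paywall

def screen(pages):
    """Drop duplicates (by URL), then keep relevant, accessible pages."""
    seen, unique = set(), []
    for p in pages:
        if p.url not in seen:
            seen.add(p.url)
            unique.append(p)
    return [p for p in unique if p.relevant and p.accessible]

# Tiny illustration with made-up records:
sample = [
    Page("a.com", True, True),
    Page("a.com", True, True),   # duplicate, removed
    Page("b.com", False, True),  # non-relevant, removed
    Page("c.com", True, False),  # paywalled, removed
    Page("d.com", True, True),
]
print(len(screen(sample)))  # 2
```

The same filter order as the study's flow diagram (duplicates, relevance, accessibility) is applied, so the counts at each stage can be reported.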

Live stream
A Rich Site Summary (RSS) feed search strategy was
used to evaluate the written incident information over
a 3-week period (07/03/21–28/03/21) as a surrogate for
postings on news media and social media. The same
34 key phrases were entered into Google Alerts. This


provided a continuous search for new relevant online
content appearing subsequently. This content was then
analyzed and organized appropriately.
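As an illustration of consuming such a feed, the stdlib sketch below parses item titles and links from RSS 2.0 XML; the sample feed and function name are fabricated for demonstration (Google Alerts can also deliver Atom-format feeds):

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>AI in radiology alerts</title>
  <item><title>AI tool cleared for chest X-ray triage</title>
        <link>https://example.com/post1</link></item>
  <item><title>Debate: will AI replace radiologists?</title>
        <link>https://example.com/post2</link></item>
</channel></rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in feed_items(SAMPLE_FEED):
    print(title, "->", link)
```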

Data sourcing
The source of each relevant post was identified. The
source website was then assigned a sub-type based on the
‘About Us’ section. The source subtypes were segregated
as either journal, media, commercial, education or other
if outside of these categories. For published academia,
it was noted whether it was from a peer-reviewed and/
or indexed journal (PUBMED). Where an identifiable
author existed, it was subtyped into radiologist, journal-
ist, non-radiologist doctor, radiographer and other. The
geographical origin and date of issue was also noted,
where available.

Data categorization
The web pages identified by the dichotomized search
strategy were analyzed by each investigator in a uniform
manner. Firstly, all Google advertisements were omitted.
Each post was then categorized as either relevant or non-
relevant. Non-relevant posts included those failing to
provide information on AI in medical imaging (such as
a journal calling for abstracts/submissions) or academia
related posts that were not open access, duplicate posts
or posts that were inaccessible.

Relevant posts were divided as either having an over-
all positive, negative, balanced, or neutral viewpoint. The
assessment and categorization of this information was
carried out by two senior authors (M.M.M., O.J.O.C),
both of whom are academic consultant radiologists
working in a large teaching hospital. The assessment was
done in tandem, and the final decision was arrived at by
consensus.

Relevance
Positive
Positive viewpoints were themed as changes brought
about by AI which would result in increased employ-
ment, service expansion, efficiency, fidelity of interpreta-
tion, improved patient care, better quality assurance and
more job satisfaction. Additional file  1: Appendix  2 pro-
vides a sample of positive viewpoints as extracted from
the data of included posts. Webpages that contained
predominantly positive information and concluded with
an overriding positive viewpoint were categorized as
‘Positive’.

Negative
Negative viewpoints were those that displayed a contrary
theme to the positive viewpoint (see Additional file  1:
Appendix 2).

Balanced and neutral
Webpages categorized as ‘Balanced’ listed comparable
amounts of positive and negative points without giv-
ing an overall positive or negative viewpoint. Webpages
categorized as ‘Neutral’ objectively presented informa-
tion relating to artificial intelligence and radiology but
did not discuss how this would impact, be it negatively
or positively, on the field of radiology. The fundamental
difference between the ‘Balanced’ and ‘Neutral’ catego-
ries is that balanced webpages explicitly discussed how
aspects of artificial intelligence would impact the field
of radiology while neutral webpages did not (see Addi-
tional file 1: Appendix 2).
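The dual-reader categorization described above can be sketched as a simple consensus function; the four labels come from the paper, but the tie-handling helper is a hypothetical illustration (in the study, disagreements were resolved by discussion):

```python
VIEWPOINTS = {"positive", "negative", "balanced", "neutral"}

def consensus(label_a, label_b):
    """Return the agreed category, or None to flag the page for discussion."""
    assert label_a in VIEWPOINTS and label_b in VIEWPOINTS
    return label_a if label_a == label_b else None

print(consensus("positive", "positive"))  # positive
print(consensus("positive", "balanced"))  # None -> resolve by discussion
```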

Data analysis
Data compilation and statistical analyses were per-
formed using Microsoft Excel (Microsoft Corporation,
Redmond, Washington, USA) and Google Sheets (1600
Amphitheatre Parkway, Mountain View, California,
United States). Descriptive statistics were used to sum-
marize data. Frequency analyses were performed for
categorical variables.
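The frequency analysis amounts to tallying categorical labels and converting counts to percentages; a small stdlib sketch follows (the sample labels are made up, not the study data):

```python
from collections import Counter

def frequency_table(labels):
    """Counts and percentages for a list of categorical labels."""
    counts = Counter(labels)
    n = len(labels)
    return {cat: (k, round(100 * k / n, 1)) for cat, k in counts.items()}

labels = ["positive"] * 3 + ["balanced"]
print(frequency_table(labels))  # {'positive': (3, 75.0), 'balanced': (1, 25.0)}
```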

Results
A total of 680 Google pages relating to AI in medi-
cal imaging were identified. Of these, 561 pages were
deemed relevant and accessible. Duplicate pages were
removed, leaving 248 pages for evaluation.

Forty-three percent (n = 106) of these pages expressed
the overall view that AI would have a positive impact
on the radiologist and the radiology department; 3.2%
(n = 8) presented an overall negative viewpoint; 38.2%
(n = 95) presented a balanced viewpoint and 15.3%
(n = 38) presented a neutral viewpoint (see Fig. 1).

Forty-eight percent (n = 120) of the relevant pages were
from open-access peer-reviewed journals; 30.2% (n = 75)
were from media sources; 12.9% (n = 32) from commer-
cial websites and 8.5% (n = 21) from educational sources.
Table 1 summarises the allocated categories of origin
and viewpoint conveyed. The type of media source along
with the details of specific commercial company can
be seen in Additional file  1: Appendix  3.1 & 3.2. Com-
mercial web pages had the highest proportion of posi-
tive viewpoints i.e., 66%, followed by media web pages
at 52%, peer-reviewed journals at 37% and educational
web pages at 14%. On the other hand, media web pages
had the greatest proportion of negative viewpoints at
5%, followed by peer-reviewed journals at 3%. Negative
viewpoints were not identified among commercial, edu-
cational, or other sources. Peer-reviewed journals had the
greatest proportion of balanced viewpoints at 48%, while


educational web pages had the greatest proportion of
neutral viewpoints at 43%.

An identifiable named author was displayed on 93%
(n = 230) of web pages, with radiologists responsible for
38.7% (n = 89); journalists represented 20% of authors
(n = 46); doctors working in other specialties repre-
sented 6.9% (n = 16); and radiographers represented 4.8%
(n = 11). Other authors not falling into the aforemen-
tioned categories made up the remaining 29.6% (n = 68).
Researchers, lawyers, and marketing managers were
amongst those in the ‘Other’ category.

Web pages authored by journalists had the highest
percentage of overall positive viewpoints (52%, n = 24).
This was followed by web pages authored by radiolo-
gists (46%, n = 41) and radiographers (45%, n = 5). Web
pages authored by non-radiologist doctors accounted
for the lowest proportion of positive viewpoints (18.8%,
n = 3). Four percent of web pages authored by radi-
ologists (n = 4) or by those falling into the ‘Other’ cat-
egory (n = 3) had negative viewpoints, followed by
web pages authored by journalists at 2% (n = 1). There
were no negative viewpoints identified in web pages
authored by radiographers or non-radiologist doctors.
Those authors falling into the category “Other” had the
highest proportion of balanced viewpoints at 39.7%
(n = 27), while journalists had the greatest proportion

of neutral viewpoints at 34.7% (n = 16). See Additional
file 1: Appendix 4.1 for tabulated summary (Fig. 2).

There were 130 pages authored in North America
expressing 60 positive, 48 balanced, 18 neutral and
4 negative pages of content. In Europe (n = 49), there
were 21 positive, 17 balanced, 9 neutral and 2 negative
pages authored. The United Kingdom had the great-
est number of European authored pages, and these
expressed 9 positive, 10 balanced, 4 neutral and 0 nega-
tive opinions (n = 23). The distribution of the remain-
ing pages from Europe was as follows: Netherlands—6,
Germany—6, Italy—8, Ireland—4, Belgium—5, Nor-
way—1, Denmark—1, Switzerland—5, Austria—1,
Cyprus—3, Europe not specified—9. Finally, a mis-
cellaneous group including: Australia—11; Israel—4;
Asia—12; South America—2, Africa—2; and Not avail-
able—14, expressed 19 positive, 18 balanced, 7 neu-
tral and 2 negative opinions in the pages that were
authored. This frequency data is presented in Table  2
with corresponding percentages in Fig. 3.

Radiologists in North America (n = 42) authored 19
positive, 18 balanced, 3 neutral and 2 negative view-
points. In Europe, radiologists (n = 31) authored 14
positive, 12 balanced, 3 neutral and 2 negative view-
points. UK radiologists authored four pages expressing

Fig. 1 Schematic of Google search and results summary: of 680 total data points, 113 were non-relevant, 6 inaccessible, and 313 duplicates, leaving 248 relevant and accessible pages (positive 106, balanced 95, neutral 38, negative 8).

Table 1 Summary of categorization of posts by origin with percentage (n = 248)

Viewpoint   Journal (n = 120, 48.39%)   Media (n = 75, 30.20%)   Commercial (n = 32, 12.90%)   Education (n = 21, 8.47%)
Positive    44 (36.67%)                 39 (52.00%)              21 (65.63%)                   3 (14.29%)
Negative    4 (3.33%)                   4 (5.33%)                0 (0.00%)                     0 (0.00%)
Balanced    58 (48.33%)                 24 (32.00%)              4 (12.50%)                    9 (42.86%)
Neutral     14 (11.67%)                 8 (10.67%)               7 (21.88%)                    9 (42.86%)


two positive and two balanced perspectives. These data
are presented in Table 3 and Fig. 4.

The Google Alerts RSS feed identified 5504 new posts
over the 3-week period from 34 search terms. Of the
alerts identified, 177 were deemed relevant and acces-
sible. Sixty-five percent (n = 115) of the posts expressed
an overall positive viewpoint; 11% (n = 20) a balanced

viewpoint; 23% (n = 40) a neutral viewpoint; and 1%
(n = 2) an overall negative viewpoint towards the poten-
tial impact of AI on radiology (Fig. 5).

Of the relevant posts, the majority were of media ori-
gin (86%, n = 152); peer-reviewed journals accounted for
8% (n = 14); 4% (n = 7) were from commercial websites;
and 2.3% (n = 4) were from other sources. Commercial

Fig. 2 Number of overall viewpoints presented by each author group (N = 230).


webpages had the highest percentage of overall positive
viewpoints (85.7%, n = 6). This was followed by media
webpages (67%, n = 102), peer-reviewed journals (35.7%,
n = 5), and webpages that fell under the category ‘other’
(25%, n = 1). Forums, educational webpages, and blogs
composed the ‘other’ category. Peer-reviewed jour-
nals had the greatest percentage of balanced viewpoints
(21.4%, n = 3), followed by those that fell under the cat-
egory ‘other’ (25%, n = 1). One (7%) article from a peer-
reviewed journal had an overall negative viewpoint,
as did one (0.66%) of the media webpages. No negative
viewpoints were identified in the commercial category.
See Table 4 for summary.

An identifiable named author was present on 85%
(n = 151) of the relevant webpages identified by the
Google Alerts RSS feed. The majority of listed authors
were journalists (66%, n = 100). This was followed by
commercial authors (12.6%, n = 19), radiologists (4%,
n = 6), researchers 4% (n = 6), and doctors working in
other specialties 3.3% (n = 5). Other authors not falling
into the categories represented 9.9% (n = 15) of the
contributors. This is illustrated in Fig. 6. Marketing
managers, media editors, and students were amongst those
that made up the ‘other’ category. Webpages with a
commercial author had the highest percentage of over-
all positive viewpoints 84% (n = 16). This was followed

Table 2 Geographical origin of viewpoints

Origin          Number   Positive   Negative   Neutral   Balanced
North America   130      60         4          18        48
Europe          49       21         2          9         17
UK              23       9          0          4         10
Other           46       19         2          7         18

Fig. 3 Geographical origin of viewpoint (percentage).

Table 3 Geographical origin of radiologist and viewpoint

Origin          Number   Positive   Negative   Neutral   Balanced
North America   42       19         2          3         18
Europe          31       14         2          3         12
UK              4        2          0          0         2
Other           12       3          1          3         5


by webpages authored by journalists 64% (n = 64); non-
radiologist doctors 60% (n = 3); ‘other’ authors 53%
(n = 8); and radiologists 50% (n = 3). Researchers had
the greatest percentage of balanced viewpoints 67%
(n = 4), while radiologists had the greatest percent-
age of neutral viewpoints 33% (n = 2). One webpage

authored by a journalist (1%) and one authored by an
author in the ‘other’ category (7%) had overall negative
viewpoints. This data summarized and tabulated can be
seen in Additional file 1: Appendix 4.2 (Fig. 6).

Fig. 4 Geographical origin and radiologist viewpoint percentage.

Fig. 5 Schematic of live Google Alert RSS feed and results summary: of 5,504 total data points, 5,069 were non-relevant and 258 were duplicates, leaving 177 relevant and accessible posts (positive 115, balanced 20, neutral 40, negative 2).

Table 4 Summary of categorization of posts by origin with percentage (n = 177)

Viewpoint   Journal (n = 14, 7.91%)   Media (n = 152, 85.88%)   Commercial (n = 7, 3.95%)   Other (n = 4, 2.26%)
Positive    5 (35.71%)                102 (67.11%)              6 (85.71%)                  1 (25.00%)
Negative    1 (7.14%)                 1 (0.66%)                 0 (0.00%)                   1 (25.00%)
Balanced    3 (21.43%)                13 (8.55%)                0 (0.00%)                   1 (25.00%)
Neutral     5 (35.71%)                36 (23.68%)               1 (14.29%)                  1 (25.00%)


Discussion
Opinions and forecasts concerning the role and impact of
AI on medical imaging have exploded in recent
years, primarily due to advancements in AI prod-
ucts for radiology. These viewpoints can be positive, neg-
ative, balanced, or neutral in their content. AI in medical
imaging was first mentioned in the literature in the 1950s
and has evolved substantially since the early 2000s with
the advent of machine learning (ML) and deep learning
(DL) algorithms [19]. The number of AI exhibitors at
the annual meeting of the Radiological Society of North
America (RSNA) and the European Congress of Radiol-
ogy (ECR) has tripled from 2017 to 2019 [20, 21]. Since
2016, the US Food and Drug Administration (FDA) has
approved 64 AI ML-based medical imaging technologies

with 21 of these specializing in the field of Radiology [22].
In Europe, 240 AI/ML devices have been approved over
the 2015–2020 period by the Conformité Européenne (CE)
with 53% for use in radiology [23]. In 2019, The European
Society of Radiology published a white paper to provide
the radiology community with information on AI and
a further study by the ESR demonstrated that there is a
demand amongst the radiological community to inte-
grate AI education into radiology curricula and training
programs including issues related to ethics legislation
and data management [24]. The aim of the present paper
was to use internet activity to determine current opinion on
whether AI is a threat or an opportunity to the field, as this
will have an impact on recruitment and resource allocation
to radiology.

Fig. 6 Number of overall viewpoints presented by each author group (N = 151).


We observed that a wide diversity of commentators
were engaged in dialogue pertaining to AI in radiology, ranging
from those with professional and academic backgrounds
to those with individual and organizational interests.
While these authors predictably included healthcare
professionals, there was also a significant representation
from those with media and commercial backgrounds.
Opinions on AI in radiology were therefore gathered
from authors with a wide variety of occupations and
backgrounds including radiologists, non-radiology physi-
cians, journalists, researchers, radiographers, commercial
managers, physicists, lawyers, computer scientists, data
officers, engineers, students, and pharmacists. There was
a relatively equal division of authorship between North
America and Europe. This distribution was also dem-
onstrated among radiologist authored pages included in
this study. This professional and geographic diversity of
authors provides a more complete and international sam-
ple of opinions on the impact of AI on radiology.

Radiologists repeatedly expressed the opinion that
inclusion of AI algorithms could help with labour-intensive
tasks and improve efficiency and workflow. They also
opined against the potential of AI replacing radiologists.
Numerous studies in the literature also argued against AI
replacing radiologists [25, 26]. An example of two com-
ments made by radiologists included:

The higher efficiency provided by AI will allow radi-
ologists to perform more value-added tasks, becom-
ing more visible to patients and playing a vital role
in multidisciplinary clinical teams

And

Radiologists, the physicians who were on the fore-
front of the digital era in medicine, can now guide
the introduction of AI in healthcare – The time to
work for and with AI in radiology is now

Radiographers expressed the opinion that utilizing AI
algorithms could:

ultimately lead to a reduction in the radiation expo-
sure while maintaining the high quality of medical
images

and that radiographers would be vital in building qual-
ity imaging biobanks for AI data bases. Interestingly,
radiographers also wrote that AI should be integrated
into the medical radiation practice curriculum and there
should be more emphasis on radiomics. Furthermore,
radiographers expressed the belief that emotional intel-
ligence not artificial intelligence is the cornerstone of all
patient care and while the concept of ‘will a robot take my
job’ may be a hot topic, they believe that patient’s will not
accept their radiographs being taken by a robotic device.

This study identified a total of ten negative viewpoints
which included comments from radiologists—5, a law-
yer—1, a journalist—1 and a neuroscience Ph.D. stu-
dent—1. Examples include:

In the long-term future, I think that computers will
take over the work of image interpretation from
humans, just as computers or machines have taken
over so many tasks in our lives. The question is, how
quickly will this happen?

And

Radiologists know that supporting research into AI
and advocating for its adoption in clinical settings
could diminish their employment opportunities and
reduce respect for their profession. This provides an
incentive to oppose AI in various ways

And

An artificially intelligent computer program can
now diagnose skin cancer more accurately than a
board-certified dermatologist and better yet, the
program can do it faster and more efficiently

And

A.I. is replacing doctors in fields such as interpreting
X-rays and scans, performing diagnoses of patients’
symptoms, in what can be described as a ‘consulting
physician’ basis

A recent editorial in the Radiological Society of North
America (RSNA) highlighted a number of high-profile
negative viewpoints made a number of years ago relating
to the impact of AI on radiologists [27]. These included an
AI pioneer, recently awarded the Association for
Computing Machinery Turing Award, who said, “We should stop
training radiologists now” [27]. Secondly, the venture capitalist
Vinod Khosla proclaimed in 2017 that ‘the role of
the radiologist will be obsolete in 5 years’ and replaced
with ‘sophisticated algorithms’ [28]. Furthermore, an
American ‘Affordable Care’ architect remarked at the
2016 American College of Radiology Annual Meeting
that radiologists will be replaced by computer technology
in 4–5 years [29] and that ‘in a few years there may be no
specialty called radiology’ [30].

Interestingly, many of the timeframes within which AI was predicted to replace radiologists have already passed, with only a relatively minor uptake of AI in imaging interpretation and no sign of AI replacing radiologists at present. These controversial viewpoints may grab headlines, but they risk harming the future of radiology, and particularly the recruitment of future radiologists, given that studies have shown that medical students are less likely to consider pursuing a career in radiology because of the apparent threat of AI to the specialty [8–10, 31].

This study found that the overwhelming majority of
web pages assessed had favourable viewpoints with very
few negative viewpoints identified. This finding is con-
sistent with a recent social media-based study showing
that discussions around AI and radiology were astound-
ingly positive, with an increasing frequency of positive
discussions identified over a 1-year period [32]. Taken
together, these findings suggest a shift in opinion from a
once negative view to a more positive one.

Of the webpages identified using the Google search engine, radiologists were found to be the most common author group, making up 38.5% of all identifiable authors.
These webpages were predominantly peer-reviewed jour-
nal papers and media articles. These findings highlight
that radiologists are actively involved in both AI-related
research and online discussions relating to AI and the field
of radiology. Radiologists have been encouraged to play
an active role in the development of applications of AI in
medical imaging to ensure appropriate implementation
and validation of AI in clinical practice [26, 33]. In 2017, the American College of Radiology established the Data Science Institute partly with this purpose in mind [34].

The main limitation of this study was the use of subjective assessment to classify information as positive, negative, neutral, or balanced. This introduces potential for observer bias in determining the overall viewpoint of posts, but we attempted to minimize this by using two senior radiologists as assessors. We did not quantitatively assess the readability of posts. We used only one search engine, Google, and limited the search to the English language and to the first two pages of results for each search term, a strategy that follows previous publications and is backed by behavioral studies indicating that 95% of users choose websites listed on the first page of results, leaving only 5% to review results on any subsequent pages.

We acknowledge that the list of search terms in Additional file 1: Appendix 1 is not exhaustive and is only a representative sample of the terms that may be used when searching for AI in medical imaging; however, by using a broad range of terms and studying the first two pages of findings, we expected these searches to yield the most relevant information. The RSS feed was used as a surrogate for incident information and may not be wholly representative of information found in social media news feeds, Twitter, and other sites. There is also the potential that a single 3-week alert period may be biased by news and media events that occurred during that time.

In conclusion, authors of 43% of all pages evaluated expressed the overall opinion that AI would have a positive impact on the radiologist and the radiology department; 38.3% presented a balanced viewpoint; 15.3% presented a neutral viewpoint; and 3.2% presented a negative viewpoint. We have demonstrated that the overall view presented online is a positive one: that AI will benefit the specialty. We should be excited and look forward to advancements in this technology, which has the potential to improve the accuracy of diagnosis in diagnostic radiology, reduce errors, and improve efficiency in dealing with rapidly increasing workloads.

Abbreviations
AI: Artificial intelligence; CAGR: Compound annual growth rate; CAD: Computer-aided diagnostics; DL: Deep learning; ECR: European Congress of Radiology; FDA: Food and Drug Administration; ML: Machine learning; RSS: Rich Site Summary; RSNA: Radiological Society of North America.

Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s13244-022-01209-4.

Additional file 1: 34 key search phrases used in both static Google search
and Rich Site Summary feed search strategy.

Authors’ contributions
PM and NNC had full access to all of the data in the study and take responsi-
bility for the integrity of the data and accuracy of the data analysis. Concept
and design: PM, NNC, DR, CC, PMcL, MMcE, OJO’C, MM. Acquisition, analysis,
or interpretation of data: PM, NNC, CC, DR, MM, OJO’C, ATO’M. Drafting of the
manuscript: PM, NNC, OJO’C, MM, ATO’M. Critical revision of the manuscript
for important intellectual content: PM, NNC, ATO’M, CC, MMcE, MM, OJO’C.
Administrative, technical, or material support: ATO’M. Supervision: PMcL, MM,
OJO’C. All authors read and approved the final manuscript.

Funding
No sources of funding were sought or required to carry out this study.

Availability of data and materials
The datasets used and/or analyzed during the current study are available from
the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate
Ethical approval was granted by the institutional review board: Clinical Research Ethics Committee of the Cork Teaching Hospitals.

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no competing interests.

Author details
1 Cork University Hospital/Mercy University Hospital, Cork, Ireland. 2 University
College Cork, Cork, Ireland. 3 Cork University Hospital, Cork, Ireland. 4 South
Infirmary Victoria University Hospital, Cork, Ireland.

Received: 12 November 2021 Accepted: 12 March 2022


Page 11 of 11Mulryan et al. Insights into Imaging (2022) 13:79

References
1. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. The MIT Press, Cambridge
2. Pesapane F, Codari M, Sardanelli F (2018) Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2:35. https://doi.org/10.1186/s41747-018-0061-6
3. Esteva A, Kuprel B, Novoa RA et al (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639):115–118. https://doi.org/10.1038/nature21056

4. McDonald RJ, Schwartz KM, Eckel LJ et al (2015) The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload. Acad Radiol 22(9):1191–1198. https://doi.org/10.1016/j.acra.2015.05.007
5. Wood L (2021) The worldwide diagnostic imaging industry is expected to reach $48.5 Billion by 2027. BusinessWire, A Berkshire Hathaway Company. ResearchandMarkets.com. https://www.businesswire.com/news/home/20211209005945/en/The-Worldwide-Diagnostic-Imaging-Industry-is-Expected-to-Reach-48.5-Billion-by-2027—ResearchAndMarkets.com

6. Shanafelt TD, Gradishar WJ, Kosty M et al (2014) Burnout and career satisfaction among US oncologists. J Clin Oncol 32(7):678–686. https://doi.org/10.1200/JCO.2013.51.8480
7. Shanafelt TD, Balch CM, Bechamps GJ et al (2009) Burnout and career satisfaction among American surgeons. Ann Surg 250(3):463–471. https://doi.org/10.1097/SLA.0b013e3181ac4dfd
8. Pinto dos Santos D, Giese D, Brodehl S et al (2019) Medical students' attitude towards artificial intelligence: a multicentre survey. Eur Radiol 29(4):1640–1646. https://doi.org/10.1007/s00330-018-5601-1

9. Gong B, Nugent JP, Guest W et al (2019) Influence of artificial intelligence
on Canadian medical students’ preference for radiology specialty: a
national survey study. Acad Radiol 26(4):566–577

10. Sit C, Srinivasan R, Amlani A et al (2020) Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging 11:14. https://doi.org/10.1186/s13244-019-0830-7
11. Waymel Q, Badr S, Demondion X, Cotten A, Jacques T (2019) Impact of the rise of artificial intelligence in radiology: what do radiologists think? Diagn Interv Imaging 100(6):327–336. https://doi.org/10.1016/j.diii.2019.03.015
12. Chen JY, Heller MT (2014) How competitive is the match for radiology residency? Present view and historical perspective. J Am Coll Radiol 11(5):501–506. https://doi.org/10.1016/j.jacr.2013.11.011

13. GlobalStats (2021) Search engine market share worldwide 2021–2022. https://gs.statcounter.com/search-engine-market-share. Accessed 02 Jan 2021

14. Lorigo L, Pan B, Hembrooke H, Joachims T, Granka L, Gay G (2006) The
influence of task and gender on search and evaluation behavior using
Google. Inf Process Manag 42(4):1123–1131

15. Spink A, Jansen BJ, Blakely C, Koshman S (2006) A study of results overlap
and uniqueness among major web search engines. Inf Process Manag
42(5):1379–1391

16. Enge E, Spencer S, Stricchiola J, Fishkin R (2012) The art of SEO: mastering
search engine optimization, 2nd edn. O’Reilly Media, Sebastopol

17. Hopkins L (2012) Online reputation management: why the first page of Google matters so much. www.leehopkins.net/2012/08/30/online-reputation-management-why-the-first-page-of-google-matters-so-much/. Accessed 06 Feb 2021

18. Chuklin A, Serdyukov P, De Rijke M (2013) Modeling clicks beyond the
first result page. In: Proceedings of international conference on informa-
tion and knowledge management, pp 1217–1220

19. Kaul V, Enslin S, Gross SA (2020) History of artificial intelligence in medi-
cine. Gastrointest Endosc 92(4):807–812

20. Radiological Society of North America (2017) AI exhibitors RSNA 2017. Radiological Society of North America. http://rsna2017.rsna.org/exhibitor/?action=add&filter=Misc&value=Machine-Learning. Accessed

21. Radiological Society of North America (2019) AI exhibitors RSNA 2019.
Radiological society of North America. https://rsna2019.mapyourshow.
com/8_0/explore/pavilions.cfm#/show/cat-pavilion|AI%20Showcase.
Accessed

22. Benjamens S, Dhunnoo P, Meskó B (2020) The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 3:118. https://doi.org/10.1038/s41746-020-00324-0
23. Muehlematter UJ, Daniore P, Vokinger KN (2021) Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. Lancet Digit Health 3(3):e195–e203. https://doi.org/10.1016/S2589-7500(20)30292-2
24. Codari M, Melazzini L, Morozov SP et al (2019) Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10:105. https://doi.org/10.1186/s13244-019-0798-3

25. Recht M, Bryan RN (2017) Artificial intelligence: threat or boon to radiolo-
gists? J Am Coll Radiol 14:1476–1480

26. King BF (2018) Artificial intelligence and radiology: what will the future
hold? J Am Coll Radiol 15(3, Part B):501–503

27. Langlotz CP (2019) Will artificial intelligence replace radiologists? Radiol
Artif Intell. 1(3):e190058

28. Farr C (2020) Here's why one tech investor thinks some doctors will be 'obsolete' in five years. CNBC 2017. https://www.cnbc.com/2017/04/07/vinod-khosla-radiologists-obsolete-five-years.html. Accessed 4 Feb 2020
29. Siegel E (2020) Will radiologists be replaced by computers? Debunking the hype of AI. Carestream 2016. https://www.carestream.com/blog/2016/11/01/debating-radiologists-replaced-by-computers/. Accessed 4 Feb 2020
30. Chockley K, Emanuel E (2016) The end of radiology? Three threats to the future practice of radiology. J Am Coll Radiol 13:1415–1420. https://doi.org/10.1016/j.jacr.2016.07.010

31. Bin Dahmash A, Alabdulkareem M, Alfutais A et al (2020) Artificial intel-
ligence in radiology: does it impact medical students preference for
radiology as their future career? BJR Open 2(1):20200037

32. Goldberg JE, Rosenkrantz AB (2019) Artificial intelligence and radiology: a
social media perspective. Curr Probl Diagn Radiol 48(4):308–311

33. Dreyer K, Allen B (2018) Artificial intelligence in health care: brave new
world or golden opportunity? J Am Coll Radiol 15(4):655–657

34. McGinty GB, Allen B (2018) The ACR data science institute and AI advisory
group: harnessing the power of artificial intelligence to improve patient
care. J Am Coll Radiol 15(3, Part B):577–579

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in pub-
lished maps and institutional affiliations.


© The Author(s) 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.



Deliverable 3 – Evaluate Research and Data

Jamie Raines

Attempt 1

Rasmussen College

HSA5000CBE Section 01CBE Scholarly Research and Writing

Caroline Gulbrandsen

8/26/2022

Running head: RESEARCH QUESTION EVALUATION 2

Research Question Evaluation

The Credibility of the Data

The research question integrated into this study is related to how artificial intelligence (AI) integration in clinical radiology has the potential to disrupt the industry. According to Becker et al. (2022), AI is strongly connected to the operations performed in clinical radiology for better test results. The technology allows machines to achieve human-level performance when detecting tumors during radiology tests. Suitable improvements in the AI industry come from research that structures technical operations by machines to validate the integration of AI algorithms for patient care. The data in the article are credible since the authors researched how healthcare professionals who have used artificial intelligence have promoted better health management. Most research participants agreed that AI integration in radiology information technology (IT) departments has promoted accuracy and reduced excess time for setting up systems.

The next article focused on the use of AI for medical imaging, whereby it is clear that the demand for AI is constantly progressing. According to Mulryan et al. (2022), the advancements in AI have been met with resistance, causing some radiologists to develop a negative attitude towards a technology that could potentially eliminate human jobs. AI has been found to simulate human brain capacity, which is not received well by professionals in healthcare settings. This indicates more operations can be performed to validate AI operations since they are needed for system management. The article's data were collected from journalists, radiologists, commercial representatives, researchers, and non-radiologist doctors, all of whom provided different opinions on the impact of AI in medical imaging.

Documentation of the Data

All the articles integrate a high-quality data documentation process since there are different topics, graphics, and graphs that explain how the research was conducted. It was straightforward to determine that the articles were quantitative since there was a comparison among different variables. In the article by Becker et al.

(2022), the data analyzed how different respondents reacted to the value of AI in clinical radiology. The data then

Robert Neuteboom
was

Robert Neuteboom
Would you say experts in the field constitute another example of credibility?

Robert Neuteboom
Okay, so be sure to address matters of credibility and reliability independently. Your claim about conducting research of existing studies would fall under credibility. You might also talk about the authors’ credentials, employment, and affiliations — anything that demonstrates trustworthiness. Reliability, conversely, has to do with consistency across items, time, and researchers, inter-rater alignment, and replicability. Use these terms accurately and provide specific examples for both.

Robert Neuteboom
to

Robert Neuteboom
Capitalize – Credibility

Robert Neuteboom
Remove your running header. These are not necessary in APA 7th Edition.

Robert Neuteboom
Check your margins on this paper. They should be set at one inch.


were recorded in tables under different questions so that there would be an accurate analysis outcome, found by counting the number of responses that supported or did not support concepts mentioned in the research question. In the same article, an example of accurate documentation is a graph that indicated data from clinical radiologists on why they did not acquire certified AI-algorithm expertise.

In the second article, Mulryan et al. (2022) offered accurate data representations using schematics that integrated the different reactions and the number of participants, applying a logistic regression model to determine the data's validity. Key terms were evident upon opening the document, making it easy to detect that a quantitative research method was applied, as the paper indicates the number of participants used in the research and how their answers were distributed. The study was direct and only required participants to answer questions during survey sessions while providing their data for easy identification. Collecting data from different radiologists promoted the study's validity.

Evaluation of Data Analysis and Interpretation

The data collected from the articles support a hypothesis developed for the study: it is imperative to conform to AI practices in radiology since they inevitably affect healthcare delivery. The future of AI appears to be growing more advanced, especially in the radiology industry, which is structured for change in how system operations are handled. It may become possible to handle superintelligent operations based on AI's ability to mimic the behavior of human clinical radiologists. The authors introduced the level of expertise needed to handle advancements in AI operations, based on the capability to correlate ideas and generate better data analysis. The existence of AI in clinical radiology is connected to changes in different aspects of the healthcare environment, owing to its capability to manage system operations. There is thus a structure for change in terms of better machine operations for improving patient healthcare. Managing human acceptance of AI is required to offer intelligence improvements for a safe healthcare future.

Possible Ethical Issues

Robert Neuteboom
I can’t tell which article you are writing about. Describe how Becker et al. and Mulryan et al. analyze their data. Be clear by differentiating the two studies.

Robert Neuteboom
inevitably affect

Robert Neuteboom
So, does this constitute credibility or reliability? Explain.

Robert Neuteboom
Wordy, convoluted sentence. Rework for clarity.

Robert Neuteboom
data’s – no need to capitalize.

Robert Neuteboom
was


In conclusion, it would be possible to correlate real-life positive healthcare outcomes with the data provided by persons familiar with the area of interest. Conditions required to improve AI operations, including algorithm management and logic handling, are imperative to support healthcare expertise as clinical radiologists learn to perform AI operations. All of these are appropriate techniques for promoting accurate system handling as a structure for AI integration into clinical radiology and medical imaging. The digitization of healthcare imaging affects existing automation operations that seek to engage people who have technical expertise in operations structured to integrate better machine learning processes, as set up using clinical radiology expertise. The authors were thus affected by the influx of technology into healthcare, which may seem to undermine the expertise of healthcare providers. It is therefore critical to operate in the current advanced IT environment by training clinical radiologists instead of solely trusting AI. Applying social constructs and views from clinical radiologists is thus imperative to constantly manage AI operations related to accurate system improvements.

Robert Neuteboom
I think you misread the instructions for this third section. You are supposed to write about the ethical issues you might encounter conducting your own study (the one you are writing about this quarter) and explain how you will address those issues.

Robert Neuteboom
Why is this entire section centered?


References

Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with artificial intelligence in clinical radiology: a survey of the European Society of Radiology. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01247-y

Mulryan, P., Ni Chleirigh, N., O'Mahony, A., Crowley, C., Ryan, D., & McLaughlin, P. et al. (2022). An evaluation of information online on artificial intelligence in medical imaging. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01209-4


Deliverable 3 – Evaluate Research and Data

Attempt 2

Jamie Raines

Rasmussen College

HSA5000CBE Section 01CBE Scholarly Research and Writing

Caroline Gulbrandsen

9/1/2022


Research Question Evaluation

The Credibility of the Data

The research question integrated into this study is related to how artificial intelligence (AI) integration in clinical radiology has the potential to disrupt the industry. According to Becker et al. (2022), AI is strongly connected to the operations performed in clinical radiology for better test results. The technology allows machines to achieve human-level performance while detecting tumors during radiology tests. Suitable improvements in the AI industry come from research that structures technical operations by machines to validate the integration of AI algorithms for patient care. The data in the article are credible since the authors researched how healthcare professionals who have used artificial intelligence have promoted better health management. The European Society of Radiology has accredited all the authors due to their degrees and advanced educational levels (Becker et al., 2022). In the other article, the authors also demonstrate credibility since they have experience in hospitals and hold advanced degrees in a relevant field (Mulryan et al., 2022). Most research participants agreed that AI integration in radiology information technology (IT) departments has promoted accuracy and reduced excess time for setting up systems.

The next article focused on the use of AI for medical imaging, whereby it is clear that the demand for AI is constantly progressing. According to Mulryan et al. (2022), the advancements in AI have been met with resistance, causing some radiologists to develop a negative attitude towards a technology that could potentially eliminate human jobs. AI has been found to simulate human brain capacity, which is not received well by professionals in healthcare settings. This indicates more operations can be performed to validate AI operations since they are needed for system management. The article's data were collected from journalists, radiologists, commercial
Robert Neuteboom
hold advanced degrees in a relevant field – is this what you mean?

Robert Neuteboom
presented as credible

Robert Neuteboom
As I mentioned in my comment on your previous submission, you need to use these terms separately in your evaluation. Saying that credibility is reliable does not describe either of these terms nor does it offer examples. Your claim about conducting research of existing studies would fall under credibility. You might also talk about the authors’ credentials, employment, and affiliations — anything that demonstrates trustworthiness. Reliability, conversely, has to do with consistency across items, time, and researchers, inter-rater alignment, and replicability. Use these terms accurately and provide specific examples for both.


representatives, researchers, and non-radiologist doctors, all of whom provided different opinions on the impact of AI in medical imaging. Other health experts also contributed their insights on the topic, provided they had received advanced education for their professions.

Documentation of the Data

All the articles integrate a high-quality data documentation process since there are different topics, graphics, and graphs that explain how the research was conducted. It was straightforward to determine that the articles were quantitative since there was a comparison among different variables. In the article by Becker et al. (2022), the data analyzed how different respondents reacted to the value of AI in clinical radiology. The data were then recorded in tables under different questions so that there would be an accurate analysis outcome, found by counting the number of responses that supported or did not support concepts mentioned in the research question. In the same article, an example of accurate documentation is a graph that indicated data from clinical radiologists on why they did not acquire certified AI-algorithm expertise.

In the second article, Mulryan et al. (2022) offered accurate data representations using schematics that integrated the different reactions from participants and the number of participants, applying a logistic regression model to determine the data's validity. The quantitative research method was evident from the analysis of how participants were used in the research and how their answers were distributed. The study was direct and only required participants to answer questions during survey sessions while providing their data for easy identification. Collecting data from different radiologists promoted the study's reliability, since the data can be assessed by any professional and produce a consistent health management standard.

Evaluation of Data Analysis and Interpretation

Robert Neuteboom
promotes

Robert Neuteboom
be

Robert Neuteboom
different reactions from participants? Clarify what reactions you are describing here.

Robert Neuteboom
the

Robert Neuteboom
of outcomes

Robert Neuteboom
, ensuring their contributions were credible.

Robert Neuteboom
insights on the topic


The data collected from the articles support a hypothesis developed for the study: conforming to AI practices in radiology is imperative since they inevitably affect healthcare delivery. The future of AI appears to be growing more advanced, especially in the radiology industry, which is structured for change in how system operations are handled. It may become possible to handle superintelligent operations based on AI's ability to mimic the behavior of human clinical radiologists. Becker et al. (2022) introduced the expertise needed to handle advancements in AI operations, based on the capability to correlate ideas and generate better data analysis. The existence of AI in clinical radiology is connected to changes in different aspects of the healthcare environment, owing to its capability to manage system operations. There is thus a structure for change in terms of better machine operations for improving patient healthcare. Human beings must accept AI so that intelligence improvements can be offered for a safe healthcare future (Mulryan et al., 2022). These authors adopted a document analysis process by seeking credible sources on how radiology operations are performed.

Possible Ethical Issues

In conclusion, it would be possible to correlate real-life positive healthcare outcomes and

the data provided by persons familiar with the area of interest. There are conditions required to

improve AI operations, including algorithm management and logic handling, which are imperative

to support healthcare expertise by clinical radiologists as they learn to perform AI operations.

While performing any personal study, there can be a constricted method when attempting to

understand how to seek factual data without the integration of plagiarism of original information.

Another issue can be obtaining informed consent from professionals in the radiology industry. It is

easy to find final work posted online, yet communicating with the developers and ensuring they

allow their work to be used in research can be challenging. Confidentiality is another requirement

Robert Neuteboom
What do you mean here? Clarify.

Robert Neuteboom

Robert Neuteboom

Robert Neuteboom
are

Robert Neuteboom
results of improving

Robert Neuteboom

Robert Neuteboom
technology

Robert Neuteboom

Robert Neuteboom
Human beings must accept

Robert Neuteboom

Robert Neuteboom

Robert Neuteboom
What is “It” in reference to? Be clear.

Robert Neuteboom
those practices

5

that needs to be analyzed to seek information on how to protect the original owners of any piece of

work without appearing to steal data. All these possible issues can be addressed by conducting

thorough research and assessing the required topic.

Robert Neuteboom
Good – yes, generally speaking, these are categories we would label ethical issues in relation to a study. Now, speak specifically about your study. You are welcome to use first-person pronouns here to discuss specific concerns that may arise in the study you wrote about in Deliverable 1 and 2.

6

References

Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with

artificial intelligence in clinical radiology: a survey of the European Society of Radiology.

Insights Into Imaging, 13(1). doi: 10.1186/s13244-022-01247-y.

Mulryan, P., Ni Chleirigh, N., O’Mahony, A., Crowley, C., Ryan, D., & McLaughlin, P. et al.

(2022). An evaluation of information online on artificial intelligence in medical

imaging. Insights Into Imaging, 13(1). doi: 10.1186/s13244-022-01209-4

Deliverable 3 – Evaluate Research and Data

Jamie Raines

Rasmussen College

HSA5000CBE Section 01CBE Scholarly Research and Writing

Caroline Gulbrandsen

9/1/2022

Research Question Evaluation

The Credibility of the Data

The research question guiding this study is: How does the integration of artificial intelligence (AI) into clinical radiology have the potential to disrupt the industry? According to Becker et al. (2022), AI is closely connected to the work performed in clinical radiology and can improve test results. The technology allows machines to approach human-level performance in detecting tumors during radiologic examinations, and ongoing research into how machines perform technical operations helps validate the integration of AI algorithms into patient care. The data in the first article are credible: the authors surveyed healthcare professionals who have direct experience using artificial intelligence to improve health management, and all of the authors hold advanced degrees in relevant fields and are affiliated with the European Society of Radiology (Becker et al., 2022). The data also show signs of reliability, in that most research participants agreed that AI integration in radiology information technology (IT) departments has improved accuracy and reduced the time needed to set up systems. In the second article, the authors likewise establish credibility through their hospital experience and university-level education (Mulryan et al., 2022).

The second article focused on the use of AI for medical imaging, where it is clear that demand for AI is constantly growing. According to Mulryan et al. (2022), some radiologists have developed a negative attitude toward AI because its rapid advancement could potentially eliminate human jobs. AI has been found to simulate human cognitive capacity, which is not always received well by professionals in healthcare settings. This indicates that more work is needed to validate AI operations, since they are required for system management. The article's data were collected from journalists, radiologists, commercial representatives, researchers, and non-radiologist doctors, all of whom provided different opinions on the impact of AI in medical imaging. Other health experts also contributed their knowledge, provided they had advanced education in their professions.

Documentation of the Data

Both articles document their data thoroughly, using clear section topics, graphics, and graphs that explain how the research was conducted. Both studies are plainly quantitative, since each compares different variables directly. In the article by Becker et al. (2022), the data captured how respondents reacted to the value of AI in clinical radiology. Responses were recorded in tables organized by question, allowing an accurate analysis of how many responses supported or did not support the concepts raised by the research question. In the same article, one example of careful documentation is a graph showing why clinical radiologists had not acquired certified AI-algorithm expertise.
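The question-by-question tabulation described above can be illustrated with a short sketch; the question labels and answers below are hypothetical examples, not data from Becker et al. (2022).

```python
from collections import Counter

# Hypothetical survey responses keyed by question; each value is a list
# of "support" / "oppose" answers (illustrative only).
responses = {
    "AI improves diagnostic accuracy": ["support", "support", "oppose", "support"],
    "AI reduces setup time":           ["support", "oppose", "oppose", "support"],
}

# Tally supporting vs. opposing answers per question, mirroring a
# question-by-question results table.
tally = {question: Counter(answers) for question, answers in responses.items()}

for question, counts in tally.items():
    print(f"{question}: {counts['support']} support, {counts['oppose']} oppose")
```

Counting responses per question in this way is what lets a reader check whether the reported totals actually back the claims drawn from each survey item.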

In the second article, Mulryan et al. (2022) offered accurate data representations, using schematics that captured participants' differing reactions and their numbers, and applying a logistic regression model to assess the data's validity. The quantitative method was sound: the authors analyzed how participants were distributed across the study and how their answers were spread. The study itself was straightforward, requiring participants only to answer questions during survey sessions while providing identifying data. Collecting data from a range of radiologists promoted the study's reliability, since the data can be assessed by any professional and reproduce a consistent health-management standard.
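As a rough illustration of the logistic regression approach Mulryan et al. (2022) applied, the sketch below fits a minimal model by gradient descent; the predictor and outcome are hypothetical stand-ins, not the study's actual variables.

```python
import math

# Hypothetical data: x is a predictor (e.g., years of experience), y is a
# binary outcome (e.g., whether a participant rated online AI information
# as reliable). Neither comes from the article.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [0,   0,   0,   1,   1,   1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0           # weight and intercept
lr = 0.5                  # learning rate
for _ in range(5000):     # gradient-descent iterations
    grad_w = grad_b = 0.0
    for xi, yi in zip(x, y):
        p = sigmoid(w * xi + b)   # predicted probability for this point
        grad_w += (p - yi) * xi
        grad_b += (p - yi)
    w -= lr * grad_w / len(x)
    b -= lr * grad_b / len(x)

# Predicted probability that a participant with predictor value 5.0
# falls in the "1" class.
prob = sigmoid(w * 5.0 + b)
```

The fitted curve separates the two response groups, which is how such a model can be used to check whether participants' answers pattern consistently with a predictor.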

Evaluation of Data Analysis and Interpretation

The data collected in both articles support the hypothesis developed for this study: conforming to AI practices in radiology is imperative, since those practices inevitably affect healthcare delivery. AI technology appears to be growing more advanced, especially in radiology, where it provides a structure for changing how system operations are handled. It may even become possible to support superintelligent operations, given AI's ability to mimic the behavior of human clinical radiologists. Becker et al. (2022) documented growing expertise in handling AI advancements, based on the capability to correlate ideas and generate better data analysis. AI in clinical radiology is connected to changes across the healthcare environment because of its capability to manage system operations; there is thus a structure for change in terms of better machine operations that improve patient care. Human beings must accept AI for the technology to deliver intelligence improvements and a safe healthcare future (Mulryan et al., 2022). These authors adopted a document-analysis process, seeking credible sources on how radiology operations are performed.

Possible Ethical Issues

In conclusion, it would be possible to correlate real-life positive healthcare outcomes with the data provided by people familiar with the area of interest. Certain conditions are required to improve AI operations, including algorithm management and logic handling, which are imperative for supporting the expertise of clinical radiologists as they learn to work with AI. In conducting my own study, one ethical issue will be finding factual data without plagiarizing the original information. Another will be obtaining informed consent from professionals in the radiology industry: it is easy to find finished work posted online, yet communicating with its developers and ensuring they allow their work to be used in research can be challenging. Confidentiality is another requirement that needs to be analyzed, so that the original owners of any piece of work are protected and no data appears to be stolen. All of these possible issues can be addressed by conducting thorough research and carefully assessing the topic.

References

Becker, C., Kotter, E., Fournier, L., & Martí-Bonmatí, L. (2022). Current practical experience with artificial intelligence in clinical radiology: A survey of the European Society of Radiology. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01247-y

Mulryan, P., Ni Chleirigh, N., O'Mahony, A., Crowley, C., Ryan, D., McLaughlin, P., et al. (2022). An evaluation of information online on artificial intelligence in medical imaging. Insights Into Imaging, 13(1). https://doi.org/10.1186/s13244-022-01209-4
