Almost three in ten doctors have used artificial intelligence (AI) in their work in the past year and see its benefits for efficiency and patient care, new research commissioned by the GMC has found.
Researchers commissioned by the regulator looked into the types of AI doctors are using, how well they understand its risks and what they do if they disagree with the output of an AI system.
A survey of 1,000 doctors, including 173 GPs, found that AI is "embedded" in the working life of a sizeable portion of doctors, with 29% reporting that they had made use of at least one AI system in their work in the previous year.
Of those who reported any AI use, more than half (56%) said they used it at least once a week, and 73% at least once a month. However, 15% felt the technology was making them "worried about their job security".
The Alan Turing Institute, which analysed the results of the survey, pointed out that the majority of doctors are not making any use of AI systems in their work, meaning that there are "significant areas" where the "potential" of the technology "is not being explored".
A total of 17 doctors, including seven GPs, who said they had used AI in the past 12 months discussed the benefits and risks of using the technology with researchers from the agency Community Research.
They found that there was "significant interest" in developing AI systems to help manage back-office functions and free up resources for patient care, but that doctors were also aware of the risks of over-reliance on the tools.
One GP said: "There's an element of risk, so you're still in a risky profession, you still know that you could make a decision that alters the life or death situation for a patient.
"You might look at the advice and think: 'No, I don't agree with that,' but that's fine; at least it's asked the question and it's then part of the checklist in your own thought processes."
GPs told the researchers that AI tools and algorithms are already used in general practice, integrated into triage and patient management systems to:
- Help prioritise which patients to see
- Suggest diagnostic tests that should be organised
- Optimise care pathways (e.g. reduce the number of appointments needed)
- Flag risks and potential diagnoses
- Highlight possible prescribing issues or conflicts
Those using these systems in primary care said that, while the system may be available to all doctors within a GP practice, it remains "up to the individual practitioner" how far they pay attention to AI outputs.
One GP told the researchers: "It's really dependent on the clinician if they choose to use it or not and I guess that's the bit where every individual has their own variances or thresholds, as to how willing they are to use some of the information that is there."
The doctors interviewed also said that the technologies "presented risks", the researchers added, as they saw potential for AI-generated answers to be based on data that could itself "be false or biased". They also acknowledged "possible confidentiality risks" in sharing patient data.
One GP trainee said: "I did wonder what the data I was feeding it was going to be used for, long-term, because these LLMs [large language models] build themselves on data that they've been given in the past.
"So before I started using it on a regular basis, I had a chat with my supervisors and trainers in the surgery, to have a discussion about privacy risks, which is why I essentially do not feed in patient information, as in name, date of birth, address, details like that, because I believe that's where the danger comes from."
The researchers said the study showed that doctors see AI as an "assistive tool" in their practice, and that they recognise they have responsibility for all decisions informed by AI.
The study said: "They appear confident in overriding decisions made by AI and some doctors are overriding AI recommendations frequently.
"This very much supports findings from the Turing survey where only 1% of doctors said they would follow the AI judgment when asked what they would do if they disagreed with the recommendation of an AI system."
GMC director of strategy and policy Shaun Gallagher said: "It's clear that AI's use in healthcare will continue to grow and projects like these give valuable insights into how doctors are using these systems day-to-day.
"These views are helpful for us as a regulator, but also for wider healthcare organisations, in anticipating how we can best support the safe and efficient adoption of these technologies now, and into the future."
Last year, a survey found that a fifth of GPs were already using artificial intelligence in clinical practice, with ChatGPT being the most popular tool.
However, GP leaders have voiced their concerns regarding the developing use of AI in general practice, with LMCs voting in favour of a motion which said that "only a doctor with full training and appropriate levels of experience will be effective to challenge an AI".
And GP practices in one area have been warned against using AI without first seeking approval from their ICB (integrated care board).
Last week, the Medical Schools Council argued that it is "essential" that medical students are taught how to use AI as part of their studies.