Veterinary medicine is using more AI, but experts warn that the rush to embrace the technology raises some ethical concerns.
“The biggest difference between veterinary medicine and human medicine is that veterinarians have the ability to euthanize patients – this can be for a variety of medical and financial reasons – so the stakes of diagnoses provided by AI algorithms are very high,” says researcher Eli Cohen, professor of clinical radiology at the North Carolina State University College of Veterinary Medicine.
“Human AI products need to be validated before they hit the market, but there is currently no regulatory oversight for veterinary AI products.”
In a review for Veterinary Radiology and Ultrasound, Cohen discusses the ethical and legal questions posed by veterinary AI products currently in use. He also highlights the key differences between veterinary AI and the AI used by human medical practitioners.
AI is currently marketed to veterinarians for radiology and imaging, largely because there are not enough veterinary radiologists to meet demand in practice. But Cohen cautions that AI image analysis is not the same as a trained radiologist interpreting images in light of an animal’s medical history and unique condition. While AI can accurately identify some conditions on an X-ray, users need to understand its potential limitations. For example, AI may not be able to identify every possible condition, or to accurately distinguish between conditions that look similar on X-rays but require different treatments.
Currently, the FDA does not regulate AI in veterinary products the same way it does in human medicine. Veterinary products may be released without any oversight beyond that provided by the AI developer and/or company.
“Artificial intelligence and how it works is often a black box, meaning even the developer doesn’t know how it arrives at decisions or diagnoses,” Cohen says. “Given the lack of transparency from companies in AI development, including how the AI is trained and validated, you’re asking veterinarians to use a diagnostic tool they have no way of assessing for accuracy.
“Because veterinarians often see a patient for a single visit to diagnose and treat it, and don’t always have follow-up, AI could be providing inaccurate or incomplete diagnoses, and a veterinarian would have limited ability to identify that unless the case is re-examined or severe outcomes result,” says Cohen.
“AI is marketed as having similar value to a radiologist’s interpretation, or as a substitute for one, because there is a market gap. The best use of AI going forward, and certainly at this early stage of deployment, is with what is called a radiologist in the loop, where AI is used with a radiologist, not instead of one,” says Cohen.
“This is the most ethical and defensible way to use this emerging technology: using it to make radiologist consultations accessible to more veterinarians and pets, but most importantly having field experts who can troubleshoot the AI and prevent negative consequences and patient harm.”
Cohen recommends that veterinarians work with AI developers to ensure the quality of the datasets used to train the algorithm, and that third-party validation tests be performed before AI tools are made public.
“Almost anything a veterinarian can diagnose on radiographs has the potential to be moderate to high risk, meaning changes in medical treatment, surgery, or euthanasia resulting from the clinical diagnosis or the client’s financial constraints,” says Cohen. “This level of risk is the threshold the FDA uses in human medicine to determine whether a radiologist in the loop is required. It would be wise for us as a profession to adopt a similar model.
“Artificial intelligence is a powerful tool and will change the way medicine is practiced, but the best practice going forward would be to use it in concert with radiologists to improve access to and the quality of patient care, rather than as a replacement for these consultations.”
Source: NC State