Artificial intelligence in veterinary medicine raises ethical challenges

Use of artificial intelligence (AI) is increasing in the field of veterinary medicine, but US veterinary experts caution that the rush to embrace the technology raises some ethical considerations.

“A major difference between veterinary and human medicine is that veterinarians have the ability to euthanise patients—which could be for a variety of medical and financial reasons—so the stakes of diagnoses provided by AI algorithms are very high,” said Eli Cohen, associate clinical professor of radiology at NC State’s College of Veterinary Medicine. 

“Human AI products have to be validated prior to coming to market, but currently there is no regulatory oversight for veterinary AI products.”

In a review for Veterinary Radiology & Ultrasound, Cohen discusses the ethical and legal questions raised by veterinary AI products currently in use. He also highlights key differences between veterinary AI and the AI used by human medical doctors.

AI is currently marketed to veterinarians primarily for radiology and imaging, largely because there aren’t enough veterinary radiologists in practice to meet demand. However, Cohen points out that AI image analysis is not the same as a trained radiologist interpreting images in light of an animal’s medical history and unique situation. 

While AI may accurately identify some conditions on an X-ray, users need to understand potential limitations. For example, the AI may not be able to identify every possible condition, and may not be able to accurately discriminate between conditions that look similar on X-rays but have different treatment courses.

Veterinary AI products can come to market with no oversight beyond that provided by the AI developer and/or the company selling them.

“AI and how it works is often a black box, meaning even the developer doesn’t know how it’s reaching decisions or diagnoses,” Cohen said. 

“Couple that with lack of transparency by companies in AI development, including how the AI was trained and validated, and you’re asking veterinarians to use a diagnostic tool with no way to appraise whether or not it is accurate.

“Since veterinarians often get a single visit to diagnose and treat a patient and don’t always get follow-up, AI could be providing faulty or incomplete diagnoses and a veterinarian would have limited ability to identify that, unless the case is reviewed or a severe outcome occurs.”

Cohen recommends that veterinary experts partner with AI developers to ensure the quality of the datasets used to train the algorithm, and that third-party validation testing be done before AI tools are released to the public.
