8 Jan 2026
The association has published a new policy paper urging professionals to engage with the technology and for practices to develop their own policies on its use.

The BVA has urged clinicians to take a “positive and open-minded approach” to the use of artificial intelligence (AI) in a newly published policy paper on the issue.
The document contains eight main recommendations for action, including a plea for all professionals to “actively engage with understanding” the technology.
But, although the gap is much narrower than for current use, the group’s own data suggests the proportion of clinicians not intending to use it still exceeds the proportion who plan to by more than two to one.
The paper sets out eight general principles for using AI, including the key idea that the technology should be seen as a tool to support and not replace vets.
It also includes a risk pyramid which highlights the kind of functions that would pose either minimal, moderate, high or unacceptable levels of concern.
Association president Rob Williams said: “The AI revolution is here to stay and brings with it both important opportunities as well as challenges for the veterinary profession.
“Having a positive and open-minded approach that views AI as a tool to support vets and the wider vet team is the best way forward to make sure that the profession is confident applying these technologies in their day-to-day work.
“The general principles developed in BVA’s new policy position offer a timely and helpful framework for all veterinary workplaces considering the safe and effective use of AI technologies.”
Figures from the BVA’s Voice of the Veterinary Profession survey, first published last May, showed 73% of participants were not currently using AI in practice, compared to just 21% who were.
But, when asked about future intentions, the proportion not planning to use it dropped to 40%, while 18% said they would do so. The remaining 42% were unsure.
The paper also calls for practices to have their own policies on AI use, for the development of international explainability and governance standards, and for established regulators to take the lead in overseeing UK-based usage.
But Dr Williams argued clinicians should be involved in platform development as early and as often as possible, so the sector can “lead from the front” in applying the technology to clinical work.