
Balancing AI Innovation with Responsible Governance

Published on 07/31/2025

Written by Amy Hammond

Amy has over 25 years of leadership experience in Chief Information Officer (CIO), Chief Technology Officer (CTO), digital transformation and cyber-focused roles. She brings both strategic vision and operational expertise to business-driven IT initiatives. She has in-depth knowledge of end-to-end technology platforms, digital transformation, cyber resilience and enterprise data management. She holds an MBA in Technology and a Master’s degree in Information Security, and has worked with large FTSE 100 (Financial Times Stock Exchange 100 Index) companies, small and medium-sized enterprises (SMEs) and the Third Sector, both internationally and across the United Kingdom. Most recently, she completed an interim assignment at the World Organisation for Animal Health (WOAH), where she served as Interim Head of Digital Transformation and Information Systems from November 2023 to July 2025.


Abstract

Artificial Intelligence (AI) is rapidly reshaping how organisations operate, from automating tasks to predicting outcomes. It is transforming industries as varied as healthcare, finance, surveillance (closed-circuit television [CCTV] and facial recognition) and recruitment practices, and alongside this widespread adoption comes an urgent need for effective AI governance. Without proper oversight, AI systems risk amplifying existing data biases, reinforcing discrimination and spreading misinformation, ultimately distorting reality. Reliable AI depends on high-quality data, which is why AI governance must address three key concerns: (i) protecting user data: AI systems collect vast volumes of data, often without explicit user consent; (ii) minimising bias and discrimination: many AI models are trained on datasets that already reflect societal biases; and (iii) preventing misinformation: AI-generated content based on flawed data can mislead users and distort public understanding. With this in mind, the World Organisation for Animal Health is actively advancing the responsible adoption of AI and data analytics to improve global animal health decision-making. The Data Architecture Project, which will establish a centralised, high-quality data repository, will integrate all the Organisation’s datasets, leading to better outcomes for Members and Delegates and mitigating the risks of AI.

AI Governance: Challenges and Necessity

The World Organisation for Animal Health (WOAH) plays a critical role in collecting, managing and disseminating extensive animal health data. To manage this data, the Organisation is pioneering the Data Architecture Project (DAP), which will establish a central data repository, enabling integration and connectivity across all WOAH datasets. Not only will the platform support both internal and public reporting needs, but as part of WOAH’s broader Data Architecture Initiative, it will inform the Organisation’s new AI policy. DAP will streamline WOAH’s data and AI governance through automation, ensuring that core datasets are well-structured, high-quality and centrally maintained. This foundation will support the generation of actionable intelligence and AI-driven insights during the platform’s development in 2025.

AI dominates global discourse, with strong advocacy for its widespread adoption, yet there is still a lack of clarity on what true integration involves. An AI system is ‘a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ [1]. Such systems are transforming industries – particularly healthcare and finance – by automating processes and predicting trends. However, this rapid advancement underscores the urgent need for robust AI governance to ensure ethical development and responsible deployment.

AI governance refers to the principles, rules and mechanisms that guide the development, implementation and oversight of AI technologies. Its goal is to balance innovation with ethical standards, safety and compliance. Achieving this balance is vital to allowing AI to continue evolving while mitigating its associated risks.

Key concerns that AI governance must address are: (i) data privacy and protection: AI systems collect vast volumes of data, often without explicit user consent, leading to a growing risk of data breaches; (ii) bias and discrimination: AI models are trained on data that may reflect existing societal biases, causing discriminatory outputs [2]; (iii) misinformation: AI-generated content based on flawed or biased datasets can mislead users and distort reality; (iv) transparency and accountability: decisions made by AI systems must be explainable, and developers should be held accountable for their systems’ outcomes; and (v) regulatory compliance: AI must operate within established legal frameworks and industry standards, such as the General Data Protection Regulation (GDPR).
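
Several of these concerns can be examined programmatically rather than left to policy alone. The short Python sketch below is purely illustrative – the data, group labels and the four-fifths threshold are assumptions, not part of any WOAH system – and shows one simple way to flag potentially discriminatory outputs: comparing selection rates across demographic groups.

from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="selected"):
    """Compute the share of positive outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs for a recruitment-style screening task.
outputs = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

rates = selection_rates(outputs)
if disparate_impact_ratio(rates) < 0.8:  # the 'four-fifths' rule, used here only as an example threshold
    print("Possible disparate impact: review the training data and the model before deployment.")

In a governance setting, a flag of this kind would trigger human review of the data and the model, not an automated decision.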

Image: ©ChatGPT

Global Approaches to AI Governance

The rapid evolution of technology and AI presents a significant challenge for regulators, as legal frameworks struggle to keep pace. As a result, countries are adopting varied approaches to AI governance. The European Union has taken a leading role with the AI Act, which came into force on 1 August 2024 and provides a framework for the safe development and deployment of AI systems [3]. Meanwhile, in the United States of America, an initial Executive Order outlined guidelines for AI safety, innovation and security; however, this order was recently rescinded by the new administration [4]. In contrast, the United Kingdom has adopted a more pro-innovation stance, encouraging organisations to adhere to ISO 42001, a standard that addresses key AI challenges such as bias, explainability and ethical considerations [5,6].

On the corporate front, several technology companies, including Microsoft, Anthropic, Google and OpenAI, have voluntarily implemented self-governance frameworks, aiming to embed responsible AI practices within their organisations [7]. At the international level, the United Nations and the Organisation for Economic Co-operation and Development have collaborated to formulate global AI policies, promoting a cohesive and standardised approach to regulation [8]. These diverse efforts highlight the complexity of aligning AI governance across different regions and the importance of international cooperation in building robust, ethical and future-proof AI frameworks.

At WOAH, there is a clear recognition that data governance and AI governance are deeply interconnected. Aligning these two domains ensures responsible and effective AI deployment. A key reason for this interconnectedness is that AI systems depend on high-quality, well-governed data; poor data governance can result in flawed AI outputs, a risk captured by the adage ‘garbage data in, garbage data out’ [9]. In addition, both data and AI must comply with ethical and regulatory standards to uphold privacy, security and accountability.
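
The ‘garbage data in, garbage data out’ risk is one reason automated data quality checks are commonly run before any dataset is allowed to feed an AI model. The following Python sketch is a minimal illustration under assumed field names and rules; it is not a description of WOAH’s actual pipeline.

from datetime import date

# Hypothetical quality rules for an animal health event record.
REQUIRED_FIELDS = {"country", "disease", "report_date", "cases"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data quality issues found in a single record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "cases" in record and (not isinstance(record["cases"], int) or record["cases"] < 0):
        issues.append("cases must be a non-negative integer")
    if "report_date" in record and record["report_date"] > date.today():
        issues.append("report_date is in the future")
    return issues

def validate_dataset(records: list[dict]) -> dict[int, list[str]]:
    """Map each record index to its issues; an empty result means the batch passed."""
    return {i: probs for i, r in enumerate(records) if (probs := validate_record(r))}

sample = [
    {"country": "FR", "disease": "HPAI", "report_date": date(2025, 6, 1), "cases": 12},
    {"country": "BR", "disease": "HPAI", "report_date": date(2025, 6, 3), "cases": -4},
]
print(validate_dataset(sample))  # {1: ['cases must be a non-negative integer']}

Only records that pass such checks would be promoted into the governed repository and, from there, into any AI workload.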

To address these needs, WOAH’s data governance framework provides a standardised and consistent approach to managing the Organisation’s data. It ensures that WOAH’s data assets are accurate, secure, consistent and compliant with current GDPR legislation. Moreover, the framework defines how data is handled across its lifecycle, from creation and storage to processing, sharing and deletion. This framework will be integrated into the DAP and its associated platform.

Complementing this, WOAH’s AI governance sets out policy guidelines for the responsible, efficient, secure and strategic adoption of AI within the Organisation. It includes standards for selecting, testing, evaluating, deploying and monitoring AI models.


Ensuring Responsible AI Adoption

WOAH is committed to the responsible adoption of AI through systematic and rigorous testing and evaluation procedures. These processes are designed to thoroughly assess the technical performance and business impact of AI-generated models. Regular monitoring will detect changes in data patterns, degradation in model accuracy, security vulnerabilities and deviation from compliance standards.
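
In practice, such monitoring is usually implemented as scheduled checks that compare live data and model accuracy against an agreed baseline. The Python sketch below is illustrative only; the population stability index (PSI), the thresholds and the sample feature are assumptions rather than WOAH procedures.

import math
from collections import Counter

def population_stability_index(baseline, live):
    """PSI between two categorical samples; higher values indicate drift."""
    b_counts, l_counts = Counter(baseline), Counter(live)
    psi = 0.0
    for category in set(baseline) | set(live):
        expected = max(b_counts[category] / len(baseline), 1e-6)  # avoid log(0)
        observed = max(l_counts[category] / len(live), 1e-6)
        psi += (observed - expected) * math.log(observed / expected)
    return psi

def check_model_health(baseline, live, accuracy, psi_limit=0.2, accuracy_floor=0.85):
    """Return a list of alerts; an empty list means no action is needed."""
    alerts = []
    if (psi := population_stability_index(baseline, live)) > psi_limit:
        alerts.append(f"data drift detected (PSI = {psi:.2f})")
    if accuracy < accuracy_floor:
        alerts.append(f"accuracy degraded ({accuracy:.0%} is below the {accuracy_floor:.0%} floor)")
    return alerts

# Hypothetical categorical feature (e.g. reporting region) and a recent accuracy reading.
training_regions = ["EU"] * 60 + ["AF"] * 25 + ["AS"] * 15
live_regions = ["EU"] * 30 + ["AF"] * 20 + ["AS"] * 50
for alert in check_model_health(training_regions, live_regions, accuracy=0.81):
    print(alert)

Alerts of this kind would feed the review and escalation routes defined in the governance policy rather than being acted on automatically.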

Clear guidelines have been put in place for the use of AI models within the Organisation. These include strict data privacy measures, with a strong recommendation against entering confidential, personally identifiable or sensitive organisational data into public AI systems. It is also crucial to validate AI-generated content for accuracy, relevance and suitability before it is used in official communications or decision-making. Transparency is another key consideration, requiring disclosure of the use of AI-generated content to stakeholders when appropriate. Ethical use is emphasised, with safeguards in place to ensure that AI tools are used responsibly, preventing bias, discrimination or inappropriate outputs. Additionally, compliance with organisational policies and applicable laws (e.g. GDPR) is vital when using public AI models. Finally, security guidelines stress the need for vigilance against potential data breaches or leaks, with all incidents to be reported immediately.
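
The recommendation against entering confidential or personally identifiable data into public AI systems can be reinforced with lightweight tooling. The hypothetical Python sketch below screens a prompt for a few obvious identifiers before it is sent to an external service; the patterns are examples only and are far from exhaustive.

import re

# Illustrative patterns only; a real screen would cover many more identifier types.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "passport-like number": re.compile(r"\b[A-Z]{1,2}\d{6,9}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the kinds of potential PII found; an empty list means nothing was flagged."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Please summarise the complaint from jane.doe@example.org, tel. +44 20 7946 0123."
findings = screen_prompt(prompt)
if findings:
    print("Do not send to a public AI system; possible PII:", ", ".join(findings))
else:
    print("No obvious identifiers found; apply judgement before sending.")

A screen of this kind supports, but does not replace, the validation and human judgement steps described above.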

The AI governance policy also defines the roles and responsibilities of key stakeholders involved in the adoption and implementation of AI models at WOAH. The Architectural Review Board (ARB) provides oversight, strategic direction and final approval for all new technical architecture. The Project Technical Teams, comprising WOAH’s internal technical staff as well as technical partners and suppliers, are responsible for the technical implementation, testing, deployment and monitoring of AI models. The Compliance and Security Teams ensure that all models and deployments adhere to regulatory, compliance and security standards, thereby reinforcing the integrity of the AI systems. Lastly, the Change Advisory Board governs any modifications to the Organisation’s live technology platforms, with administrative support from WOAH’s technical team.

Future Directions in AI and Data Analytics at the World Organisation for Animal Health

WOAH is continually innovating its approach and is currently exploring how AI and advanced data analytics can improve the collection and interpretation of data to deliver better information and insights to its Members and Delegates. As part of this effort, WOAH’s DAP will support the use of AI across the Organisation’s datasets during the platform’s development. As AI capabilities evolve within the Organisation, additional business applications such as the Performance of Veterinary Services Information System and the World Animal Health Information System will also benefit from these advancements. This will ensure that WOAH remains at the forefront of effective data-driven decision-making in global animal health and welfare.

Main image: ©ChatGPT

 

References

[1] Organisation for Economic Co-operation and Development (OECD). Explanatory memorandum on the updated OECD definition of an AI system. OECD Artificial Intelligence Papers, No. 8. Paris (France): OECD Publishing; 2024. https://doi.org/10.1787/623da898-en

[2] Information Commissioner’s Office (ICO). AI tools in recruitment: Audit outcomes report. Cheshire (United Kingdom): ICO; 2024. Available at: https://ico.org.uk/media2/migrated/4031620/ai-in-recruitment-outcomes-report.pdf (accessed on 9 July 2025).

[3] AI Act enters into force on 1 August 2024. Brussels (Belgium): European Commission; 2024. Available at: https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en (accessed on 9 July 2025).

[4] Removing barriers to American leadership in Artificial Intelligence. Washington, D.C. (United States of America): The White House; 2025. Available at: https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/ (accessed on 9 July 2025).

[5] Department for Science, Innovation & Technology. A pro-innovation approach to AI regulation. London (United Kingdom): HM Government; 2023. Available at: https://assets.publishing.service.gov.uk/media/64cb71a547915a00142a91c4/a-pro-innovation-approach-to-ai-regulation-amended-web-ready.pdf (accessed on 9 July 2025).

[6] International Organization for Standardization (ISO). ISO/IEC 42001:2023: Information technology – Artificial intelligence – Management system. Geneva (Switzerland): ISO; 2023. Available at: https://www.iso.org/standard/42001 (accessed on 9 July 2025).

[7] Anthropic, Google, Microsoft, OpenAI launch Frontier Model Forum. Redmond (United States of America): Microsoft; 2023. Available at: https://blogs.microsoft.com/on-the-issues/2023/07/26/anthropic-google-microsoft-openai-launch-frontier-model-forum/ (accessed on 9 July 2025).

[8] UN and OECD announce next steps in collaboration on artificial intelligence. New York (United States of America): United Nations; 2024. Available at: https://www.un.org/digital-emerging-technologies/content/press-release-un-and-oecd-announce-next-steps-collaboration-artificial-intelligence#:~:text=22%20September%202024%20%E2%80%93%20Meeting%20on%20the%20margins,UN%20and%20the%20OECD%20on%20global%20AI%20governance (accessed on 9 July 2025).

[9] Data quality and its impact on AI. Tallinn (Estonia): AIMultiple; 2025. Available at: https://research.aimultiple.com/data-quality-ai/ (accessed on 9 July 2025).
