
TL;DR

  • AI systems manage hundreds of thousands of animals in NZ (billions globally) under decades-old regulations with zero AI-specific welfare provisions
  • I spent ~3 months part-time[1] and $0 writing a policy brief highlighting this gap for local policymakers
  • The pattern is universal: deploy first, regulate later (maybe)
  • The resulting policy brief proposes practical solutions using existing frameworks
  • Takeaway: this work is doable without massive resources; similar gaps likely exist in your jurisdiction; and the window of opportunity is closing

Introduction

AI systems are already being deployed to manage potentially billions of animals across agriculture and wildlife control, with essentially zero welfare-specific regulation. The regulatory frameworks being established right now, in the next 1-3 years, will likely lock in for decades. And almost no one is working on this.

EA has strong work on AI safety and animal welfare separately, but the intersection is only beginning to be explored. Recent landscape analysis identifies AI×animal welfare as critically neglected, with few policy hooks. Organisations like Sentient Futures, Futurekind and Future Impact Group are helping to bridge this gap. Yet the field remains nascent relative to its urgency and scale.[2]

There have been many useful discussions on AI's impact on animals, for example by Max Taylor, Lizka and Ben_West.

But are these conversations happening outside our bubble? 

I recently completed the European Network for AI Safety (ENAIS) AI Policy 201 Deep Dive course and decided to apply what I learned to this neglected intersection. The result is a policy brief for the New Zealand government on how AI is transforming animal management while regulatory frameworks lag decades behind. 

With this brief I hope to move from the theoretical to the practical: providing a concrete case study to help start conversations with local policymakers and animal advocates about how AI could shape animal welfare in the years ahead.

Aotearoa New Zealand case study

In Aotearoa New Zealand, AI deployment in animal management is in early but accelerating stages, spanning both agriculture and wildlife control.

However, critical data gaps prevent meaningful oversight:

  • Industry-wide adoption rates are unclear and commercial confidentiality limits transparency
  • Unknown extent of AI use in wildlife control programmes that kill millions of animals annually
  • No public data exists on adoption trajectories, industry penetration rates or projected timelines

The potential scale is large. In 2023 New Zealand had 5.9 million dairy cattle ungulates[3], 3.7 million beef cattle ungulates, 24.4 million sheep and 740,000 deer. In the same year, over 124 million chickens, 591,000 pigs, 357,000 ducks and 23,000 turkeys were slaughtered. Numbers for fishes, molluscs and crustaceans, including commercially caught wild fishes, are, as elsewhere, reported only by collective weight, totalling 449,659 tonnes. Population estimates for species designated as “pests” are not publicly available; possums, just one species on the list, are estimated at approximately 30 million nationwide.
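To make that aggregate concrete, here is a minimal back-of-the-envelope tally of the land-animal figures above. This is a sketch only: the grouping into standing populations versus annual slaughter counts is my own simplification, and aquatic animals (reported by tonnage rather than headcount) are excluded.

```python
# Back-of-the-envelope tally of the 2023 figures cited above.
# Land animals only: aquatic animals are reported by weight, not headcount,
# so they are left out of this count.

farmed_populations = {          # animals alive on farms in 2023
    "dairy cattle ungulates": 5_900_000,
    "beef cattle ungulates": 3_700_000,
    "sheep": 24_400_000,
    "deer": 740_000,
}

slaughtered_2023 = {            # animals slaughtered during 2023
    "chickens": 124_000_000,
    "pigs": 591_000,
    "ducks": 357_000,
    "turkeys": 23_000,
}

possums_estimate = 30_000_000   # one "pest" species alone

total = (sum(farmed_populations.values())
         + sum(slaughtered_2023.values())
         + possums_estimate)
print(f"Rough count of land animals potentially within scope: {total:,}")
# -> Rough count of land animals potentially within scope: 189,711,000
```

Note that this tally mixes standing populations with annual slaughter figures, so it is indicative of scale rather than a precise count.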

All of these animals could be subject to AI management as adoption accelerates. We're deploying technology that could affect millions of animals without public data or systematic monitoring of welfare outcomes.

The Regulatory Context and Pattern

These systems, whatever their current scale, operate under the Animal Welfare Act 1999, enacted before modern AI applications existed, with no AI-specific welfare provisions.

As I mapped New Zealand's approach, a concerning pattern emerged that is likely repeated elsewhere: technology deployment is outpacing regulation.

Virtual fencing provides a clear example: these systems are expanding across hundreds of thousands of animals without any specific welfare standards.

This pattern - deploy first, regulate later, maybe - means that by the time frameworks exist, hundreds of thousands of animals will already have been affected, with welfare implications, both positive and negative, that remain largely unexamined. By then, industry incentives may be locked in against reform.

My work here is smaller in scope than Addressing the Nonhuman Gap in Intergovernmental AI Governance Frameworks, but I reached the same conclusion: there is “a critical gap in AI governance frameworks: the systematic exclusion of sentient nonhumans.” Every country developing an AI strategy is setting precedents for how, or whether, animal welfare gets considered.

Global AI governance: the animal welfare gap

New Zealand launched its AI Strategy in July 2025 (the last OECD country to do so). The Strategy positions agriculture as a key adoption sector yet completely omits any animal welfare considerations.

International frameworks show the same gap. Surveying international AI governance revealed that only Serbia's AI framework explicitly mentions animals, stating that AI systems “must be in line with the well-being of humans, animals, and the environment.” The International AI Safety Report 2025 (written by AI experts and representatives from 33 countries), ISO/IEC 42001 (the international AI management standard) and the OECD AI Principles all exclude animal welfare from consideration.

This represents a massive coordination failure. Countries are independently developing AI strategies with no consideration of animal welfare. Despite emerging EA work on this gap, most national policy conversations still exclude animals entirely.

Why This Was Doable 

I took the ENAIS policy course as an individual, not representing any organisation. I have some relevant experience (I'm Policy President of the Animal Justice Party Aotearoa NZ and have written government-facing documents before), but I have no background in AI governance and no budget or organisational resources for this work.

The course's broad curriculum and structured discussion-and-reflection process gave me a fuller picture of the AI governance landscape. For example, a paper on the distinction between “steering policies” (shaping foundational assumptions during design) and “adaptation policies” (responding after institutions solidify) really resonated with me, clarifying the strategic choice about when in a technology's lifecycle to intervene.

Total investment: ~3 months part-time[1], one free policy course, $0.

A recent landscape analysis of AI safety priorities explicitly identifies work on “short-timelines interventions that could lock in animal welfare protections” as a gap. This brief demonstrates that such work is achievable without massive resources or specialised AI expertise.

Four Implementation Insights

1. Data gaps prevent assessment

In New Zealand, introduced-species control programmes report hectares covered, not kill numbers or welfare outcomes. This creates information vacuums where neither traditional nor AI-enabled methods face scrutiny. Without basic data, there's no way to assess whether methods are effective or “humane”.

2. Consultation shapes outcomes

The Ministry for Primary Industries' report Artificial Intelligence: A snapshot of AI in New Zealand and global food systems interviewed only industry representatives. No animal welfare organisations were consulted. The resulting analysis concluded, unsurprisingly: “stakeholder priorities focus on economic outcomes.” Official consultation processes can exclude relevant stakeholders while still being recorded as “consultation occurred”.

3. Regulatory speed is political, not technical

The ability to legislate swiftly isn't the issue; the will to do so is. The government fast-tracked wildlife law amendments under urgency in May 2025 - a direct, rapid response to the development industry. This move confirmed that the legislative capacity exists to handle complex issues quickly. Prioritisation, not procedure, determines when a law gets passed.

4. Economic frameworks create structural barriers

New Zealand's proposed Regulatory Standards Bill requires regulatory benefits to exceed costs in economic terms. This systematically disadvantages animal welfare improvements as many benefits to animals resist economic quantification. Economic metrics inherently limit ethical scope, making moral goods perpetually harder to justify within dominant policy frameworks.

Practical Pathways

The brief proposes four recommendations designed to work within existing frameworks rather than requiring new legislation:

1. Update animal welfare legislation for AI
Integrate animal-centric design principles into the existing Act and Codes of Welfare.

2. Build regulatory capacity
Train inspectors and enforcement officers to assess AI technologies, leveraging existing enforcement structures.

3. Establish inter-agency coordination
Create coordination across agriculture, conservation and environment agencies. AI-animal welfare issues cut across agency mandates; coordination addresses this without creating new bureaucracy.

4. Leverage existing legislation
Apply current frameworks through coordinated interpretation, for example:

  • Privacy acts → extend to animal biometric data
  • Fair trading → require evidence-based AI welfare claims
  • Environmental assessment → include animal welfare
  • Aviation regulation → welfare protocols for drone-based control

So what’s next?

The EA community is analysing how AI affects animals and is exploring how to make future AI good for animals. This case study examines how regulatory systems respond to AI that's being deployed right now. While the community explores post-TAI scenarios and alignment questions, regulatory frameworks for today's AI systems affecting millions of animals are being locked in.

The brief is being circulated to relevant ministries, advisory groups and animal welfare organisations via the Animal Justice Party Aotearoa NZ, where I volunteer. This organisational pathway matters as government officials receiving this brief likely haven't encountered EA frameworks or the intersection of AI and animal welfare concerns. 

I don't claim this will definitely change policy. While concrete changes (updated AI Strategy language, animal welfare voices in consultation processes or AI inclusion in the Codes of Welfare) would be ideal, simply establishing that animal welfare belongs in these conversations would be a major step forward. Currently, these frameworks are being developed without us.

Over the next 6-12 months, I’ll track responses and share what I learn about what helps to move government decision-making. New Zealand's small, accessible policy environment makes it an ideal test case for observing the initial stages of policy conversation and uptake. 

The regulatory gaps I found in New Zealand almost certainly exist in your jurisdiction too. AI strategies are being finalised globally with agriculture listed as a priority sector and animals nowhere in the analysis. 

AI systems affecting animals are still in early deployment. Regulatory frameworks are being developed now. The window for proactive intervention is open, but it won't stay that way.

Read the full policy brief: https://animaljustice.org.nz/story/ai-and-animal-welfare-time-to-close-the-gap/ 

Questions or considering similar work? Feel free to reach out.

[1] The ENAIS course was 12 weeks part-time. The capstone project, for which I wrote this brief, comprised the last 4 weeks.

[2] If I have missed any people or organisations, the error is mine; please accept my apologies.

[3] I follow Richard Twine's approach in The Climate Crisis and Other Animals (2024), which highlights how language around “nonhuman animals” often reflects embedded anthropocentrism and the normalisation of commodification. Following this, I use pluralised and/or qualified terms (for example, “fishes” rather than “fish”, “cattle ungulates” instead of “cattle”) and place certain human-imposed designations such as “pests” in quotation marks to contest their framing.
