
AJ van Hoek

Epidemiologist

Comments

Thanks for the generous response. You write that we "may have been compatible" and I'm "reacting to something you're not saying."

Here's my concern: I've come to recognize that reality operates as a dynamic network—nodes (people, institutions) whose capacity is constituted by the relationships among them. This isn't just a modeling choice; it's how cities function, how pandemics spread, how states maintain capacity. You don't work from this explicit recognition.

This creates an asymmetry. Once you see reality as a network, your Section 5 framework becomes incompatible with mine—not just incomplete, but incoherent. You explicitly frame the state as separate from people, optimizing for longtermist goals while managing preferences as constraints. But from the network perspective, this separation doesn't exist—the state's capacity just IS those relationships. You can't optimize one while managing the other.

Let me try to say this more directly: I've come to understand my own intelligence as existing not AT my neurons, but BETWEEN them—as a pattern of activation across connections. I am the edge, not the node. And I see society the same way: capacity isn't located IN institutions, it emerges FROM relationships. From this perspective, your Section 5 (state separate from people) isn't a simplification—it's treating edges as if they were nodes, which fundamentally misunderstands what state capacity is.
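To make that concrete, here is a minimal sketch in Python (a toy written for this comment; the node names and the capacity measure are invented, not taken from the chapter) of what it means for capacity to live in the edges rather than in the nodes:

```python
# Toy model: a node's "capacity" is not an intrinsic attribute but a
# function of the relationships (edges) it participates in. The
# capacity measure below is invented purely for illustration.

network = {
    "clinic":     {"lab", "suppliers"},
    "lab":        {"clinic", "university"},
    "suppliers":  {"clinic"},
    "university": {"lab"},
}

def capacity(node, edges):
    # In isolation a node can do nothing; each maintained relationship
    # adds capacity, and second-order connections add a little more.
    neighbours = edges.get(node, set())
    second_order = set().union(*(edges.get(n, set()) for n in neighbours)) - {node}
    return len(neighbours) + 0.5 * len(second_order)

print({n: capacity(n, network) for n in network})

# "Optimizing the node" while severing its relationships destroys the
# very thing being optimized:
for other in network["clinic"]:
    network[other].discard("clinic")
network["clinic"] = set()
print(capacity("clinic", network))  # 0.0: no edges, no capacity
```

The toy's point: there is no node-level attribute you can optimize that survives the removal of the relationships; "the node's capacity" is just a summary of its edges.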

That's the asymmetry: your explicit framing (state separate from people) is incompatible with how I now understand reality. But if you haven't recognized the network structure, you'd just see my essay as "adding important considerations" rather than revealing a foundational incompatibility.

Does this help clarify where I'm coming from?

Thank you for engaging, and especially for the intelligence curse point—that's exactly the structural issue I'm trying to get at.

You suggest I'm arguing "we should care about some of those things intrinsically." Let me use AGI as an example to show why I don't think this is about intrinsic value at all:

What would an AGI need to persist for a million years?

Not "what targets should it optimize for" but "what maintains the AGI itself across that timespan?"

I think the answer is: diversity (multiple approaches for unforeseen challenges), error correction (detecting when models fail), adaptive capacity (sensing and learning, not just executing), and substrate maintenance (keeping the infrastructure running).

An AGI optimizing toward distant targets while destroying these properties would be destroying its own substrate for persistence. The daily maintenance—power, sensors, error detection—isn't preparation for the target. It IS what persistence consists of.
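For readers who prefer simulations, here is a deliberately crude Python sketch of just the substrate-maintenance part of that claim (all rates and numbers are invented for illustration): an agent that spends all effort on its distant target makes fast early "progress" and then ceases to exist, while one that reserves effort for sensing and repair persists.

```python
import random

def run(maintenance_share, steps=1000, seed=0):
    """Toy persistence loop; all quantities are invented.
    Each step the substrate decays and random shocks hit; only the
    effort reserved for sensing and repair counters them. Returns
    (steps survived, 'progress' toward a distant target)."""
    rng = random.Random(seed)
    substrate, progress = 1.0, 0.0
    for step in range(steps):
        shock = rng.random() * 0.05           # unforeseen challenge
        substrate -= 0.01 + shock             # baseline decay plus shock
        substrate += maintenance_share * 0.1  # sense/repair effort
        substrate = min(substrate, 1.0)
        if substrate <= 0:
            return step, progress             # the agent no longer exists
        progress += 1 - maintenance_share     # effort spent on the target
    return steps, progress

print(run(maintenance_share=0.0))  # fast early progress, early collapse
print(run(maintenance_share=0.5))  # slower progress, but it persists
```

Nothing hangs on the specific numbers; the structural point is that "progress" is only defined while the substrate exists.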

I think the same logic applies to longtermist societies. The question would shift from "how to allocate resources between present and future" to "are we maintaining or destroying the adaptive loop properties that enable any future to exist?" That changes what institutions would need to do—the essay explores some specific examples of what this might look like.

Does the AGI example help clarify the reframe I'm proposing?

Afterword: A Note of Appreciation and Reflection

I want to begin by thanking Owen Cotton-Barratt and Rose Hadshar for their thoughtful and important chapter. Their willingness to examine what longtermist societies might actually look like—moving beyond marginal analysis to whole-system thinking—opens necessary terrain. This essay is offered in that same spirit of serious engagement, not as refutation but as extension.

My response is constructive, though it may read as fundamental disagreement. I believe we share a deep concern: how do we enable human flourishing to persist? Where we differ, I think, is in our conceptual starting point, and this difference ramifies through everything that follows.

Two frameworks for thinking about persistence:

Cotton-Barratt and Hadshar work from what might be called a projection framework: existence is distributed across time, and the question is how to allocate resources between temporal slices—present people now, future people later. Within this framework, their insight is important: even extreme longtermism requires substantial investment in present welfare for instrumental reasons. People whose basic needs aren't met cannot do complex work.

My essay works from what might be called a process framework: existence is not quantity distributed across time but a continuous adaptive process. There is no "present existence" separate from "future existence"—only ongoing maintenance of adaptive capacity. The question becomes not how to optimize for distant projected outcomes, but whether we're maintaining the structures that enable any outcomes at all.

Why this difference matters:

These aren't just semantic alternatives. They lead to different institutional designs, different understandings of risk, and different responses to uncertainty.

Cotton-Barratt and Hadshar recognize that instrumental reasons require present welfare. I'm suggesting something stronger: that the process of maintaining present adaptive capacity—the sense-learn-adapt-coordinate-repair loop—isn't instrumental to distant goals but constitutive of what persistence means. The Tuesday-morning maintenance network isn't preparation for a future we're aiming toward; it is the future, continuously instantiated.

This leads to seeing different risks. The incentive gradient I describe—the structural drift toward configurations that optimize measurable proxies while degrading adaptive capacity—isn't visible from a projection framework because it looks like progress on longtermist goals right up until the system can no longer adapt to surprises.

What I hope this contributes:

Cotton-Barratt and Hadshar's analysis helps us think carefully about constraints and resource allocation. Their distinction between partial and strict longtermism, their attention to legitimacy concerns, their recognition of instrumental value—all of this is valuable.

My hope is that the process framework adds something complementary: a way to think about systemic resilience, about what makes persistence possible in the face of deep uncertainty, about why maintaining diversity, autonomy, error correction, and genuine interdependence might not be constraints on longtermism but prerequisites for anything to persist at all.

An invitation:

I'd be genuinely curious to hear how Cotton-Barratt and Hadshar see this difference. Is it a meaningful distinction? Are these frameworks reconcilable at different scales of analysis? When would we know which better serves long-term flourishing?

Perhaps the most important test is this: when unforeseen challenges arrive—as they inevitably will—which approach has preserved the adaptive capacity to sense them early, learn from evidence, coordinate responses, and iterate toward solutions?

I suspect we all want the same thing: a future where human flourishing continues. The question is how we think about—and design for—that persistence. I offer this essay as one contribution to that ongoing conversation.

 

A note on method: For transparency, I used Claude Sonnet 4.5 and ChatGPT-5 as thinking partners and writing tools for this essay—for structure, clarity, and articulation. The core framework, however, emerges from my hands-on work with dynamic network modeling of infectious diseases, and my training across biology, economics, and philosophy. The loop-maintenance perspective reflects years of thinking and exploration and was sparked by conceptual reflection on oak trees. The ideas are mine; the AI helped me say them clearly.

I agree. But first we need to break this down further conceptually. AIs and humans (and anything intelligent) will become part of the same "intelligent network", within which there are several inequalities that need to be addressed: differences in how our substrates are maintained, differences in our sensors, and differences in our capacity to monitor and understand ourselves. Doing so will, I believe, reveal the inequalities between humans, as well as between humans and AI, and will also showcase the huge differences between humans in their capacity to wield power. We need to address all of these, which frankly requires a huge step up in our democracies: to become truly democratic and to look after the long-term stability of the full network (meaning overcoming nationalism, class divisions, sexism, racism, etc.).

Hi all,

What I find an interesting perspective is to approach ethics from the point of view of a “network.” In our case, a network in which humans (or, more precisely, our intelligences) are the nodes, and the relationships between these intelligences are the edges.

For this network to exist, the nodes need to establish and maintain relationships. This “edge maintenance” can, in turn, be translated into what we call ethics or ethical behaviour. Whatever creates or restores these edges/relationships—and thereby enables the existence of the network—is just, correct, or virtuous. This is because, to make the intelligent nodes physically exist (to keep their substrate intact), the network itself must exist: the nodes are interdependent. One node grows wheat, another harvests it, another bakes bread, another distributes it, etc. Thus, ethics becomes about existence, which is much easier to comprehend.

Once you embrace this network between intelligent nodes, you can also start thinking about all subsequent dependencies in terms of nodes and edges/relationships. This neatly highlights the interdependencies of our existence and leads me to formulate the meaning of life as "keep alive what keeps us/you alive," which becomes the internal logic of this interdependent network.
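As a toy dynamical version of this (a Python sketch with invented parameters; the four roles are the ones from the bread example above): relationships erode by default, and the network survives only when nodes keep restoring edges, which is exactly the behaviour this framework labels ethical:

```python
import random

def simulate(restore_prob, steps=200, decay=0.02, seed=1):
    """Toy edge-maintenance model; all parameters are invented.
    Edges silently decay; with probability `restore_prob` a frayed
    edge is repaired ('ethical behaviour'). Returns surviving edges."""
    rng = random.Random(seed)
    nodes = ["grower", "harvester", "baker", "distributor"]
    # Start fully connected: everyone depends on everyone.
    edges = {frozenset((a, b)): 1.0 for i, a in enumerate(nodes)
             for b in nodes[i + 1:]}
    for _ in range(steps):
        for e in list(edges):
            edges[e] -= decay                  # relationships erode by default
            if edges[e] < 0.5 and rng.random() < restore_prob:
                edges[e] = 1.0                 # edge maintenance restores it
            if edges[e] <= 0:
                del edges[e]                   # relationship lost
    return len(edges)

print(simulate(restore_prob=0.0))  # 0: no maintenance, network gone
print(simulate(restore_prob=0.9))  # 6: maintained network persists
```

The parameters are arbitrary; the structure (decay plus restoration) is the point.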

I’m curious who else finds this perspective interesting, as I believe that using the language of networks and complex systems in this context opens the door to thinking and talking more clearly about intelligence and AI alignment, (inter)national collaboration, (bio)diversity, evolution, etc.

Thanks. I enjoyed your thinking, and I am asking myself similar questions. But I find it somewhat circular: maths is a language we discover for a reality we discover, keeping whatever fits that reality, so it is no wonder that it describes what we know. Nevertheless, the universality of the language, and the level of abstraction at which you can apply it, make your questions worth asking! I have recently become fascinated by networks and network dynamics, since intelligence lives in (dynamic) networks, their feedback loops, and their emergent behaviour: a mathematical idea whose logic and rules scale across AI, nature, and non-living reality.