Note (added after original post): This essay has a slightly updated final version which can be found here.
Note: After a temporary version was posted for the "Essays on Longtermism" competition deadline, this series of posts was temporarily removed from the front page; it will be reposted in the following days, after updates have been made.
Epistemic status: In the spirit of draft amnesty, I am posting this series slightly before it is fully ready or in ideal form.
This series represents many years of my thinking, and I believe the core material is quite important, but I wanted to submit it for the "Essays on Longtermism" competition, due today. That deadline ended up competing with several important fellowship and other applications, so this series received far less priority and attention than it deserved.
Nonetheless, I believe these ideas are fundamentally important; the execution may simply be closer to a middle draft than a final draft.
That said, I will likely be updating this series significantly in the following days, especially the last post, "Shortlist of Longtermist Interventions," and the two pieces that remain unpublished. I will keep a running list of updates, so that readers can easily see when each essay has been updated and finalized, available on the series explainer page here:
(Brief and Comprehensive Series Explainers Here)
This essay, easily readable as a stand-alone piece, is the first in the series and part of my submission to the "Essays on Longtermism" competition, based on my unpublished work on Deep Reflection (a comprehensive examination of crucial considerations to determine the best achievable future). (Deep Reflection Summary Available Here)
TL;DR
This essay is excerpted from my unpublished Deep Reflection work and performs concrete stakeholder mapping to identify practical pathways toward achieving viatopia (an intermediate societal state helping humanity converge toward optimal futures).
While "Why Viatopia is Important" (link) establishes the theoretical case, this essay addresses the challenge of actually creating viatopia by analyzing what three key groups can do: AI labs, governments, and the general public.
Each stakeholder group has different incentives, capabilities, and constraints. AI labs wield significant influence over AI development and may already share ideological alignment with viatopia's core principles. Governments control policy and regulation but face short-term political pressures. The general public ultimately determines what becomes socially acceptable and politically viable.
The essay examines how viatopia can be framed to appeal to each group's interests, demonstrating that viatopia is not just theoretically desirable but practically achievable through existential compromise (positive-sum arrangements that satisfy diverse stakeholders while making progress toward better futures).
This stakeholder analysis bridges from theoretical arguments about why comprehensive reflection matters to practical questions about how we might actually achieve the buy-in necessary for viatopia. It sets up the concrete institutional mechanisms explored in subsequent essays.
Buy-in
Perhaps the primary challenge of Deep Reflection is that it seems difficult to persuade all stakeholders that Deep Reflection is the best way of deciding what to do with the deep future. Developing and using superintelligence (whether human or AI) to make decisions about the deep future is far outside the Overton window for most people.
This challenge of buy-in can be divided into two parts:
- Convincing people that it is a good idea to implement Deep Reflection prior to making any large-scale, hard-to-reverse changes (feasibility)
- Ensuring that whatever strategy Deep Reflection outputs is actually acted on by humanity (actionability)
The challenge arises because, in their everyday lives, most people are not particularly concerned with universal moral, epistemic, or empirical optimization, nor with the outcome of the deep future. Fortunately, although few individuals explicitly hold these forms of optimization as their primary goals, the everyday choices of humanity collectively tend to produce a broad, positive trajectory of scientific insight, ethical concern, and material well-being, as noted by writers such as Steven Pinker, and when they pause to reflect, people generally endorse this progress. There are some notable exceptions, such as farmed animal welfare, environmental degradation, increasing inequality, and increased destructive capacity, yet it seems quite possible that each of these could be solved with further advances in technology and coordination.
Furthermore, far from being universal deep-future maximizers, most people can be quite content without fully maximizing even their own lives. Most people are "satisficers" rather than relentless maximizers; once their core needs and desires are met, they are satisfied. With the abundant resources and intelligence provided by advanced AI, it will likely not be difficult to keep people far beyond satisfied while simultaneously preventing catastrophes via guardrails, continuing or accelerating various forms of progress, and allowing humanity to naturally converge on Deep Reflection.
This is where seed reflection – specifically, viatopia – comes in. Will MacAskill defines viatopia as "a society that is on-track to achieve some near-best outcome, whatever that may be." Viatopia can ensure progress continues, and can incorporate additional robustly good processes which gently encourage society to converge on a world that is more happy, free, epistemically attuned, and morally wise. At the same time, security processes can block off the paths that are most dangerous or likely to cause harm.
This may mean slowing down many applications of advanced AI (unpredictable, harmful, and dangerous technologies) while speeding up others (health, education, wise governance, material abundance, and public-benefit technologies). While there will certainly be some pushback on any restrictions, it seems that most people don't actually want terrifyingly rapid AI progress across all areas, and that upon reflection people will support prioritizing wisdom and robustly good applications while slowing down potentially catastrophic ones, so that we don't use our exceptional new intelligence to do irreversible bad or dangerous things.
Thus, viatopia could potentially solve both buy-in challenges:
- Viatopia can be constructed in such a way that is not just palatable but highly appealing to most people (feasibility)
- At the same time, viatopia can continue and differentially accelerate humanity's natural moral and epistemic advancement toward a state likely to endorse the strategy output by Deep Reflection (actionability)
While viatopia is not the only way to achieve Deep Reflection, it seems especially tractable because of its broad appeal.
It seems likely the above considerations apply to all three primary stakeholders – AI labs, government, and the general public – yet there are some special considerations for each group.
AI labs – Leading AI labs – OpenAI, xAI, Google DeepMind, and Anthropic – currently wield significant influence over AI development. These labs may already share ideological alignment with the core principles of Deep Reflection (deep understanding, maximizing positive impact, preventing existential risk); if so, this synergy presents a significant opportunity for collaborative efforts aimed at advancing Deep Reflection.
The leaders of these labs have all endorsed the idea that AI is likely the most significant technology ever developed and will have transformative impacts on society. Most of them have repeatedly laid out utopian visions, citing the possibilities of using advanced AI to dramatically advance science, accelerate medicine, improve education, achieve economic abundance, solve climate change, and help humans settle space.
Most of these leaders have signed the 2017 Future of Life Institute Asilomar AI Principles (link). These principles are aimed at goals such as ensuring that AI and superintelligence are robustly beneficial, that their benefits are widely distributed, that they are used ethically and aligned with human values, and that existential and catastrophic risks are avoided, affirming that "advanced AI could represent a profound change in the history of life on earth, and should be planned for and managed with commensurate care and resources." (Emphasis mine.) Signatories include:
- Sam Altman (OpenAI)
- Elon Musk (xAI)
- Demis Hassabis, Shane Legg (Google DeepMind)
- Ilya Sutskever (Safe Superintelligence)
- Mustafa Suleyman (Microsoft AI)
- Yann LeCun (Meta)
In addition, the leading AI labs and their leadership have taken specific actions worth considering.
- OpenAI – Sam Altman originally proposed a nonprofit structure so the technology would "belong to the world" and could focus on safety and broad benefit to humanity, rather than maximizing profits. The "windfall clause" was an example of this same intention. Sam Altman's Worldcoin (link), launched in 2023, seems to be founded on similar principles, attempting to establish universal unique identity in order to enable global participation in UBI and democratic governance. His 2024 blog post "The Intelligence Age" (link) lays out his utopian vision.
While OpenAI's safety reputation has lagged and it recently attempted to transition from its nonprofit structure to a benefit-corporation structure, it is not clear whether this indicates that OpenAI is unconcerned with public benefit, as opposed to prioritizing its competitive edge in order to secure benefits in the long term.
- xAI – Elon Musk has famously built several companies in order to solve global problems, especially existential risk, and has been consistent about this mission in his communication for decades. Tesla's and SolarCity's goals are to push forward electric vehicles, enable clean energy, and help solve climate change. SpaceX aims to enable life on Mars so that humanity is resilient against planetary existential threats. He has built a few companies to ensure safe and beneficial AI:
- Neuralink was founded to enable human-AI symbiosis through brain-computer interfaces (BCIs), enhancing human cognitive abilities to keep up with AI and allowing humans to merge with AI rather than letting it take over.
- OpenAI received support from Musk because he wanted to decrease concentration-of-power risk (specifically, a Google/DeepMind AGI monopoly).
- xAI’s stated mission is to “understand the true nature of the universe.”
Notably, Musk has explicitly stated that his philosophy is a close match to that found in the book "What We Owe the Future" (link), which outlines the moral philosophy of longtermism.
While Musk has become extremely controversial in recent times, it is not clear this makes him any less likely to be sympathetic to Deep Reflection.
- Google DeepMind – DeepMind’s founding vision was to “solve intelligence, then use it to solve everything else.” Demis Hassabis has consistently reiterated that the purpose of DeepMind is to further scientific understanding and solve societal problems. Concretely, AlphaFold 3 seems poised to have significant drug discovery benefits.
DeepMind's acquisition by Google and subsequent competitive pressures have pushed the company significantly toward productization, yet sympathy for advancing science and public benefit may still remain.
- Anthropic – Anthropic's leadership on AI safety is evident from its founding story of leaving OpenAI to start an AI company squarely focused on AI safety. Its "race to the top" strategy, i.e. challenging other AI labs to compete on safety, has helped inspire imitations of its responsible scaling policy and interpretability research at multiple labs. Apart from these, it has produced an unusually large quantity of leading AI safety research.
Anthropic’s strong focus on AI safety signals a significant concern for public benefit and the deep future. This commitment is further underscored by several other factors: its long-term public benefit trust, featuring prominent effective altruists among its selected members; a large constituency of effective altruists in its leadership and employee base; its research into model welfare; and CEO Dario Amodei’s essay on AI's near-term utopian possibilities, “Machines of Loving Grace.”
While Anthropic has recently tried to downplay its EA associations (link), it seems likely this has more to do with public-image management than a sudden change of heart.
As mentioned, each of these leading AI labs has its downsides, and there is certainly room for concern that they may be "safety washing" or "impact washing" their work to make it more acceptable to funders, government, and the public. Commercial competitiveness may be pushing them significantly away from public-benefit concerns, although it is unclear to what degree this is a systemic problem: each organization may genuinely want to achieve public benefit, yet be pushed by race dynamics into believing that the only way to ensure public benefit is to "win the race."
Nonetheless, it seems quite possible that at least some of these labs remain highly concerned with public benefit, and that when aligned advanced AI is achieved, they may naturally revert to this as their primary focus.
It is highly encouraging that AI labs may be some of the most likely parties to be sympathetic to Deep Reflection, as they may also be some of the most useful allies. For example, they could:
- Advocate to policymakers that Deep Reflection ought to be a primary use case for advanced AI. This advocacy could include “waking up” policymakers to the threats and opportunities of AI by giving them early previews of both scary and highly positive advances in the technology
- Help communicate to the public why viatopia and Deep Reflection are important and in everyone’s interest
- Help perform meta-reflection work on how to create seed reflection that is likely to be effective
- Differentially accelerate applications that increase the likelihood of viatopia by creating highly effective and desirable versions of those technologies early
- Make partnerships across AI labs to encourage applications for viatopia and Deep Reflection
Government – Influencing the US government seems particularly important, since the US is where most leading AI development is happening, and the US (plus allies) seems likely to be the first decisively advantaged party. Fortunately, as a democratic government it is relatively responsive to input from constituents.
Furthermore, to achieve national security in the wake of explosive AI growth, the US government will likely need to partner extensively with AI labs, who, as already discussed, may be much easier to nudge in the direction of Deep Reflection due to ideological sympathy. While government may not share as much ideological alignment as the AI labs do, its core values likely do not differ so much that it would be uninterested in viatopia, and perhaps Deep Reflection as well, given sufficient epistemic support to bridge the inferential distance (link).
Policymakers are generally interested in the welfare of their constituents. It is obvious that increasing humanity’s intelligence and wisdom in how we interact with powerful advanced technologies will be helpful for ensuring such technologies are safe and beneficial, should they arise.
Policymakers are becoming increasingly open to the idea that extinction risk from advanced AI is a serious issue. As more advanced capabilities arise, both the dangers and the opportunities of powerful AI, and of the other technologies it will enable, will become increasingly clear.
Once AI alignment is achieved, AI safety policy advocates can shift their focus to related priorities. Deep Reflection stands out as a leading contender, in part because it may be essential for mitigating post-alignment existential risks by ensuring careful deliberation before society takes irreversible actions. Preparing AI safety advocates or bespoke Deep Reflection advocates in advance seems crucial, as the window of opportunity for shaping policy may be brief.
Part of what makes Deep Reflection policy work difficult is that it requires policymakers to take seriously not only the risks and opportunities of advanced AI, but also concepts such as path dependence and lock-in, the deep future, and the plethora of crucial considerations. This is essential for implementing viatopia effectively and avoiding an approach that, while close, does not ultimately lead to Deep Reflection.
Addressing this challenge will require developing technical and policy mechanisms that are highly feasible, effective, and actionable; i.e. developing a version of viatopia and policy levers that are likely to lead to the best possible outcome and yet also be highly attractive to all stakeholders.
One factor that could ease Deep Reflection policy work is AI epistemic technology that is very good at bridging inferential gaps to help policymakers understand these complex ideas in an intuitive and appealing way. Epistemic tech could also help policymakers to forecast and deeply understand the implications of the different paths we could take. These could be technologies that policymakers use for all decisions, or they could be used by Deep Reflection policy advocates to develop proposals and ways of presenting these ideas so that they are rigorous, yet appealing and easy to understand.
Another useful tool could be advanced treaty-making AI (link) that can analyze an incredible number of possible treaties that satisfy all political parties, within government and between many governments. Such technology could help discover feasible viatopia proposals despite internal political division and vastly differing interests of many different governments.
Why viatopia – One question that emerges in the sphere of governance and government is why viatopia is necessary in the first place; why not just continue with the same liberal representative democratic capitalist system, rooted in the U.S. Constitution, perhaps with some safeguards against the worst-case catastrophes?
I believe the main reason is that the current system is not built to "foom" (link) well – i.e., when powerful AI massively accelerates certain components of the existing system, things may go off the rails. While it is certainly possible that the current system could navigate explosive technological growth and converge on Deep Reflection given the right enhancements – such as advanced AI helping policymakers and key decision-makers – it also seems quite possible this won't be enough.
Some things seem likely to foom hard under the current system in ways that could be very harmful: profit maximization; addictive products such as attention-maximizing apps; other products with high negative externalities; research and development of powerful new technologies; people, organizations, and institutions whose primary goal is increasing their own power or pursuing other selfish or narrow goals; dangerous individuals and terrorist groups; etc. Finding new mechanisms for managing these will be very important.
Fortunately, there are substantial process enhancements that could be made with advanced coordination technology, AI epistemic and education technology, AI wisdom/moral technology, AI therapeutic/relationship-coaching technology, etc., that would significantly improve the current trajectory if implemented strategically and effectively.
Overall, it seems like we could give ourselves a much better chance at converging on Deep Reflection than we have with the current default system. While the current system is a sturdy base and most or even all of it may remain, it seems like there are many other things that would be good to add to society to make foom more likely to go well.
Moreover, while the US is likely the most important country for initiating viatopia, we will likely want international cooperation, as implementing viatopia in a single country may make that country uncompetitive on a global scale, especially if other countries take a more unrestrained approach to powerful AI. New international governance mechanisms will likely be necessary to ensure collective Extinction Security and maintain a peaceful balance of power between countries; and if humanity can collectively agree that we want to move collaboratively toward the best future possible, this makes coordination on viatopia desirable as well.
Such a dramatic shift in the global order due to the development of powerful AI will create a massive opportunity to create the kind of world we desire. A new bespoke global system that utilizes the enormous amount of newly available intelligence may be desirable. This system could be deliberately designed to foom in ways that are as robustly safe and beneficial as possible, taking full advantage of existing and emerging technologies in the foundations of its operation. It could be carefully designed to ensure that moral and epistemic progress continues and that our power never outpaces our wisdom, and crafted to enable rapid responses to all types of emerging threats, so that any dangerous or deleterious trend can be acted on well before anything goes off the rails. The system could be highly flexible, just like our current system, allowing individuals and society incredible freedom of action, while also ensuring that we have a clear thesis as to the kinds of things which must or must not happen for society to continue evolving in a good direction: for example, ensuring Existential Security; moral progress, with a very flexible yet by no means meaningless definition of "moral progress"; increasing amounts of both freedom and cooperativeness; increasing amounts of both diversity and societal cohesion; increasing amounts of fulfillment and well-being; etc.
At some point it might be possible to have aligned powerful AI take over parts of this system, have humans be very happy with the results, and simultaneously converge on Deep Reflection – perhaps AI could even become more effective at governing humans than humans themselves. Nonetheless, we may not fully trust the AI, or humans may simply prefer governing themselves. It seems likely humans could govern themselves and gain most of the benefits of AI simply by having "AI in the loop," making sure that we don't implement some galaxy-brained idea that sounds great but will in fact predictably end in catastrophe. Having AI in the loop may be inevitable, as humans will likely increasingly seek out the advice of advanced AI and may even come to feel handicapped without it. It could be high-leverage to ensure that advisor AI also possesses advanced artificial wisdom (AW) (link to "on artificial wisdom") – perhaps something like highly advanced moral reasoning, well-calibrated epistemics, a security mindset, and a strong desire to know the truth and act in the best possible way – while allowing humans full agency. Advanced AI/AW advisors could help us achieve our intended outcomes through methods that are safer, more effective, and even more appealing than our original proposals.
One governance framework worth mentioning is "ongoing reflection," in which a highly effective reflective process is implemented on an ongoing basis. "Good Reflective Governance" by Owen Cotton-Barratt describes just such a process. It differs significantly from end-stage Deep Reflection – some grand, final, self-complete process that figures out everything we could possibly need to know about how the future should go – instead conceptualizing the goal as an ongoing process of very careful reflection at each step along the way, in order to make wise decisions at all points. Such a process is an important example of the kind of mechanism that could help us get from where we are to where we ultimately end up in as safe, intelligent, and wise a way as possible. It is even possible that such a process may be preferable to the all-at-once process of end-stage Deep Reflection.
Again, the ultimate aim – ensuring that AI is increasingly beneficial to humanity, bolsters our collective wisdom, and prevents further existential risks – aligns with values most policymakers already endorse. Viatopia represents a natural next step in humanity's ongoing trajectory of epistemic and moral progress, not a radical departure, meaning it shouldn't be too hard to make proposals that are intuitive and politically feasible.
There are several ways that governments could help with Deep Reflection. They could:
- Create policies or incentives that drive AI labs to push in the direction of viatopia, even if there is not initially sufficient demand to justify such applications commercially
- Work diplomatically with other countries or create treaties that ensure robustly safe and beneficial use cases of AI are prioritized as global public goods, and perhaps even begin work on a treaty for governing advanced AI, such as some form of viatopia
- Collaborate with labs and Deep Reflection policy advocates to work out what specific applications, broad mechanisms, and overall frameworks of viatopia would be acceptable to government, and how these can be tested and scaled up
- I've heard proposals such as a Manhattan Project for AI, a CERN for AI, and an Intelsat for AI (link). What I would most like to see is something like a (perhaps international?) "Bell Labs" (link) for AI/viatopia
The public – As mentioned, most people don't care much about optimizing the deep future, but this need not be a problem: humankind, left to its own devices, tends to collectively push in a good direction, and this trajectory can be improved further when wisely governed with the help of advanced AI, curtailing potentially dangerous foom-y dynamics and differentially accelerating moral and epistemic progress, while still meeting and far exceeding most people's desires and expectations.
Before viatopia is initiated, it is essential to ensure security from extinction or power grabs, which could be imminent given the flood of powerful technologies and global instability unleashed by advanced AI.
Perhaps the first order of business within viatopia should be to create a world of abundance and meet everyone’s basic needs. Advanced AI, including robots, should enable rapid automation of labor and the harnessing of far greater resources, so that everyone can easily have their basic needs secured, including things like food, clean water, shelter, ongoing financial security, and relative freedom of action. Health and indefinite life extension technology would also be an initial top priority. With radical abundance of labor and intelligence, problems like farmed animal welfare, clean energy, climate change, and pollution also seem relatively easily soluble.
When advanced AI has helped us solve these problems, more complex human problems and pursuits concerning education, equality, violence, crime, global cooperation, global peace, safe and strong economic growth, philosophical and scientific truth, aesthetic and artistic achievement, purpose and meaning, well-being, relationships, community, wisdom, moral growth, self-actualization, and self-transcendence can be addressed.
The tremendous human drive toward meaning, purpose, and altruism freed up by solving the first set of goals will make it much easier for humans to work together (with the help of AI) on the second set of goals. This "fungibility of good" (link) mechanism, in which human intention is freed up for increasingly higher pursuits, could be thought of as a global version of moving up Maslow's hierarchy of needs. (link)
As humanity moves up the hierarchy of value, it seems there may be increasing concern for the deep future and the ultimate fate of humanity. While this is a very basic toy model that is likely inaccurate in at least some respects, it seems quite possible some viatopian trajectory roughly like this could occur and be largely satisfactory to most people while simultaneously converging on Deep Reflection.
While I have written the first draft of a book on various broad societal mechanisms for viatopia (which I called "paths to utopia"), this is beyond the scope of this essay (link or a footnote and link or something). One proof-of-concept example of an obvious yet very useful tool is an AI mentor/friend/therapist/coach/advisor/tutor/teacher that is highly intelligent but also very morally wise and able to help guide people across all areas of society in making good decisions. Such tools could be highly beneficial for well-being (1, 2), epistemics, and moral progress. This is not a particularly sci-fi application of advanced AI; indeed, one Harvard Business Review study finds AI therapy/companionship to be the number one use case for generative AI as of 2025, with enhanced learning and various coaching applications also rated highly.
When communicating this vision to the public, it is not necessary to talk about weird sci-fi possibilities and the deep future. What is most essential is that viatopia is about putting ourselves on a path to create increasingly better futures, and being sufficiently careful that, hopefully, we can eventually achieve the best future possible. Most people would be happy to have a better future, and once they have one they will likely want an even better future; this process can continue naturally as we get progressively closer to the best future possible.
While it is essential to the goal of Deep Reflection that we preserve the possibility of creating the "best future possible," this may not be at all what most people will desire or choose – or at least, most people may feel a great deal of trepidation at the prospect of instantaneously jumping to the best future possible. Indeed, the author of this essay feels that he might prefer to live in a relatively modest utopia and experience the range of "fun" (link) experiences such human and human-adjacent utopias have to offer, far from the outer bounds of optimization – at least for a while.
The vast scale of the deep future makes it likely possible to satisfy all of the people who want to stay in such modest and moderate utopias until the end of time. It should be trivial to guarantee trillions of trillions of trillions of human-lifetime equivalents – so many that further expansion would offer no further opportunities for trade, all possible technologies having already been developed and all the diversity of possible desirable experiences having been experienced countless times over.
The mind-boggling size of the pie makes such “existential compromise” (link to short form or something?) an important option for satisfying people who are not interested in optimization, even as viatopia progresses. (((Footnote (or link to my short form or post on this): Diverse experiments with the emergent benefits of different kinds of worlds could also be an important element of converging on the best types of worlds.))) Existential compromise could include a guarantee that this possibility is left open for people who choose it, but that Deep Reflection will also be pursued, with a restriction that leaving the solar system, or perhaps galaxy, will be strictly prohibited until Deep Reflection is complete and people can collectively make an informed decision on what to do with our cosmic endowment.
I do not intend to give the Pollyannaish impression that satisfying all people while converging on Deep Reflection will be easy. A few of the obstacles that will need to be confronted:
- The hedonic treadmill – the more people have, the more they want, making it increasingly difficult to keep people satisfied as time goes on
- Human competitiveness – Humans have tribal tendencies, seeking to establish status and social value by out-competing others in zero-sum games, which again requires ever-escalating resource usage
- Freedom gone astray – Freedom is highly desirable, yet the more freedom people have, the more difficult it is to ensure they don’t do anything harmful, and that they move in primarily morally positive directions
- Impatience - It may be ideal to allow moral progress a great deal of time, but many actors may be eager to expand technological capacity, settle space, etc.
- Opposition – Some people may be highly resistant to changing their worldview or values, and may actively oppose the idea of “moral progress.” With far greater resources, longevity, and freedom to make their own choices, it may be much easier for people to maintain potentially suboptimal values over long periods of time.
- Distributional shift – Relatedly, the elimination of survival pressure would create a dramatic, unpredictable distributional shift for humans, which may leave people increasingly epistemically ungrounded and cause value drift away from the forces that forged human values and created our current age of relative abundance and progress
- Evolutionary and reproductive pressure – If inclusive reproductive fitness does not align with moral progress, it will be a very difficult force to work against should viatopia progress over many generations. Moreover, ideologies that promote maximal reproduction may have a strong advantage, which could create problems given a newfound, potentially radically unconstrained ability to reproduce
- Competing maximizers – There may be formidable power-seeking groups seeking to maximize goals other than Deep Reflection
- Stagnation – Given levers for doing so, a group with concentrated power, or even a unified global democratic mob, might decide to lock in current or near-future suboptimal values, or might naturally converge to a locked-in suboptimal value attractor state given enough time, rather than pursuing maximum moral progress. This could be due to a preference for value stability or to local moral maxima that are difficult to break out of
- Chicken and egg problem – Without already knowing what direction moral progress should move in, it is difficult to know how to objectively measure and incentivize it to ensure it occurs
- Mechanism robustness – An ideal viatopia mechanism would have virtually no chance of failing to converge on Deep Reflection despite numerous challenges. Such a mechanism should be virtually indestructible, and have no mistakes that would cause it to be a catastrophe in and of itself
- Public approval – Ideally, most people should see this mechanism as an improvement on the way things would have counterfactually been
As mentioned, I have written extensively on how to address some of these issues, and furthermore, technical meta-reflection work is largely about figuring out how to navigate these challenges. Because advanced artificial intelligence could be used for both initial viatopia design and ongoing governance, it seems likely these challenges are solvable. What is needed, at a minimum, is a clear enough vision to achieve stakeholder buy-in, and enough knowledge of the obstacles and desired attributes of a solution so that we can start using advanced intelligence to work on the problem as soon as it is capable.
People in the general public who support Deep Reflection could:
- Request viatopia/Deep Reflection applications and policies from AI labs and policymakers
- Start AI ventures or run for office with the aim of pushing Deep Reflection forward
- Help perform meta-reflection work on how to create effective seed reflection
- Raise awareness around viatopia and Deep Reflection as good options for what to do with advanced AI through social media, one-on-one communication, and through public advocacy groups
- When the time comes, vote for politicians and policies most likely to increase the likelihood of viatopia and Deep Reflection
Viatopia and the Centrality of Moral Progress – One crucial point to add about viatopia is that its quintessential component seems to be the trajectory of moral progress, which could also be called progress on values or wisdom. If we have the right values we can always figure out how to achieve them, but if we have the wrong values we might go farther and farther down the wrong path until any impetus to change our values back has been permanently extinguished. (Link to Beckstead?)
This could be one of the main reasons some technologies need to be slowed down or avoided, and other technologies should be differentially accelerated. This could also include upgrading social institutions to more effectively promote moral reflection and values learning. Examples of such systems include political systems, economic systems, education systems, parenting/child-care systems, and mental health systems. (link to my ways to save the world pieces if desired.)
Some values which seem important to promote include:
- Altruism, doing good; especially pursuing the good de dicto (or “basing” for short)
- Sentience, increasing wellbeing, decreasing suffering
- Epistemics, truth seeking, open-mindedness
- Individual freedom/non-coercion
- Free speech
- Positive-sum, non-negative-sum, paretotopianism, cooperativeness, multi-player strategy
- Option value
- Moral/values progress itself
- Diversity
- Unity
- The deep future, cosmopolitanism, non-space-bias & non-time-bias, moral circle expansion, cognitive empathy
- Opportunity cost ethics (link)
- Scale sensitivity
- Efficiency/effectiveness
- Good communication and relationships
- Honesty, authenticity
- Kindness
- Love, empathy
- Self-actualization, self-development, being the best person one can be
In the next essay, "Shortlist of Longtermist Interventions," I present dozens of concrete high-leverage Better Futures interventions for moving toward viatopia, organized by the key principles established in Introduction to Building Cooperative Viatopia. These interventions span fellowship programs and incubators, research automation tools, coordination platforms, value reflection infrastructure, field-building initiatives, and AI tools designed to compound in effectiveness over time. The essay addresses the gap between longtermist theory and practical implementation infrastructure, demonstrating the breadth of concrete work we could be doing right now to move from philosophical arguments about viatopia to actual institutional reality.
