
In this post:

  1. How can we make it so that people act in the best interest of all people (and other moral patients) instead of just their own interest?

If you see some problem with the presented idea, please comment and let's discuss it.

Different worlds

Selfishland

Imagine a country called Selfishland. In that country, everyone is selfish.

Because of that, everyone is unhappy. Consider a situation where person A wants something a lot, person B wants the opposite only a little, and the outcome depends on person B. Person B chooses what is good for them, which is suboptimal for their cumulative utility. And when the roles are reversed, person A will also choose what is good for them, which again results in lower-than-optimal cumulative utility.
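The utility argument above can be sketched as a toy calculation (all utility numbers are hypothetical, chosen only to illustrate the asymmetry):

```python
# Toy illustration of the Selfishland argument (hypothetical utility numbers).
# In each situation, the "decider" chooses between two outcomes; the other
# person cares much more about the result than the decider does.

# (decider's utility, other person's utility) for each available choice
situations = [
    {"selfish_pick": (1, 0), "altruistic_pick": (0, 10)},  # B decides, A wants it a lot
    {"selfish_pick": (1, 0), "altruistic_pick": (0, 10)},  # roles reversed: A decides
]

def total_utility(strategy):
    """Cumulative utility across both people and both situations."""
    return sum(sum(s[strategy]) for s in situations)

print(total_utility("selfish_pick"))     # everyone selfish -> cumulative utility 2
print(total_utility("altruistic_pick"))  # everyone altruistic -> cumulative utility 20
```

Each decider gains 1 by choosing selfishly but forgoes 10 for the other person, so both end up worse off than if both had deferred.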

In that land, nobody rewards anyone for being altruistic, so nobody has an incentive to change. So, people keep being selfish.

Visualization of Selfishland:

Everyone is selfish -> Nobody gives money to other people to reward for being altruistic (or rewards altruism in a different way) -> Nobody has incentive to be altruistic -> Everyone is selfish -> ...

When that world faces AGI (artificial general intelligence), everyone rushes toward AGI, because they know that the first person to reach AGI can use it to take control over the world and stay in power forever. They don't pay enough attention to the risks and harms of AGI, and they end up with an AI catastrophe.

Altruisticland

Now, imagine a second country called Altruisticland. In that country, everyone is altruistic: they always choose the action that maximizes the cumulative utility of all citizens.

Because of that, everyone is happy, because all actions are optimized for cumulative utility.

In that country, people reward others for taking actions that are good for others by giving them money. That includes the act of rewarding itself. Because of that, everyone has an incentive to stay altruistic: if they stop being altruistic, they will stop receiving money.

In that country, if a person takes an action that benefits others at a cost to themselves, they receive a reward from other people. In that case, the person has an interest in taking the altruistic action if and only if the action benefits the other people more than the cost the person pays by taking it. The entire reward doesn't have to come from just one person; it can come from many people.
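The incentive condition above can be written as a small check (the cost, benefit, and reward amounts are hypothetical):

```python
# Sketch of the incentive condition in Altruisticland (hypothetical numbers).
# An action costs the actor `cost` and creates `benefit` for others.
# Others reward the actor with some amount between `cost` and `benefit`,
# which is only possible when the benefit exceeds the cost.

def worth_taking(cost, benefit, reward):
    """The actor gains if the reward exceeds their cost,
    and the rewarders gain if the benefit exceeds the reward."""
    return reward > cost and benefit > reward

# An action benefiting others by 10 at a personal cost of 3,
# rewarded with 5 (possibly pooled from many people):
print(worth_taking(cost=3, benefit=10, reward=5))   # True: everyone gains

# If the cost exceeds the benefit, no reward level makes everyone better off:
print(worth_taking(cost=10, benefit=3, reward=5))   # False
```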

Visualization of Altruisticland:

Everyone is altruistic -> Everyone gives money to other people to reward them for being altruistic, including rewarding them for rewarding -> Everyone has incentive to be altruistic -> Everyone is altruistic -> ...

When that world faces AGI, people pay enough attention to the risks of AGI, because they know that they won't receive money from other people if they selfishly rush toward AGI while ignoring the risks. That world handles AGI without any catastrophe.

How to make the world like Altruisticland

Current state of the world

The world we live in is somewhere between Selfishland and Altruisticland. In our world, there are some incentives to care about others, such as monetary trade (if you create a product or service that is useful to others, you can trade it for money) and the legal system (if you commit a crime, you get penalized).

However, there are still actions that are not incentivized enough. For example:

  1. Projects that create a public good benefiting a large group of people to a small extent, as opposed to products or services that benefit a small group of customers a lot. That includes truly decentralized services that intentionally have no moat, to avoid concentration of power.
  2. Contributing new information (discoveries or ideas) to the body of knowledge (because of the public goods problem, although to some extent people are rewarded through a combination of grants and the Matthew effect).
  3. Reviewing and popularizing new discoveries or ideas.
  4. If a small startup creates a useful product, a big company can often copy the idea and steal the market. The small startup contributed the most to making the world better, but it's the big company that makes the money.
  5. Caring about certain risks and harms. Lawmakers are too slow (as a result of an imperfect system of government) and too reactive to create the right laws that would incentivize people to care about those risks and harms.

How to make the change

What do we need to do to make the world more like Altruisticland?

The challenge

The problem is that if all other people act like citizens of Selfishland, then a single person also has an interest in acting like a citizen of Selfishland, because if they switch to acting altruistically, that altruism won't be rewarded. That results in a hard-to-escape vicious cycle.

Using game theory terminology, we could say that in the human world game there are many equilibria: 1. everyone acting selfishly, 2. everyone acting altruistically and rewarding altruism, 3. anything in between. An equilibrium is a situation in which no player has an interest in changing their strategy, assuming that all other players keep theirs. Being in the 2nd equilibrium would be better for everyone.

Textbook game theory says that once people are in some equilibrium, they will remain in that equilibrium, because nobody has an interest in changing their strategy.
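A minimal two-player coordination game (with hypothetical payoffs) shows how both states can be equilibria in the textbook sense:

```python
# Toy 2-player symmetric game (hypothetical payoffs) in which both
# "everyone selfish" and "everyone altruistic" are equilibria, but the
# altruistic one gives every player a higher payoff.

# payoffs[my_strategy][their_strategy] = my payoff
payoffs = {
    "selfish":    {"selfish": 1, "altruistic": 2},
    "altruistic": {"selfish": 0, "altruistic": 3},
}

def is_equilibrium(strategy):
    """True if nobody gains by unilaterally deviating
    when both players play `strategy`."""
    current = payoffs[strategy][strategy]
    return all(payoffs[dev][strategy] <= current for dev in payoffs)

print(is_equilibrium("selfish"))     # True: deviating to altruism pays 0 < 1
print(is_equilibrium("altruistic"))  # True: deviating to selfishness pays 2 < 3
```

Both states are stable against unilateral deviation, yet the altruistic equilibrium pays 3 per player instead of 1: exactly the "stable but suboptimal" structure described above.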

The solution

However, textbook game theory doesn't take into account that people observe the strategies of other players and then adapt their own strategy accordingly.

More advanced game theory concepts, like reputation models, Bayesian fictitious play, or Perfect Bayesian Equilibrium, take that into account. According to those concepts, if enough people switch their strategy, other people will adapt to what they observe. The more people start to act like citizens of Altruisticland and reward altruism (including rewarding the act of rewarding itself), the stronger the incentive for others to act like citizens of Altruisticland, and the more people will do so.
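This adaptation dynamic can be sketched as a toy best-response simulation, loosely in the spirit of fictitious play (the payoff functions and the resulting 0.5 tipping point are illustrative assumptions, not derived from any specific model):

```python
# Toy best-response dynamics (hypothetical payoffs): each round, everyone
# best-responds to the observed fraction p of altruists.
#   payoff of being altruistic: 3 * p   (rewarded by the altruists)
#   payoff of being selfish:    1 + p
# Altruism becomes the better response once p > 0.5, so if enough people
# switch first, the rest follow and the population tips.

def tip(p, rounds=50, step=0.1):
    """Evolve the fraction of altruists as people adapt each round."""
    for _ in range(rounds):
        if 3 * p > 1 + p:               # altruism is the better response
            p = min(1.0, p + step)
        else:                           # selfishness is the better response
            p = max(0.0, p - step)
    return p

print(tip(0.6))  # enough early movers -> population tips to all-altruistic
print(tip(0.3))  # too few early movers -> population falls back to all-selfish
```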

The first people who switch their strategy will have to pay an altruistic sacrifice. So the question is: does the benefit of switching to Altruisticland outweigh the cost of that sacrifice?

The benefit of switching depends on how long we are going to live after the switch. For example, if we assume that we will live forever, the benefit is infinite, because we will experience the benefits of the switch forever.

So, if we're going to live long enough, then it's worth paying the altruistic sacrifice.

Additionally, that altruistic sacrifice is going to be rewarded in the end, since once the switch is made, altruism will be rewarded. So it's not really a sacrifice.

Implications of LEV

Because artificial intelligence can assist people in improving medicine, and given the potential of achieving LEV (Longevity Escape Velocity), it's possible that we will live super long. In that case, we should be willing to make the altruistic sacrifice to make the world more like Altruisticland.

It's also possible that we won't live super long.

But if we want to maximize our expected utility, we need to focus on what our lives will be like if we live super long. If we don't live super long, whatever we experience will be our experience only for a short time; if we do live super long, what we experience will be our experience for a long time.

For that reason, if there is a non-trivial chance that we will live super long, then it pays off to make an altruistic sacrifice now to make the world better.
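A back-of-the-envelope expected-utility calculation illustrates why the long-lifespan branch dominates (the probability, lifespans, and utility numbers are all hypothetical):

```python
# Back-of-the-envelope expected utility (all numbers hypothetical).
# Suppose a 10% chance of living "super long" (say 10,000 more years)
# and a 90% chance of an ordinary lifespan (say 50 more years).

p_long = 0.10
years_long, years_short = 10_000, 50

def expected_utility(yearly_utility_long, yearly_utility_short):
    """Expected lifetime utility across the two branches."""
    return (p_long * years_long * yearly_utility_long
            + (1 - p_long) * years_short * yearly_utility_short)

status_quo = expected_utility(1.0, 1.0)

# The sacrifice lowers yearly utility slightly in the short branch (the cost)
# but raises it in the long branch if the switch to Altruisticland succeeds:
with_sacrifice = expected_utility(1.1, 0.9)

print(status_quo, with_sacrifice)  # the long branch dominates the comparison
```

Even a 10% chance of the long branch makes it carry the bulk of the expected value, so a small near-term sacrifice that improves the long branch comes out ahead.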

Gradual change

Making the altruistic sacrifice is also somewhat risky. What if we make an altruistic sacrifice, but the rest of the people don't follow?

To mitigate that risk, we can make a small altruistic sacrifice and then gradually increase it as the world follows. For example, we can always make a 1% bigger altruistic sacrifice (which includes rewarding altruistic people) than the average person; people will then have an interest in following. Once people increase their altruism, we again increase ours by 1%.

The idea is to always be slightly more altruistic than average. If other people match your altruism level, you will gradually arrive at 100% Altruisticland.

The risk is minimal because if people don't follow your lead, you will only sacrifice that extra 1%.
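The ratcheting strategy can be sketched as a simple loop (the 0-to-1 altruism scale and the starting average are hypothetical modeling choices):

```python
# Sketch of the "always slightly above average" strategy (hypothetical model).
# Altruism is a level in [0, 1]. The leader sets their level one step above
# the current average; if others then match the leader, the average ratchets
# upward until the population reaches full altruism.

def gradual_escalation(start_avg=0.2, step=0.01):
    """Ratchet the population average up by `step` per round,
    assuming others always match the leader's level."""
    avg = start_avg
    rounds = 0
    while avg < 1.0:
        leader = min(1.0, avg + step)  # stay slightly above average, capped at 1
        avg = leader                   # others match the leader's level
        rounds += 1
    return avg, rounds

avg, rounds = gradual_escalation()
print(avg, rounds)  # reaches full altruism after roughly 80 rounds
```

The downside is bounded: in any round where others stop matching, the leader is only ever one `step` above the average.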

Observability

The above assumes that we can always observe what people do.

Knowledge of who deserves reward

The person who rewards needs to know who deserves the reward. We don't have perfect knowledge of that, but we have partial knowledge, and that is sufficient to at least partially solve the problem.

A person who chooses to reward, but doesn't know whom to reward or doesn't have time to think about it, can donate money to an impact fund where other people will decide how to spend it.

However, it's important that there should be many funds like this, and not just one or a few. Because if everyone donates their money to one impact fund, there will be too much power concentrated in the hands of too few people.

The exception is a fund governed in a sufficiently democratic way; in that case, a single fund is fine. But I don't know of any system of government that would be good and trustworthy without relying on decentralization.

In the future (if the future goes right), I expect that there will be decentralized prediction markets that provide information about who had what impact. For now, prediction markets don't provide such information, but I expect they will evolve in that direction.

Cheating the system

People also need to know that someone has actually received a reward.

In general, there might be some ways to cheat the system due to incomplete information.

However, the needed information will be known in the future. As time goes on, we have increasingly more knowledge about what happened, because we have better technology. For example, 100 years from now, if all data is preserved, we will have superintelligence, and that superintelligence will be able to infer from the data what happened today and what people's intentions were.

For that reason, if we build a precedent of rewarding people to the best of our knowledge, people have an interest in behaving well today, because they will be rewarded in the future in proportion to how well they behave today. It might be possible to cheat today and gain something in the short term, but in the long term the truth will come out and fairness will be restored.

Of course, if power in the future belongs to a small group of people, that won't happen. But if we set the right precedent of acting in an altruistic and fair way, we will maneuver into the right (and democratic) future.

Equality

Someone might criticize the idea with the following logic:

If people are incentivized toward altruism with money (or other resources), that system will benefit wealthy people more than the poor, because wealthy people have more money (or other resources), so people are rewarded more for doing what wealthy people want.

That is true under certain circumstances. But if there is a lot of uncertainty about who will hold power in the future, or about whether there exists a stronger agent that we don't know of, then people have an interest in altruistically giving money (and/or other resources) to poor people. This is something I can't briefly explain here, but I have written about it in the following posts:

Is it beneficial for people to be nice to weaker agents

How to stop inequality from growing (detailed)

We live in such circumstances that there is a lot of uncertainty.

Decentralization

It must be decentralized: if there were one fund into which everyone puts money, and that fund redistributed it to those who deserve and need it, then the people in charge of that fund would have too much power.

So, it must be a norm, and not just one institution that does this.

Shouldn't it be the role of a government to provide rewards and funding?

I simply don't believe that our systems of government are good enough. For that reason, I don't believe that governments would do this in a good and fair way.

What to do

So, what is the exact thing that we need to do?

In summary, you should:

  1. Give money to people / companies / organizations / anything else that deserves and/or needs it. That giving can take the form of investing in companies.
  2. Do it publicly (e.g. announce it on social media) so that other people can notice that the world is moving in the direction of altruism being rewarded. Ideally, let people apply for a reward (e.g. in the comments of a post on X).
  3. Repeat it regularly.
  4. Share the reasoning behind why you're doing it and encourage others to do the same. You can link to this post.
  5. If other people start to do the same, give more money.

I'm currently in the process of doing what I have written above, and I invite you to do the same.

For clarity, the goal is to strengthen the trust that good work will be rewarded by strengthening the precedent of good work being rewarded. "Good work" includes rewarding good work itself.

Here is what to do in more detail (but it's not necessary to read it).

Answering the most anticipated critique points

If you think the idea won't work for any of the reasons below, I invite you to click the corresponding link and read my answer.

"The first-movers don't have sufficient incentive to act altruistically"

I have also written another post that describes the same idea in different words: Effective altruists would be more effective if they did this

In that post, I have also addressed these anticipated critiques:

Lack of incentive for first movers

Inequality

Verification/observability

