idea21

“you have to take seriously every position with huge implications provided it is not extremely implausible.”

Some views are ridiculously implausible even if you couldn’t out-debate some of their advocates.

 

So it all depends on what counts as plausible, even when you can't decisively establish what is implausible and what is not.

There may be many positions with huge implications, but you can't take them all seriously. You have to choose. And to do that, you have to judge: what criteria should we follow in deciding whose technical expertise to trust, particularly in speculations as complex and as far removed from the present as longtermism?

In the example of nuclear risk, let's remember what happened in 1962. Was JFK crazy to force a crisis that could have led the world to destruction?

When we work from a vision, we can measure progress against whether actions move us closer or farther from it, not whether a specific forecast proved to be correct. This enables continuous course correction without losing sense of purpose. 

 

A very lucid view. The vision is constructed from what we know about our current conditions, so it is more realistic and tailored to our knowledge. Long-term predictions are often erroneous because they ignore logically unforeseeable circumstances (technological and cultural changes).

But we must not fall into the error of constructing a "vision" from entirely contemporary elements. We must know how to extract the essential and promising from the present. Current progressivism, for example, is based on a political model that is probably exhausted. Let us remember that Voltaire and Montesquieu advocated humanist development... but they could not foresee the structural political changes (universal suffrage, political human rights, etc.) that this would entail.

Is it actually possible to increase human compassion and do we have any reason to believe that investment in this would provide good value or could result in significant progress? 

 

Everything seems to indicate that the cultural evolution of civilization is moving in the direction of increasingly compassionate emotions. Even in recent times, we can observe how ethical conceptions urging action to remedy the suffering of our fellow human (and non-human) beings have grown more popular almost year by year. The very emergence of the EA movement points in this direction.

However, we are far from achieving the ultimate goal of a prosocial planetary culture, in the sense of the development of an ethos of benevolence, compassion, altruism, and total control of aggression based on rational, enlightened principles. And this is the fundamental problem that needs to be debated. Progress toward a fully compassionate society can no longer be linear (improvements in politics, education, and habits). It will likely require a rupture. And that always entails an epistemic conflict.

increasing compassion is not a stated priority of key EA organisations

Not yet, unfortunately. But interventions in that vein appear even in this very Forum. From a utilitarian perspective, no one can deny that increasing the number of altruistic people is the best means of increasing altruistic action at all levels...

The epistemic conflict: we would have to accept the exhaustion of political change to achieve the highest humanistic goals.

All political change implies acceptance of the system of legal coercion to improve social behavior. Therefore, it will never renounce the mechanisms of aggression, repression, and the instrumentalization of the individual for the supposed common good.

However, there is evidence (or at least a very reasonable hope) that moral changes (moral evolution) originate through non-political mechanisms: ideological movements that, supported by new cultural symbolisms (for example, the very concept of "compassion" or more recent creations such as "empathy" or "effective altruism"), develop lifestyles of a higher moral standard using a wide variety of psychological strategies selected through "trial and error." But until now, all movements to improve moral behavior have developed within the framework of religious traditions (monasticism, Puritanism). Changing this is the task at hand, and it requires a paradigm shift.

Increasing compassion at the cultural level solely through youth education or popular pedagogy can never match the transformative power of the ancient religions. Isn't it a fact that all nations where secular humanitarianism thrives... are those with a historical past of reformed Christianity?

Very valuable post, thank you, RedTeam.

All of this shows that much more can be done to increase the number of altruistic people, because, after all, if happiness is a motivation... happiness is something that, in general terms, we give to each other.

The factor of technological advancement must also be taken into account. A fully cooperative humanity committed to the elimination of all forms of suffering could have at its disposal technological means as unimaginable today as our current technology would have been to the wise Aristotle more than two thousand years ago.

At least one other recent forum post addresses the issue of increasing motivation for altruistic action.

https://forum.effectivealtruism.org/posts/gWyvAQztk75xQvRxD/taking-ethics-seriously-and-enjoying-the-process

This post also referred to a very enjoyable book about altruistic action, published ten years ago, which I think we should all read.
 

I honestly believe this should be the real, priority long-term question: how to generate a non-political social movement that motivates altruistic action.

I would love for the EA movement, as it exists today, to achieve the ambitious goal of increasing the number of signatories of the GWWC Pledge to "millions"... but I have my doubts that this will happen. And is it fair to the people who are suffering and require altruistic action to simply wait and see, rather than trying anything to accelerate the growth of altruistic action?

In my view, the first step would be to generate a discussion group on this issue (how can we motivate more people to be active altruists?). I imagine an inevitable conclusion would be, above all, to try to generate a social support movement for donors and those who are hesitant about whether or not to be one. 

 I'm also sure EA would seem less like a personal sacrifice if you were surrounded by EAs. 

The "Alcoholics Anonymous" model ("mutual aid") is the most obvious: individualized support and the creation of small groups ("cells") at the local level. It's absurd not to consider the psychological implications of making such a significant change in your lifestyle from what's conventional.

All of this is independent of the speculation—which I find logical—about the possibility of organizing a "behavioral ideology" (so we don't call it "religion") that offers individuals the option of developing a behavioral style based on benevolence, aggression control, altruistic idealism, and mutual affection, all within the framework of enlightened rationality, which, to certain temperaments at least, might be attractive as a source of "personal happiness" (let's not forget that there are many ways to "be happy"). There are historical precedents for such social movements being viable (why they failed is a topic that deserves deep reflection).

Thank you very much, Jens, for sharing your point of view, which I find extremely valuable.

Moral behavior evolves especially when it is part of a lifestyle (ethos). Compartmentalizing moral behavior is not in keeping with human nature. The most effective long-term approach would undoubtedly be one that focuses primarily on developing a compassionate, benevolent, and enlightened lifestyle that is viable as a social alternative. Veganism and the end of animal abuse would be necessary consequences of this.

However, the opposite is not so true, as there are well-known examples of social initiatives in favor of animal welfare linked to intolerant political ideologies as well as less-than-benevolent personal behavioral styles.

Do you see frameworks like mine as useful inputs to the kind of movement you're describing? Even if AI alignment alone isn't sufficient, could it be necessary? If we get AI right, does that make the human behavioral transformation more achievable?

 

I've done something like what you did and asked an artificial intelligence about the social goals of behavioral psychology. I proposed two options: either using our knowledge of human behavior to adapt the individual to the society in which they can achieve personal success, or using that knowledge to achieve a less aggressive and more cooperative society.

"Within the framework of radical behavioral psychology applied to society, the goal is closer to:

  • Improving society (through environmental and behavioral design) to expand efficient social cooperation and reduce harmful behaviors like aggression.

The first option, 'Adapting to mainstream society in order to achieve individual success,' aligns more closely with general concepts of socialization and adaptation found across various fields of psychology (including social psychology and developmental psychology), but is not the distinct, prescriptive social goal proposed by the behaviorist project for an ideal society." (This is Gemini.)

Logically, the AI, which lacks prejudice and relies only on reasoning, opts for social improvement... because it starts from the knowledge that human behavior can be improved based on fairly logical and objective criteria: controlling aggression and encouraging efficient cooperation.

Would AI favor a "behavioral ideology" as a strategy for social improvement?

The Enlightenment authors two hundred years ago considered that if astrology had given rise to astronomy and alchemy to chemistry... religion could also give rise to more sophisticated moral strategies for social improvement. What I call "behavioral ideology" is probably what the 19th-century scholar Ernest Renan called "pure religion."

If, starting with an original movement for non-political social change like EA, a broader social movement were launched to design altruistic strategies for improving behavior, it would probably proceed in a similar way to what Alcoholics Anonymous did in its time: through trial and error, once the goals to be achieved (aggression control, benevolence, enlightenment) were firmly established.

Limiting myself to fantasizing, I find such a diversity of available strategies that it is impossible for me to calculate which ones would ultimately be selected. To give an example: the Anabaptist community of the "Amish" is made up of 400,000 people who manage to organize themselves socially without laws, without government, without physical coercion, without judges, without fines, without prisons, or police... (the dream of a Bakunin or a Kropotkin!) How do they do it? Another example is the one Marc Ian Barasch mentions in his book "The Compassionate Life" about the usefulness of a biofeedback program to stimulate benevolent behaviors.

The main contribution I see in AI is that, although you yourself have detected cognitive biases in its various forms, operating on the basis of logical reasoning stripped of prejudice (unlike flawed human rationality, laden with heuristics) can facilitate the achievement of effective social goals.

AI isn't concerned with the future of humanity, but with solving problems. And the human problem is quite simple (as long as we don't prejudge): we are social mammals, Homo sapiens, who, like all social mammals, have been genetically programmed to be competitive and aggressive in disputes over scarce economic resources (hunting territories, access to mates, etc.). The problem arises when, thanks to technological development, we now have potentially unlimited economic resources. What role do instinctive behaviors like aggression, tribalism, or superstition play now? They are merely handicaps.

Sigmund Freud made it clear in his book: "Civilization is the control of instinct."

However, what would probably be perfectly logical for an artificial intelligence may be shocking for today's Westerner: the solution to the human problem will closely resemble the old Christian strategies of "saintliness" (but rationalist). As psychologist Jonathan Haidt has written, "The ancients may not have known much about science, but they were good psychologists."

Thank you very much for the interest shown in your comment, and for the opportunity you've given me to explore new perspectives on an issue that, in my opinion, could be extremely important and yet is not being addressed, even in an environment that challenges conventions like the EA community.

I'm curious how you'd operationalize "control of aggression" as a distinct pillar or principle. Would it be:

  • A prohibition (like the inviolable limits in Article VII: "no torture, genocide, slavery")?
  • A positive virtue (cultivating non-aggressive communication, de-escalation)?
  • A systems-level design principle (institutions structured to prevent violent conflict)?
  • Something else?

 

Moral values are the foundation of an "ethics of principles," but the problem with an ethics of principles is that it is unrealistic about its ability to influence human behavior. In theory, all moral principles contemplate the control of aggression, but their effectiveness is limited.

Since the beginning of the Enlightenment, the problem has been raised that moral, political, and educational principles lack the power over moral behavior that religions have. We must admit, for example, that despite the commendable efforts of educators, scholars, and politicians, whether liberalism's values of democratic tolerance and respect for the individual can effectively prevail in a given society depends not so much on proposing impeccable moral principles as on whether that particular society has a sociological foundation that makes the psychological implementation of such benevolent and enlightened principles viable in the minds of its citizens. In the end, it turns out that liberal principles only work well in societies with a tradition of Reformed Christianity.

I believe that the emergence for the first time of a social movement like EA, apolitical, enlightened, and focused on developing an unequivocally benevolent human behavioral tendency such as altruism, represents an opportunity to definitively transform the human community in the direction of aggression control, benevolence, and enlightenment.

The answer, in my view, would have to lie in tentatively developing non-political strategies for social change. Two hundred years ago, many Enlightenment thinkers considered creating "secular religions" (what I would call "behavioral ideologies"), but they always remained superficial (rituals, temples, collectivism). A scholar of religions, Professor Loyal Rue, believes that religion is basically "educating emotions." It's about using strategies to internalize "moral values."

In my view, if EA utilitarians want more altruistic works, what they need to do is create more altruistic people. Altruism isn't attractive enough today. Religions are attractive.

In my view, there are a multitude of psychological strategies that, through trial and error, could eventually give rise to a non-political social movement for the spread of non-aggressive, benevolent, and enlightened behavior (a "behavioral ideology"). The example I always have at hand is Alcoholics Anonymous, a movement that emerged a hundred years ago through trial and error, and was carried out by highly motivated individuals seeking behavioral change.

A first step for the EA community would be to establish a social network to support donors in facing the inevitable sacrifices that come with practicing altruism. This same forum already contains accounts of emotional problems ("burnout," for example) among people who practice altruism without the proper psychological support.

But, logically, altruism can be made much more attractive if we frame it within the broader scope of benevolent behavior. The practice of empathy, mutual care, affection, and the development of social skills in the area of aggression control can yield results equal to or better than those found in congregations of the well-known "compassionate religions"... and without any of the drawbacks derived from the irrationalism of ancient religious traditions (evolution is "copy plus modification"). An "influential minority" could then be created capable of affecting moral evolution at a general level.

Considering the current productivity of human labor, a social movement of this type, even if it reached just 0.1% of the world's population, would more than achieve the most ambitious goals of the EA movement. But so far, only 10,000 people have signed the GWWC Pledge.

Which cultural or moral assumptions am I missing?

 

I think something very obvious but extremely important is missing from your "six-pillar Gold Standard of Human Values" if we want to approach morality as a process of behavioral improvement: the control of aggression.

We should view morality as a strategy for fostering efficient human cooperation. Controlling aggression and developing mutual trust is equivalent to a culture of benevolence. We can observe that today there are ("national") cultures that are less aggressive and more benevolent than others; it has therefore been demonstrated that such patterns of social behavior are manipulable and improvable.

Just as Marxists said that "what leads to a classless society is good," we should also say "what leads to a non-aggressive, benevolent, and enlightened society is good." I add the word "enlightened" because it seems true that, based on religious traditions, some largely non-aggressive and benevolent societies can already be achieved; however, irrationalism entails a general detriment to the common good.
