English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
I am omnivorous in my interests, but from a work perspective, I am very interested in the confluence of new technologies and education. As for other things that could profit from assistance, I am trying to teach myself undergraduate-level math and to seriously explore and engage with the intellectual and moral foundations of EA.
Reach out to me if you have any questions about Teaching English as a Foreign Language, translation and, generally, anything Humanities-oriented. Also, anything you'd like to know about Spain in general and its northwestern corner, Galicia, in particular.
I really loved this post, probably both because I agree with the core of its thesis as I've understood it (even if I am an atheist) and because I like the style (not a very EA one, but then again my own background is mostly in the Humanities). I think it's spot-on in its recommendations and in its critical appraisal of what effectively moves most people who are not in the subset of young, highly numerical/logical and ambitious nerds who I'd guess are the core audience of EA. Then again, there's an elitist streak within EA that might say that the value of the movement lies precisely in attracting and focusing on that kind of people.
I found this insightful. I find both communities interesting and overlapping, and while I can also perceive the conflicts at the seams, they seem pretty minor from an outsider's pov. Personally, I feel I share more beliefs and priors with Rationalism when all is said and done, but I see them as mostly converging.
It was my lame attempt at making a verb out of the St. Petersburg Paradox: an Expected Value calculation of the type where I play a coin-tossing game in which, if I get heads, the pot doubles, and if I get tails, I lose everything. The EV is infinite, but in real life you'll end up ruined pretty quickly (a quick simulation at the end of this comment illustrates the point). SBF had a talk about this with Tyler Cowen and clearly enjoyed biting the bullet:
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
I rather assume SBF was a radical, no-holds-barred, naive Utilitarian who just thought he was smart enough not to get caught in what were (from his pov) minor infringements of the arbitrary rules and norms of the masses, and that the risk was just worth it.
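To make the "St. Petersburg paradox you into nonexistence" point concrete, here is a minimal Python sketch of the 51/49 double-or-nothing game from the exchange above. The number of rounds and trials are arbitrary choices of mine for illustration, not anything from the original conversation:

```python
import random

# A rough sketch of the 51/49 "double the Earth or lose it" game from the
# quote above; parameters are illustrative, not from the original exchange.
P_WIN = 0.51        # 51% chance the pot doubles
ROUNDS = 10         # keep playing, double or nothing, this many times
TRIALS = 1_000_000  # number of simulated runs

survivors = 0
total_wealth = 0.0
for _ in range(TRIALS):
    wealth = 1.0  # start each run with one "Earth"
    for _ in range(ROUNDS):
        if random.random() < P_WIN:
            wealth *= 2.0   # heads: the pot doubles
        else:
            wealth = 0.0    # tails: it all disappears
            break
    survivors += wealth > 0
    total_wealth += wealth

# The simulated average is propped up by a tiny handful of astronomically
# lucky runs; almost every individual run ends in total ruin.
print(f"Theoretical EV after {ROUNDS} rounds:   {(2 * P_WIN) ** ROUNDS:.3f}")
print(f"Simulated average wealth:             {total_wealth / TRIALS:.3f}")
print(f"Simulated chance of not being ruined: {survivors / TRIALS:.4%}")
```

With more rounds the divergence only gets starker: the expected value keeps compounding at 1.02 per round while the chance of still being around shrinks geometrically toward zero, which is exactly the sense in which repeatedly accepting the bet "paradoxes you into nonexistence".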
While I agree that people shouldn't have renounced the EA label after the FTX scandal, I don't quite find your simile with veganism convincing. It seems to fail to include two very important elements:
Depopulation is Bad
I mildly agree that depopulation is bad, but not by much. The problem is that I suspect our starting views and premises are so different on this that I can't see how they could converge. Very briefly, mine would be something like this:
-Ethics is about agreements between existing agents.
-Future people matter only to the degree that current people care about them.
-No moral duty exists to create people.
-Existing people should not be made worse off for the sake of hypothetical future ones.
I don't think there's a solid argument for the dangers of overpopulation right now or in the near future, and I mostly trust the economic arguments about increased productivity and progress that come from more people. Admittedly, there are some issues that I can think of that would make this less clear:
-If AGI takes off and doesn't kill us all, it is very likely we can offload most productivity and creativity to it, negating the advantage of bigger populations.
-A lot of the increase in carbon emissions comes from developing countries trying to raise the consumption capacities and lifestyles of their citizens. More people with more Western-like lifestyles will make it incredibly difficult to lower fossil fuel consumption, so if technology doesn't deliver the necessary breakthroughs, it makes sense to want fewer people so that more of them can enjoy our type of lifestyle.
-Again, with technology, we've been extremely lucky in finding low-hanging fruit that allowed us to expand food production (e.g., fertilizers, the Green Revolution). One can be skeptical of indefinite future breakthroughs, the absence of which could push us back into some Malthusian state.
I imagine both, yes. Most current calculations would say the positives outweigh the negatives, but I can imagine how this could cease to be so.
Can't really debate this, as I don't think I believe in any sort of intrinsic value to begin with.
I am trying to articulate (probably wrongly) the disconnect I perceive here. I think 'vibes' might sound condescending, but ultimately you seem to agree that assumptions (like math axioms) are not amenable to disputation. Technically, in philosophical practice, one can try to show, I imagine, that given assumption x some contradiction (or at least something very generally perceived as wrong and undesirable) follows.
I do share the feeling expressed by Charlie Guthmann here that a lot of starting arguments for moral realists are just of the type 'x is obvious/self-evident/feels good to believe/feels worth believing', and when stated in that way, they feel equally obviously false to those who don't share those intuitions, and like magical thinking ('If you really want something, the universe conspires to make it come about', Paulo Coelho style). I feel more productive engagement strategies should avoid claims of that sort altogether, and perhaps start by stating what might follow from realist assumptions that could be convincing/persuasive to the other side, and vice versa.
Exactly. What morality is doing and scaffolding is something that is pragmatically accepted as good and external to any intrinsic goodness, i.e., individual and/or group flourishing. It is plausible that if we somehow discovered that furthering such flourishing required completely violating some moral framework (even a hypothetical 'true' one), it would be okay to do so. Large-scale cooperation is not an end in itself (at least not for me): it is contingent on creating a framework that maximizes my individual well-being, with perhaps some sacrifices accepted as long as I'm still left better off overall than I would be without the large-scale cooperation and the agreed-upon norms.
I wouldn't put mathematics in the same bag as morality. As per the indispensability argument, one can make a fair case (which one can't for ethics) that strong, indirect evidence for the truth of mathematics (and for some parts of it being 'hard-coded into the universe') is that all the hard sciences rely on it to explain stuff. Take the math away and there is no science. Take moral realism away and... nothing happens, really?
I agree that ethics does provide a shared structure for trust, fairness, and cooperation, but it makes much more sense, then, to employ social-contractual language and speak about game-theoretic equilibria. Of course, the problem with this is that it doesn't satisfy the urge some people have to force their deeply felt but historically and culturally deeply contingent values into some universal, unavoidable mandate. And we can all feel this when we try, as BB does, to bring up examples of concrete cases that really challenge those values we've internalized.
They could, but they could also not. Desires and preferences are malleable, although not infinitely so. The critique is presupposing, I feel, that the subject is someone who knows in complete detail not only their preferences but their exact weights, and that this configuration is stable. I think that is a first-approximation model, but it fails to reflect the messier, more complex reality underneath. Still, even accepting the premises, I don't think an anti-realist would say procrastinating in that scenario is 'irrational', but rather that it is 'inefficient' or 'counterproductive' to attaining a stronger goal/desire, and that the subject should take this into account, whatever decision he or she ends up making, which might include changing the weights and importance of the originally 'stronger' desire.
Really liked this post, and as an oldie myself (by which I mean in my 40s, which feels quite old compared to the average EA or EA-Adjacent), I resonated a lot with it. In my case, I am not an 'old hand EA', though: I arrived at it rather circuitously and relatively recently (about 3 years ago).
Some have commented, here or elsewhere, that EA's heavy emphasis on effectiveness means it generally doesn't care much about community building, general recruitment/retention, or group satisfaction, and that when it half-heartedly tries to engage in this, it does so with a utilitarian logic that doesn't seem congenial to the task. One could make a good case, though, that this isn't a bug but a feature: EA as a resource optimizer that, given the importance of the issues it tries to solve or ameliorate, has little time to waste on dealing with less active, talented and effective people and their needs. One senses an elitist streak inevitably tied to its moral seriousness and focus on results.
On the other hand, I feel communities tend to thrive when they manage to become hospitable and nice places that people, to varying degrees, are happy to be in. This is what most successful movements -and religions- pull off: come for the values, stay for the group.
Passion and intellectual engagement also help a lot, but these perhaps vary too much in a way that isn't tractable. Like the OP, I find many of the forum posts dull and uninteresting, but then again, the type of person I am, my priorities, values and interests mean I am probably ill-suited to become anything more than mildly EA-Adjacent, so I don't think I'd be a good benchmark in this regard. I think Will's recent post on EA in the age of AGI does hit the nail on the head in many respects, with interesting ideas for revitalizing and updating EA, its actions and its goals. EA might never match religion's or some groups' capacity for lifelong belonging, but recognizing that limitation, and trying to soften its edges, could make it more resilient.