
An Anecdote

I had to wait at least five days to make this post, after waiting five days to make my previous one. Why? Because I have “negative” karma. Mostly from a single comment I made on another post — I pointed out the irony of debating “AI rights” when basic human rights are still contested in this day and age. I guess people didn’t like that. But no one bothered to explain why.

So this has been my introductory experience on the EA Forum: silence, downvotes, and the lingering impression that dissent isn’t welcome.

What I expected to be a platform for rigorous intellectual debate about doing the most good has instead proven to be an echo chamber of unoriginality, one that suppresses outside-the-box thinking. And that, I think, points to a larger problem.

 

From Observation

As someone interested in AI safety, I’d heard the EA Forum was the place for serious discourse. So I browsed before posting.

After going through several of the most upvoted posts, I started to notice a pattern — sameness in tone, sameness in structure, even sameness in thought. Ideas endlessly repackaged, reframed, and recycled. A sort of intellectual monoculture.

And this sort of culture, if left unexamined, risks reproducing the same narrow, ineffective solutions to the very problems it purports to solve.

 

From Experience

Eventually, I posted my own argument: that unsafe AI is already here because we are unsafe humans. The training data mirrors our history, our culture — all steeped in domination and hierarchy — which becomes an inherent part of the models.

Within hours, it was downvoted. No comments. No engagement. No critique. Just silent rejection.

Why? Maybe because my argument is completely baseless. Or, more likely, because I proposed a view that doesn't fit within the conventional EA framing of “AI safety,” and genuine dissent is simply unwelcome. Either way, it reflects a kind of intellectual self-censorship in which ideas that don't conform to the dominant worldview get brushed under the rug.

 

Insight or Rant?

So what does it say when a movement that aims to “do the most good” reflexively suppresses ideas it doesn’t approve of?

Maybe this post will answer that — will it be ignored, downvoted, or discussed?

Either way, that response will tell us more about the state of the philosophy itself than about the validity of the argument.

Because if effective altruism can’t tolerate challenge or discomfort, then it’s not really effective, and it’s certainly not altruistic.

Comments

Any intellectual community will have (at least implicit) norms surrounding which assumptions / approaches are regarded as:

(i) presumptively correct or eligible to treat as a starting premise for further argument; this is the community "orthodoxy".

(ii) most plausibly mistaken, but reasonable enough to be worth further consideration (i.e. valued critiques, welcomed "heterodoxy").

(iii) too misguided to be worth serious engagement.

It would obviously be a problem for an intellectual community if class (ii) were too narrow. Claims like "dissent isn't welcome" imply that (ii) is non-existent: your impression is that the only categories within EA culture are (i) and (iii). If that were true, I agree it would be bad. But reasoning from the mere existence of class (iii) to negative conclusions about community epistemics is far too hasty. Any intellectual community will have some things they regard as not worth engaging with. (Classic examples include, e.g., biologists' attitudes towards theistic alternatives to Darwinian evolution, or historians' attitudes towards various conspiracy theories.)

People with different views will naturally dispute which of these three categories any given contribution ideally ought to fall into. People don't tend to regard their own contributions as lacking intellectual worth, so if they experience a lack of engagement it's very tempting to leap to the conclusion that others must be dogmatically dismissing them. Sometimes they're right! But not always. So it's worth being aware of the "outside view" that (a) some contributions may be reasonably ignored, and (b) anyone on the receiving end of this will subjectively experience it just as the OP describes, as seeming like dogmatic/unreasonable dismissal.

Given the unreliability of personal subjective impressions on this issue, it's an interesting question what more-reliable evidence one could look for to try to determine whether any given instance of non-engagement (and/or wider community patterns of dis/engagement) is objectively reasonable or not. Seems like quite a tricky issue in social epistemology!

From my own experience and from what I've seen, I think it's common for new contributors to the forum to underestimate the amount of previous work that the discourse here builds on. And downvotes aren't meant to signal disagreement with a post; they're supposed to function as something like a quality assessment. So my guess, from a read of your downvoted post, is that the downvotes reflect the fact that the argument you're making has been made before on the forum and within the wider EA community, and you haven't engaged with that prior work.

Maybe search for stuff like "AI-enabled coups", "power grabs", and "gradual disempowerment".

Thanks for the comment - I'll look into the key phrases you mentioned. I guess I'm kind of surprised that, if it's been discussed before, there doesn't appear to be any urgency around addressing it. It seems pretty immediate to me if unsafe AI is already here, as opposed to hypothetical, no?

Observations:

  1. Echoing Richard's comment, EA is a community with communal norms, and a different forum might be a better fit for your style. Substack, for instance, is more likely to reward a confrontational approach. There is no moral valence to this observation, and likewise there is no moral valence to the EA community implicitly shunning you for not following its norms. We're talking about fit.
  2. Pointing out "the irony of debating “AI rights” when basic human rights are still contested" is contrary to EA communal norms in several ways, e.g. it's not intended to persuade but rather to end or substantially redirect a conversation, its philosophical underpinnings have extremely broad and (to us, I think) self-evidently absurd implications (should we bombard the Game of Thrones subreddit with messages about how people shouldn't be debating fiction when people are starving?), its tone was probably out of step with how we talk, etc. Downvoting a comment like that amounts to “this is not to my tastes and I want to talk about something else.”
  3. "I started to notice a pattern — sameness in tone, sameness in structure, even sameness in thought. Ideas endlessly repackaged, reframed, and recycled. A sort of intellectual monoculture." This is a fairly standard EA criticism. Being an EA critic is a popular position. But I think you can trust that we've heard it before, responded before, etc. I am sympathetic to folks not wanting to do it again.

I think your comment highlights exactly what I'm trying to get at:

“… a different forum might be a better fit for your style.”

“its tone was probably out of step with how we talk, etc. Downvoting a comment like that amounts to ‘this is not to my tastes and I want to talk about something else.’”

EA is a community with the power to influence research, policy, and more, with real-world implications; dismissing ideas you simply don't care for is dangerous in this context. Especially when, for example, it is posited that unsafe AI is already here, and AI development arguably has cascading effects and implications for all the other areas of concern to EA, failing to make an argument for why this is unfounded or incorrect looks like negligence and ultimately a failure of the “better” EA aims to bring about. If it's been harped on before and addressed, why not point someone new or misguided in the right direction? Discourse and conversation are how mutual collective progress is made, not a small few deeming what is or isn't worthy.

I can't see the downvoted comment in your comment history. Did you delete it? 

By the way, did you use an LLM such as ChatGPT or Claude to help write this post? It has the markings of LLM writing. I think when people detect that, they are turned off. They want to read what you wrote, not what an LLM wrote.

Another factor is that if you are a new poster, you get less benefit of the doubt and you need to work harder to state your points in plain English and make them clear as day. If it's not immediately clear what you're saying, and especially if your writing seems LLM-generated/LLM-assisted, people will not put in the time and effort to engage deeply. 

Yes - my writing tends to start out as a loose collection of thoughts and ponderings that I then try to flesh out so it's clearer what I'm getting at, and to draw a clear through line in my logic. I don't think there is anything wrong with using assistance as long as the core ideas and arguments aren't being artificially generated - I do not do this. To be fair, I assume a loose collection of thoughts would not be well received given what I've seen posted here, but I can test that out and see if what I have to say is received any better.

I think you should practice turning your loose collections of thoughts into more of a standard essay format. That is an important skill. You should try to develop that skill. (If you don't know how to do that, try looking for online writing courses or MOOCs. There are probably some free ones out there.)

One problem with using an LLM to do this for you is that it's easy to detect, and many people find that distasteful. Whether it's fully or partially generated by an LLM, people don't want to read it. 

Another problem with using an LLM is that you're not really thinking or communicating. The act of writing is not something that should be automated. If you think it should be automated, then don't post on the EA Forum and wait for humans to respond to you; just paste your post into ChatGPT and get its opinion. (If you don't want to do that, then you also understand why people don't want you to post LLM-generated stuff on here, either.)

Case in point: you're urging that I fall back on convention (“standard essay format”) to conform to this community. In fact, that's precisely why I even used the LLM in the first place.

Why should a raw, unrefined post, albeit one that makes a clear argument, cites sources, etc., be rejected because of its formatting? It would appear optics are more important than ideas, no?
