
Xavier_ORourke

410 karma

Comments (32)

Love to see it.

The true valuation is probably a lot more than $26b USD: there was recently a secondary sale which valued the company at closer to $42 billion. And if the company went public and traded at a revenue multiple similar to Figma's (extremely high, at around 30x right now), its valuation would be set a lot higher still.
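To make the multiple arithmetic explicit, here's a minimal back-of-envelope sketch. The revenue figure in it is a made-up placeholder, not a real estimate; only the ~30x multiple comes from the Figma comparison above.

```python
# Back-of-envelope: implied valuation from a public-market revenue multiple.
# The revenue input is a hypothetical placeholder -- plug in whatever estimate you trust.

def implied_valuation_billion(annual_revenue_billion: float, revenue_multiple: float = 30.0) -> float:
    """Implied market cap ($B) = annual revenue ($B) x revenue multiple."""
    return annual_revenue_billion * revenue_multiple

# Hypothetical: $5B of annual revenue at a Figma-like ~30x multiple
print(implied_valuation_billion(5.0))  # 150.0, i.e. ~$150B implied, well above the $42B secondary-sale mark
```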

I might not be the target audience for this proposal (my EA involvement weakened before FTX, and I'm not on track for high-leverage positions), so take this perspective with appropriate skepticism. I'm also making predictions about something involving complex dynamics, so there's a good chance I'm wrong...

But I see fundamental challenges to maintaining a healthy EA movement in the shape you're describing. If we really encourage people to be vocal about their views in the absence of strong pressure to toe a party line, we can expect a very large, painful, disruptive disagreement to rise to the forefront.

Even among EA Forum readers who care about strangers and future generations, and who recognize that some post-AGI worlds are far better than others, nobody acts perfectly according to their moral philosophy.

As more people start to viscerally sense that their loved ones and way of life are in imminent danger, we'll discover a lot of revealed preferences. I suspect many will prioritize stretching out the time their families get to live in a "normal" world, regardless of what effect those delays have on the chance of a good future.

Predictably, there'll be a growing faction who want AI slowdown for its own sake and who pursue populist avenues like promoting data-center NIMBYism to the general public. Some might even consider campaigns designed to expose and discredit specific public figures. Eventually, a serious funder might come onto the scene who supports this kind of thing.

From my (very uninformed) position, it seems likely that the Anthropic billionaires coming in to fund a large segment of EA won't be happy about a faction like this existing, or at the very least won't want to appear associated with it.

I think an important consideration being overlooked is how competently a centralised project would actually be managed.

In one of your charts, you suggest worlds where there is a single project will make progress faster due to "speedup from compute amalgamation". This is not necessarily true. It's very possible that different teams would make progress at very different rates even if they were given identical compute resources.

At a boots-on-the-ground level, the speed of progress an AI project makes will be influenced by thousands of tiny decisions about how to:
 

  • Manage people
  • Collect training data
  • Prioritize research directions
  • Debug training runs
  • Decide who to hire
  • Assess people's performance and decide who should be promoted to more influential positions
  • Manage code quality/technical debt
  • Design+run evals
  • Transfer knowledge between teams
  • Retain key personnel
  • Document findings
  • Decide what internal tools to use/build
  • Handle data pipeline bottlenecks
  • Coordinate between engineers/researchers/infrastructure teams
  • Make sure operations run smoothly
     

The list goes on!

Even seemingly minor decisions like coding standards, meeting structures and reporting processes might compound over time to create massive differences in research velocity. A poorly run organization with 10x the budget might make substantially less progress than a well-run one.

If there were only one major AI project underway, it would probably be managed less well than the best-run project selected from a diverse set of competing companies.

Unlike with the Manhattan Project, there are already sufficiently strong commercial incentives for private companies to focus on the problem, it's not yet clear exactly how the first AGI system will work, and capital markets today are more mature and capable of funding projects at much larger scales. My gut feeling is that if AI development were fully consolidated tomorrow, it would be more likely to slow things down than speed them up.

I don't really know... I suspect some kind of first-order utility calculus which tallies up the number of agents helped per dollar, weighted according to their species, makes animal welfare look better by a large degree. But in terms of getting the world further along the "good trajectory", for some reason the idea of eliminating serious preventable diseases in humans feels like a more obvious next step along that path?
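For what I mean by "first-order utility calculus", something like this toy sketch; every number in it is an invented placeholder for illustration, not a real cost-effectiveness estimate.

```python
# Toy first-order comparison: individuals helped per dollar, scaled by a species moral weight.
# All numbers below are invented placeholders, purely for illustration.

def weighted_impact_per_dollar(individuals_helped_per_dollar: float, species_weight: float) -> float:
    return individuals_helped_per_dollar * species_weight

# Hypothetical animal-welfare intervention: many individuals per dollar, lower per-individual weight
print(weighted_impact_per_dollar(individuals_helped_per_dollar=10.0, species_weight=0.05))  # 0.5

# Hypothetical human-health intervention: far fewer individuals per dollar, weight of 1.0
print(weighted_impact_per_dollar(individuals_helped_per_dollar=0.002, species_weight=1.0))  # 0.002
```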

Not really a question but... if you guys ever released a piece of merch that was insanely expensive but where most of the cost went to charity (e.g. some special edition $3000 Mr Beast-branded T-shirt where you give $3k to GiveDirectly for every unit sold), I'd wanna buy them for all my friends.

A priori, what is the motivation for elevating the very specific "biological requirement" hypothesis to the level of particular consideration? Why is it more plausible than similarly prosaic claims like "consciousness requires systems operating between 30 and 50 degrees Celsius" or "consciousness requires information to propagate through a system over timescales between 1 millisecond and 1000 milliseconds" or "consciousness requires a substrate located less than 10,000km away from the center of the Earth"?

It seems a little weird to me that most of the replies to this post are jumping to the practicalities/logistics of how we should/shouldn't implement official, explicit, community-wide bans on these risky behaviours.

I totally agree with OP that all the things listed above generally cause more harm than good. Most people in other cultures/communities would agree that they're the kind of thing which should be avoided, and most other people succeed in avoiding them without creating any explicit institution responsible for drawing a specific line between correct/incorrect behavior or implementing overt enforcement mechanisms.

If many in the community don't like these kinds of behaviours, we can all contribute to preventing them by judging things on a case-by-case basis and gently but firmly letting our peers know when we disapprove of their choices. If enough people softly disapprove of things like drug use, or messy webs of romantic entanglement, this can go a long way towards reducing their prevalence. No need to draw bright lines in the sand or enshrine these norms in writing as exact rules.

Sorry, I might not have made my point clearly enough. By remaining anonymous, the OP has shielded themselves from any public judgement or reputational damage. That seems hypocritical to me, given the post they wrote is deliberately designed to bring about public judgement and affect the reputation of Nick Bostrom.

So I'm saying "if OP thinks it's okay to make a post which names Nick and invites us all to make judgements about him, they should also have the guts to name themselves"

I really don't think the crux is people who disagree with you being unwilling to acknowledge their unconscious motivations. I fully admit that I sometimes experience desires to do unsavory things, such as:

- Say something cruel to a person who annoys me
- Smack a child when they misbehave
- Cheat on my taxes
- Gossip about people in a negative way behind their backs
- Eat the last slice of pizza without offering it to anyone else
- Not stick to my GWWC pledge
- Leave my litter on the ground instead of carrying it to a bin
- Lie to a family member and say "I'm busy" when they ask me to help them with home repairs
- Be unfaithful to my spouse
- etc.

If you like, for the sake of argument let's even grant that for all the nice things I've ever done for others, ultimately I only did them because I was subconsciously trying to attract more mates (leaving aside the issue that if this were my goal, EA would be a terribly inefficient means of achieving it).

Even if we grant that that's how my subconscious motivations are operating, it still doesn't matter. It's still better for me to not go around hitting on women at EA events, and the EA movement is still better off if I'm incentivised not to do it.

Maybe all men have a part of ourselves which wants to live the life of Genghis Khan, torture our enemies, and impregnate every attractive person we ever lay eyes on - but even if that were true, it wouldn't imply it's ethical or rational to indulge that fantasy! And it definitely wouldn't imply that the EA project would be better off if we designed our cultural norms+taboos+signals of prestige in ways which encourage it.

The better I am at not giving in to these shitty base urges, and the more the culture around me supports and rewards me for not doing these degenerate things, the happier I will be in the long run and the more positive the impact I have on those around me will be.

If EA community organisers are ending up isolated from everyone not involved in EA, that's a really big problem!
