DC

1769 karma · Comments: 226 · Topic contributions: 2

Some musings:

What counts as an idea? Is an entire book an idea, or does it have to be tweet-length? What about a whole EA Forum post: is that one idea or a collection of ideas? E = mc^2 is an idea, but it takes a lot of background knowledge to understand it; one should probably understand Newton's laws first. So which idea is more important, relativity or its precursor? What about the idea of numbers?

Context seems really important. An idea without the resources to execute it is basically useless. There could be ecosystems with dedicated ideators and executors, but that takes a lot of coordination between those people.

There definitely seems to be a power law to ideas. But it's also not necessarily easy for people to identify how good an idea is in advance. 

Often we need to build up the dependencies to effectuate a good idea, or even to recognize it in the first place. Maybe the work is just recognizing all the good ideas we already have lying around and wiring them together appropriately. Maybe normal people are already acting on most of the value of ideas. Maybe even people in dire states, say a homeless drug addict, are already tapping most of the good ideas lying around just by the sheer fact of being a biological organism! You didn't specify whether a capability like vision counts as an idea. I didn't expect to be making this point, but I could argue we're already surfing some pretty damn good ideas like "seeing things," and on the margin the multipliers we can get from the additional stuff we call "ideating" aren't worth that much extra.

I would be hesitant to discount the accumulated wisdom of entrepreneurs on this question. One thing they're reacting to is that for every executor there are ten idea people, or some ratio like that. "Talk is cheap." Having many ideas likely indicates some level of overthinking and paralysis. Success requires not just picking the right idea but sticking to it; if one keeps optimizing for the best idea, something shinier may come along and derail the work already built up, leaving an unfinished bridge. Maybe it's good to have more discourse where people share their ideas, but it also makes sense why doing too much of that gets penalized, as the penalty is a tax on bullshit.

Also, the best ideas often have something antimemetic about them, which is why they weren't picked up before. This makes it hard to tell which idea is best; it requires discernment, taste, and building up a solid worldview. The best idea is also probably high variance, and therefore risks negative externalities that could outweigh its expected positive externalities. There's an optimizer's curse.
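The optimizer's curse can be shown with a toy simulation (my own illustrative sketch, with made-up distributions, not anything from the original discussion): if idea quality is roughly power-law distributed but we only observe a noisy estimate, then the idea with the highest estimated value will, on average, look better than it really is.

```python
import random

# Toy optimizer's curse: true idea quality follows a (heavy-tailed) Pareto
# distribution, but evaluators only see a noisy estimate. Selecting the idea
# with the highest *estimated* value systematically overstates its *true* value.
random.seed(0)

n_ideas, trials = 100, 2000
gap = 0.0
for _ in range(trials):
    true_vals = [random.paretovariate(2.0) for _ in range(n_ideas)]
    noisy = [v + random.gauss(0, 2.0) for v in true_vals]
    best = max(range(n_ideas), key=lambda i: noisy[i])
    # How much the winner's estimate exceeds its true quality
    gap += noisy[best] - true_vals[best]

print(f"average overestimate of the chosen idea: {gap / trials:.2f}")
```

On these assumed parameters the average gap comes out clearly positive, which is the curse: the act of selecting on a noisy signal guarantees disappointment in expectation, even when each individual estimate is unbiased.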

The world is often chaotic, and the end result can only emerge through many iterations, by which point the system is far from its initial conditions.

The goodness of ideas seems more easily evaluable in retrospect. Maybe the best result from this line of thinking is to do retroactive funding of the best ideas already out there. Speaking from tons of experience, I really doubt the best idea can come from sitting down afresh with a piece of paper and thinking "hmm what ideas can I list down" and then picking the best one. 

OTOH, let me explore this idea more favorably, but with a different frame.

The world is a giant map. We need to get to X; it's somewhere, but we don't know where. I like thinking in terms of navigation, at least because it suits my mind well. We don't have a map before us, so we start heading just anywhere (since we don't know where we're going). This is foolish, except insofar as we have never navigated before and need to calibrate how traversing the territory even works before we try to do it for real. If you're going the wrong way you'll need to turn around, so it saves tons of energy to get your route right first. If you're leading a party of people, it is especially important to have good discourse about where X is; if there are disagreements, you should try to resolve them first. Unless you intentionally plan to split up and cover more ground! There should be some Xploration, but groups should also stick together in order to survive.

It is very important, when information points in opposite directions, to discern what is true before heading anywhere, or else to maintain an average between those epistemic states until you learn more (e.g. AI doom vs. optimism).

But I think most of the navigation is pretty straightforward: eat and sleep well, have friends, save the world, don't hurt others. You can probably figure out you need to "head north" to get to X, in the analogy. Even if X ends up being in Norway instead of Sweden, that probably didn't change your instrumentally convergent trajectory much, assuming the best ideas are near each other. But if there are wild swings in where X is, then one should stop moving and resolve those cruxes. It's about the journey, though, and one should probably keep moving and doing various sidequests in the local city, checking the tavern's bounty board, while debating which way to go next. This means building up convergent resources, with the Slack to keep exploring indefinitely.

People who play various games probably have something to say about ideal strategy. I worry I'm not as cut out to be an entrepreneur as I'd like: I'm not that good at real-time strategy games with fast decision-making under VUCA, like StarCraft, or at RPGs with complex decisions about loadouts and inventory.

As a human with a tendency toward perfectionism, it's probably a bad idea for me to try to evaluate the bestness of ideas at too fine a granularity. Better for AI agents to pick up that work. Maybe we just need to generate more ideas and put them out in the marketplace so they can be evaluated at all.

I talked to Claude a bit about this and slightly want to walk something back: I think idea generation and list-making can be great if done in a structured and probably collective way. Charity Entrepreneurship goes through hundreds of ideas before picking the best one; that is a lot more structured and systematic than when I list things out in my notebook. That said, I am also skeptical their approach scales that well. It feels high modernist, and I'm more of the school that thinks founders should be the ones coming up with ideas out of a personal Weltanschauung, out of deep personal engagement with the world that builds up tons of context about how things work.

DC

Reminder that there is an EA Focusmate group, where you can do 50-minute coworking calls with other EAs. Also, if you're already in the group, please give any feedback on it here or via DM.

DC

I'm glad you're alive. I wasn't sure what happened to you, and was worried.

DC

This post is mostly noise: it makes a basic point going back over a decade, and you do nothing to elaborate on it or incorporate objections to naive utilitarianism. There is prior literature on the topic. I want you to do better, because this is an important topic to me. The SBF example is a poor one that obscures the basic point, because you don't address the hard question of whether his fraud-funded donations were or weren't worth the moral and reputational damage. That is debatable, and a separate interesting topic I haven't seen hard analysis of; you open a can of ethical worms and don't address it, which reasonably looks bad to low decouplers and is probably the reason for the downvoting. Personally I would endorse downvoting, because you haven't contributed anything novel about increasing the number of probably-good high-net-worth philanthropists, though I didn't downvote myself. I only decided to give this feedback because your bio says you're an econ grad student at GMU, which is notorious for disagreeable economists, so I think you can take it.

DC

"First they came for the high decouplers..."

DC

I forget what you told me in our shared car ride a few months ago about why you ended up handing off ALERT, but my naive pattern match is that you didn't do the thing cflexman suggested and that was a large factor for why it didn't work out for you. Is that right or am I off?

DC

"...when we have no evidence that aligning AGIs with 'human values' would be any easier than aligning Palestinians with Israeli values, or aligning libertarian atheists with Russian Orthodox values -- or even aligning Gen Z with Gen X values?"

When I ask an LLM to do something, it usually outputs its best attempt at being helpful. How is this not some evidence that alignment is easier than inter-human alignment?

DC

The eggs and milk quip might be offensive for animal welfare reasons. Eggs, at least, are one of the worst commonly consumed animal products according to various ameliatarian Fermi estimates.

DC

I know of one that is less widely reported; not sure if they're counted in the two Joseph Miller knows of that are less widely reported, or if separate.

Answer by DC

I would personally recommend waiting to sell your kidney until there is a feasible jurisdiction you can travel to that allows kidney markets (e.g. Argentina under Milei).
