NunoSempere

Director, Head of Foresight @ Sentinel
13703 karmaJoined
nunosempere.com/blog

Bio

I run Sentinel, a team that seeks to anticipate and respond to large-scale risks. You can read our weekly minutes here. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
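For readers unfamiliar with the metric in that quote: a Brier score is the mean squared error between probability forecasts and binary outcomes (lower is better), and a relative Brier score compares one forecaster's scores against another forecaster or aggregate on the same questions. A minimal sketch in Python (function names are my own illustration, not from any forecasting platform):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (in [0, 1])
    and binary outcomes (0 or 1); lower is better."""
    assert len(forecasts) == len(outcomes) > 0
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def relative_brier(own_scores, comparison_scores):
    """Average per-question difference against a comparison forecaster
    or crowd aggregate; negative means 'better than the comparison'."""
    assert len(own_scores) == len(comparison_scores) > 0
    return sum(o - c for o, c in zip(own_scores, comparison_scores)) / len(own_scores)

# Always answering 50% scores 0.25; confident, well-calibrated forecasts score lower.
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ≈ 0.047
```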


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking.
  • The Forum website has become more annoying to me over time: more cluttered and more pushy in terms of curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend)
  • The above two issues made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform whose goals differ from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.


My career has been as follows:

  • Before Sentinel, I set up my own niche consultancy, Shapley Maximizers. This was very profitable, and I used the profits to bootstrap Sentinel. I am winding this down, but if you have need of estimation services for big decisions, you can still reach out.
  • I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms—a more up-to-date alternative might be adj.news. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when FTX evaporated.
  • I write a Forecasting Newsletter which gathered a few thousand subscribers; I previously abandoned but have recently restarted it. I used to really enjoy winning bets against people too confident in their beliefs, but I try to do this in structured prediction markets, because betting against normal people started to feel like taking candy from a baby.
  • Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term."
  • Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>

Posts
115


Sequences
3

Vantage Points
Estimating value
Forecasting Newsletter

Comments
1266

Topic contributions
14

<https://forum.effectivealtruism.org/posts/4DeWPdPeBmJsEGJJn/interview-with-a-drone-expert-on-the-future-of-ai-warfare>

If 3H is false but we act urgently, this false positive is far less bad, as we will have many years (maybe millions) later in which to invest resources for the real hinge of history.

 

But you lose the compounding, particularly if later generations make the same calculus, and so you can't implement something like a Patient Philanthropy Fund: <https://www.founderspledge.com/programs/patient-philanthropy-fund>

Distribution rules everything around me

 

First time founders are obsessed with product. Second time founders are obsessed with distribution.

 

I see people in and around EA building tooling for forecasting, epistemics, starting projects, etc. They often neglect distribution. This means that they will probably fail, because they will not get enough users to justify the effort that went into building them.

 

Some solutions for EAs:

  • Build a distribution pipeline for your work. Have a mailing list on Substack. Have a Twitter account. This means that attention for your work compounds. Twitter is also good for fast feedback loops.
  • Tap into existing distribution networks. You can try to figure out who has a large mailing list and ask them to mention you. At a lower scale, you can write something like my forecasting newsletter but for your field.
  • You can go on podcasts (I've been avoiding this).
  • The EA Forum doesn't suffice for distribution. This post had 169 views on the EA Forum, 3K on Substack, 17K on Reddit, 31K on Twitter.
  • There are probably many other moves, and people who are really good at it. But the point is that some projects, including my own in the past, just catastrophically fail.

At least an equal level of data efficiency

...

This is the only kind of AI system that could plausibly automate all human labour

Your bar is too high: you can automate all human labour with less data efficiency.

No, I think on that post I'm saying something that is more like "what if we were all much more capable", which seems tamer.

I had reason to come back to this comment. Rereading it, I don't think I'm exactly wrong, but I'm not paying enough respect to the challenges of running an organization, and so the bar I am setting is in some sense inhuman. These days, if I wanted to give similar feedback, I would do so in private, and I would make sure it is understood to come from a place of appreciation.

You are underrating the geographical closeness of China and Taiwan, and overrating the cost of shipping military materiel continuously to a contested area. 
