Bio

Researcher focused on China policy, AI governance, and animal advocacy in Asia.

Currently transitioning from a researcher role at Good Growth to projects at the intersection of China x AI.

Also interested in effective giving, economic development (and how AI will affect it), AI x Animals, wild animal welfare, cause prioritisation, and various meta-EA topics.

Comments

I'd lean towards the World Happiness Report results here. IPSOS uses a fully online sample, which means you end up losing the "bottom half" of the population; the World Happiness Report uses phone and in-person interviews.

Hi Klara, thanks for the response.

I don't think I am entering the abortion debate by assigning moral value to unborn lives any more than I'm entering any other debate that considers unborn or potential lives (e.g. the ethics of moderate drinking while pregnant, the ethics of having children in space, or the repugnant conclusion). 

I think I'm comfortable with having mostly sidestepped the maternal health issues, given that I was focusing on interventions that are robustly good for the mother. If I were to do a stronger and more robust cost-effectiveness analysis, or tackle more controversial interventions where the interests of the mother and child clearly diverged, I would consider maternal health outcomes separately. I hope my piece makes it clear that we should prioritise uncontroversial and neglected interventions that treat or prevent painful conditions that women suffer from.

Although I do recognise that the ethics of pregnancy, lived experience of the mother, and autonomy trade-offs are important considerations, I'm afraid that attempting to tackle these here would have made this an impossibly long post!

When I say “the economics are looking good,” I mean that the conditions for capital allocation towards AGI-relevant work are strong. Enormous investment inflows, a bunch of well-capitalised competitors, and mass adoption of AI products mean that, if someone has a good idea to build AGI within or around these labs, the money is there. This is arguably a trivial point: if there were significantly less capital, labs couldn’t afford extensive R&D, hardware, or large-scale training runs.

As for scaling vs. fundamental research: obviously "fundamental research" is a bit fuzzy, but it's pretty clear that labs are doing a bit of everything. DeepMind is the most transparent about this: they're doing Gemini-related model research, fundamental science, AI theory and safety, etc., and have published thousands of papers. But I'm sure a significant proportion of OpenAI and Anthropic's work can also be classed as fundamental research.

I think there are two categories of answer here: 1) Finance as an input towards AGI, and 2) Finance as an indicator of AGI.

For 1), regardless of whether you think current LLM-based AI has fundamental flaws, the fact that insane amounts of capital are going into 5+ competing companies providing commonly used AI products should be strong evidence that the economics are looking good, and that if AGI is technically possible using something like current tech, then all the incentives and resources are in place to find the appropriate architectures. If the bubble were suddenly to burst completely, then even if we believed strongly that LLM-based AGI is imminent, there might be no more free money, so we'd now have an economic bottleneck to training new models. In this scenario, we'd have to update our timelines/estimates significantly (especially if you think straightforward scaling is our likely pathway to AGI).

For 2), probably not - it depends on the situation. Financial markets are fickle enough that the bubble could pop for a bunch of reasons unrelated to current model trends: rare-earth export controls having an impact, slightly lower uptake figures, the decision of one struggling player (e.g. Meta) to leave the LLM space, or one highly hyped but ultimately disappointing application, for example. If I were unsure of the reason, would I assume that the market knows something I don't? Probably not. I might update slightly, but I'm not sure to what extent I'd trust the market to provide more valuable information about AGI than direct information about model capabilities and diffusion.

But of course, if we do update on market shifts, it has to be at least somewhat symmetrical. If a market collapse would slow down your timelines, insane market growth should accelerate your timelines for the same reason.

"Longtermism isn't necessary to think that x-risk (of at least some varieties) is a top priority problem."

I don't think it's a niche viewpoint in EA to think that, mainly because of farmed and wild animal suffering, the short-term future is net-negative in expectation, but the long-term future could be incredibly good. On that view, some variety of longtermism is essential to avoid concluding that x-risk in our lifetimes is desirable.

I definitely identify with where you're coming from here, but these insights might also imply a potential partner post on "How to avoid EA senescence (if you want to)".

Based on your examples, this might look like:

  • Specialise, even if it's not your job - dive very deep into at least one relevant EA area. If you find something interesting and neglected, can you become top 1% knowledgeable (within EA) in that obscure sub-field?
  • Develop (and share) a niche perspective on where to donate based on your specific worldview. If you're very convinced about insect sentience, or you lean negative utilitarian, you will very quickly realise that EA Funds are not the highest EV option for you!
  • Prioritise boosting/maintaining your "EA energy"
  • Host more parties

Thanks for the post! This is a very valuable topic, and the development econ mainstream is totally lost on this question!

I agree with some of your points, but I think we need to distinguish very carefully within the category of "developing countries". All the factors you mention with regard to labour displacement (structure of the economy, data availability, telecommunications infrastructure) are wildly different between, say, Togo, Brazil, and Indonesia. The same goes for private- vs. public-sector diffusion: within "developing countries", you've got countries with massive tech hubs and their own tech billionaires, and countries where most people still don't have electricity.

For me, the most important development question with regard to TAI (and the reason it's important to distinguish) is the feasibility of the export-led development model. Generally, if countries manage to develop a high-value-added export sector, they attract FDI, earn foreign currency, climb up the value chain, and become richer. If they don't, they stay poor. Except for the occasional country finding insane levels of natural resources, this is the only real way countries have become rich over the last 100 years.

If we get safe, transformative AI, we can imagine that demand for imports massively rises in the West, and that middle-income countries like China, Vietnam, and Indonesia with strong export sectors (and the infrastructure to build on their existing exports) are able to take advantage of this. As these countries already have good infrastructure (e.g. electrification, internet access, land and shipping transport), they can probably also benefit from AI and robotics to develop "Industry 4.0" and make their export sectors even more dominant.

I'd therefore estimate that a few of these "developing" countries with existing strong export sectors will catch up and become rich relatively soon.

But what of the poorest countries?!

Most African countries with GDP per capita below, say, $3,000 are very low down the value chain in all sectors, with little but raw materials (e.g. coffee, cocoa, oil if they have it) as exports. They're struggling to compete with Asia's developing powerhouses, and they haven't got the transport infrastructure, governance, capital, etc. to develop a quality export-led economy. In a world without TAI, poor countries would gradually take over these export industries and climb the ladder themselves as middle-income countries get richer, but this seems very unlikely once robotics displaces manufacturing labour.

My overall take is that (in an optimistic AI scenario) well-governed middle-income countries would probably end up more similar to rich countries. But we'd have really "kicked away the ladder" from the very poor countries.

My ideas for posts (I'll try to write at least one):

  • I recently learned that malaria causes about as many miscarriages and stillbirths as live infant deaths, but most cost-effectiveness estimates only count neonatal deaths. Intermittent preventive treatment with sulphadoxine-pyrimethamine (IPTp-SP) for pregnant women seems to be more cost-effective than bed-nets for preventing malaria-related stillbirths and miscarriages (a toy version of the adjustment is sketched after this list). Unsure whether to write a narrow post on that, or a deeper post on "What are the most effective charities, given worldviews where unborn children have similar value to newborns?"
  • Some Europeans have been asking me a lot about what people in smaller countries can do to make AI go better (or slow down) - especially with regards to China. I think we've got a lot of lessons from (especially Cold War) history about third countries using their relations with superpowers to increase existential safety, but I don't think anyone's written an EA forum post about it.
  • I wrote a blog post on what I call "The Great Happiness Stagnation" - looking at the flattening of happiness in many rich countries since they became rich. I've been thinking about converting it to a forum post, but it currently seems insufficiently rigorous to be worthy of the forum! 
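
On the first idea, here is a minimal sketch of the adjustment. All numbers are hypothetical placeholders (not real programme figures), and the moral weight on fetal losses is exactly the worldview parameter the proposed post would interrogate:

```python
# Toy sketch: how counting stillbirths/miscarriages changes a malaria
# cost-effectiveness figure. All numbers are hypothetical placeholders.
cost = 1_000_000             # hypothetical programme cost, USD
infant_deaths_averted = 200  # hypothetical live infant deaths averted
fetal_losses_averted = 200   # hypothetical stillbirths + miscarriages averted
w = 1.0                      # moral weight on a fetal loss vs. an infant death

standard = cost / infant_deaths_averted
adjusted = cost / (infant_deaths_averted + w * fetal_losses_averted)

print(f"Cost per infant death averted (standard view): ${standard:,.0f}")
print(f"Cost per death-equivalent averted (adjusted):  ${adjusted:,.0f}")
# With roughly equal counts and w = 1, the adjusted figure is about half.
```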

Agree with your point about the reference to the Chinese study on healthy aging in elderly Chinese people. The OP uses it to make three separate points (cognitive impairment, dose-response effects, and lower overall odds of healthy aging), but it's pretty clear that the study is basically showing the effects of poverty on health in old age.

Elderly Chinese people are mostly vegetarian or vegan because a) they can't afford meat, or b) they have stopped eating meat because they struggle with other health issues, both of which would massively bias the outcomes! So their poor outcomes might partly come through diet-related effects, like nutrient/protein deficiency, but could also reflect sanitation, malnutrition earlier in life (these are people brought up during extreme famines), education (particularly for the cognitive impairment test), and the health issues that cause them to reduce meat in the first place.

The study fails to control for extreme poverty by grouping together everyone who earned <8000 Yuan a year (80% of the survey sample!), which is pretty ridiculous, because the original dataset should have continuous data...

The paper also makes it very clear that diet quality is the real driver, and that healthy plant-based diets score similarly to omnivorous diets "with vegetarians of higher diet quality not significantly differing in terms of overall healthy aging and individual outcomes when compared to omnivores". 

Probably less importantly, it conditions on survival to 80, which creates survivorship bias/collider bias. There could be a story where less healthy omnivores tend to die earlier (you sometimes get effects like this with older smokers), so the surviving omnivores appear healthier than omnivores as a whole.
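
As a minimal illustration of that collider-bias story (entirely simulated data with made-up parameters, not the study's): if baseline health drives both early mortality and the outcomes measured at 80+, and omnivory hypothetically carries some independent earlier-life mortality risk, then comparing only survivors makes the surviving omnivores look healthier even when diet has no true effect:

```python
# Minimal collider-bias simulation (made-up parameters, not the study's data).
# Diet has zero true effect on late-life health here, yet conditioning on
# survival to 80 makes the surviving omnivores look healthier.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

omnivore = rng.random(n) < 0.5    # diet assigned independently of health
health = rng.normal(0.0, 1.0, n)  # latent baseline health

# Survival to 80 depends on baseline health AND (hypothetically) on diet,
# e.g. if omnivory carries some independent earlier-life mortality risk.
p_survive = 1.0 / (1.0 + np.exp(-(health - 0.8 * omnivore)))
survived = rng.random(n) < p_survive

print("Mean baseline health, everyone:  omnivores %.2f | others %.2f"
      % (health[omnivore].mean(), health[~omnivore].mean()))
print("Mean baseline health, survivors: omnivores %.2f | others %.2f"
      % (health[survived & omnivore].mean(), health[survived & ~omnivore].mean()))
# Healthy-ageing outcomes measured at 80+ track baseline health, so the
# surviving omnivores appear healthier even with no true diet effect.
```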

I agree with the upfront tagline "Having children is not the most effective way to improve the world", but I disagree pretty strongly with a bunch of these takes:

  1. "Owing" it to your parents. This feels a little straw-manned. Wanting to have kids for your parents' sake might be about feeling grateful for 16+ years of love and care, or just making someone you care about happier in their old age. From an EA perspective, you perhaps shouldn't weight this too highly. But when choosing whether to have kids, especially if your parents really want grandchildren, you are making this trade-off. Thinking about my in-laws and extended family was one of my explicit considerations when deciding whether to have kids.
  2. Donating to AMF to increase population. I don't strongly disagree with the principle here, but donating to AMF is probably not optimal: I think it would be cheaper to incentivise births directly, if that's your goal. (Edit: I wrote something else that finds that AMF might actually incentivise births cheaply, because of maternal/placental malaria.) I've written about this in "Who should we pay to increase birth rates?", where I make a toy model of choosing where you might want to generate new lives. I suggest lower-middle-income countries outside Sub-Saharan Africa, mainly because of quality-of-life concerns.
  3. It’s a bad idea to make ethical arguments either way about having children. This one surprised me the most. Do you mean we shouldn’t make these arguments at all, or simply that we should avoid certain impolite judgements of others’ choices? My take: of course you shouldn't overdo it and rant to expectant mothers about the meat-eater problem, the risk of population collapse, and negative utilitarianism, but having children is still one of the biggest ethical decisions in a human's life. There's no reason it should be less suitable for ethical debate than what job you choose or which charity you donate to.