Gergely Máté

Software / DevOps / ML Engineer @ Efirfira Ltd.
10 karma · Joined · Working (15+ years) · Szentendre, 2000 Hungary
mategergely.hu

Bio

I'm a generalist senior software engineer with an affection for math, psychology, economics, and philosophy. I was hooked by 80000 Hours a few years ago and am now pivoting my 20+ year career towards higher probabilities of doing something useful for society.

How others can help me

I'm looking for opportunities to do meaningful work.

How I can help others

I can often solve seemingly complex problems across the systems / backend / DevOps / cloud / ML domains. I'm most effective when things run on open-source gears.

Comments (3)

With charity we're looking for global utility. That would be analogous to total market expansion in the for-profit world. When someone arbitrages a pricing failure and gains some, someone else loses the same amount; that's a zero-sum game around the market-expansion line. We don't have to take that into account here, as there's likely no speculation around charity utility.

But say charity A has a track record of $20 cost per DALY, charity B has $30, and charity C has $40. As we don't know the future, hand-picking charity A and giving all our donations to them would be a mistake: maybe they will not be that efficient next year, maybe they will operate at $50 per DALY (circumstances might change, etc.). We can reduce that risk by distributing our donations based on, for example, weights proportional to 1/2 and 1/3 for charities A and B (so based on cost efficiency). That bet is safer. But we can do even better by donating to all three in a 1/2 : 1/3 : 1/4 ratio.
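
A minimal sketch of that weighting, assuming the weights are simply proportional to 1 / (cost per DALY), with the numbers from the example above:

```python
# Inverse-cost weighting: weights proportional to 1 / (cost per DALY),
# normalized so they sum to 1.
costs_per_daly = {"A": 20, "B": 30, "C": 40}

raw_weights = {name: 1 / cost for name, cost in costs_per_daly.items()}
total = sum(raw_weights.values())
weights = {name: w / total for name, w in raw_weights.items()}

for name, w in weights.items():
    print(f"Charity {name}: {w:.1%}")
# Charity A: 46.2%
# Charity B: 30.8%
# Charity C: 23.1%
```

The 1/2 : 1/3 : 1/4 ratio normalizes to the same 6/13 : 4/13 : 3/13 split, since the costs are proportional to 2 : 3 : 4.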

Given that we hand-pick a few hundred charities based on cost efficiency, distributing donations among them based on such (or probably more complex) criteria is closer to optimal utility.

Here's how the above "risk aversion" would translate to charity: the risk is not financial loss, but donating to a charity that uses the donation inefficiently. So risk aversion in this context means avoiding utility loss rather than financial loss.

The problem with mutual funds in the for-profit domain is that they mostly underperform index funds. General market indexes select the N best-performing companies, like 500 out of tens of thousands. This would translate to charity as selecting the N "best-performing" charities, measured in something like delivered utility per dollar. So let's select the 500 most cost-efficient charities out of tens of thousands.

The selection process is still the hand-picking part, and that's an unavoidable part of donating, as we need to find out which charities are cost-efficient (we don't have the market-efficiency signal, as you pointed out). What I'm arguing for is then having a broad selection of the hand-picked ones and constructing a weighting among them based on their properties. There exists an optimal distribution of donations, and we're very unlikely to find it by hand-picking alone. The for-profit market analogy suggests that using the right statistical methods for finding the distribution (among the selected) is very likely superior, and the broader the sample (again, among the selected), the better the results will be.
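
A sketch of what that pipeline could look like end to end, with made-up names and numbers, assuming the same inverse-cost weighting as above (any weighting derived from the charities' properties would slot in the same way):

```python
# Hypothetical "charity index": pick the N most cost-efficient charities,
# then weight donations by inverse cost per DALY, normalized to sum to 1.
def build_portfolio(cost_per_daly: dict[str, float], n: int) -> dict[str, float]:
    # "Index inclusion" step: the N charities with the lowest cost per DALY.
    selected = sorted(cost_per_daly, key=cost_per_daly.get)[:n]
    # Weighting step: this is where more complex criteria would go.
    raw = {name: 1 / cost_per_daly[name] for name in selected}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Toy universe of charities (made-up numbers).
universe = {"A": 20, "B": 30, "C": 40, "D": 55, "E": 90}
print(build_portfolio(universe, n=3))
# {'A': 0.461..., 'B': 0.307..., 'C': 0.230...}
```

The top-N sort stands in for index inclusion; the weighting function is the part that the "probably more complex" criteria would replace.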

Nice article, thanks!

Maybe the closer we are to a singularity-like moment, the larger the deviations become in our expectations about the future. It would make sense, because at the singularity our uncertainty about the future should be maximal, I think.
Hopefully, though, that's still some way off. And maybe (I really hope!) we can keep it at a distance.

I was thinking about the 50%-success definition of time horizons and its possible practical consequences.

One thing that may still be interesting is how long it takes the AI agent to do the task. Let's take, for example, a task that takes a human developer 4 hours. Is it 10 minutes for an AI agent, or is it 8 hours? I get that this is temporary anyway and AI agents will be much faster pretty soon, but another question may be: when?

Another thing is what happens on failure. If an AI agent tries a 4-hour-human-level task and fails in the other 50% (or 25%, or less), what's next, when one wants to deliver something? Having a human do it by hand and accepting the "wasted" time and cost? Restarting the AI agent n times, or up to a cost limit? Companies hire engineers in the hope that they'll have a very high success rate, and working in teams usually provides a multiplier on top of that. How does that scale with teams of AI agents?
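
For the retry option, a back-of-envelope sketch, assuming independent attempts with a fixed per-attempt success rate (a strong assumption, since real failures are probably correlated):

```python
# With success probability p per independent attempt, the chance of at
# least one success within n attempts is 1 - (1 - p)**n, and the expected
# number of attempts until the first success is 1 / p.
def p_success_within(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p in (0.5, 0.25):
    print(f"p={p}: expected attempts={1 / p:.0f}, "
          f"success within 5 tries={p_success_within(p, 5):.1%}")
# p=0.5: expected attempts=2, success within 5 tries=96.9%
# p=0.25: expected attempts=4, success within 5 tries=76.3%
```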