Denkenberger🔸

Director, Associate Professor @ Alliance to Feed the Earth in Disasters (ALLFED), University of Canterbury
3161 karma · Working (15+ years) · Christchurch, New Zealand

Bio

Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 143 publications (>4,800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College London, and University College London.

How others can help me

Referring potential volunteers, workers, board members and donors to ALLFED.

How I can help others

Being effective in academia, balancing direct work and earning to give, time management.

Posts (32)

Comments (722)

So, if Waymos are prone to suddenly braking due to harmless obstacles like plastic bags or simply because of mistakes made by the perception system, then they could get rear-ended a lot, but rear-ending is always the fault of the car in the rear as a matter of law.

Interesting, but AI says:

"Exceptions exist where the lead driver may share or bear full fault, such as if they:

  • Stop suddenly without a valid reason
  • Reverse unexpectedly
  • Have malfunctioning brake lights or turn signals
  • Make unsafe lane changes without signaling
  • Were driving negligently (e.g., distracted driving)"

So if plastic bags are not a valid reason to stop, it sounds like the Waymo would be at fault for the rear-end accident.

I agree that other companies' failures are evidence for your point. I think Waymo is trying to scale up, and they are limited by the number of cars at this point.

It sounds like Tesla solves the problem of unreliability by having the driver take over, and Waymo solves it by the car stopping and someone remotely taking over. So if that has to happen every 464 miles (say every 10 hours of driving) and it takes the remote operator five minutes to deal with (probably less, but there would be downtime), then you would only need one remote operator for about 120 cars. I think that's pretty scalable. When it can drive anywhere, it would not be as safe as Waymo's claim of roughly 20 times safer (five times fewer accidents, and most of the remaining accidents were the other drivers' fault), but I still think it would be safer than the average human driver.
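
To make the arithmetic behind that estimate explicit, here is a minimal sketch; the average speed and the per-intervention handling time are my assumptions, not measured figures:

```python
# Rough sketch of the remote-operator scaling estimate above.
# Assumptions (not measured figures): ~46 mph average speed, so one
# intervention every 464 miles is roughly one every 10 driving hours,
# and ~5 minutes of operator time per intervention.

miles_between_interventions = 464
average_speed_mph = 46            # assumed average speed
minutes_per_intervention = 5      # assumed operator handling time

hours_between_interventions = miles_between_interventions / average_speed_mph
cars_per_operator = hours_between_interventions * 60 / minutes_per_intervention

print(f"Driving hours per intervention: {hours_between_interventions:.1f}")
print(f"Cars per remote operator: {cars_per_operator:.0f}")
# -> about 10 hours and roughly 120 cars per operator.
```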

On a final note, I'm bothered by METR's focus on tasks with a 50% success rate. I mean, it's fine to track whatever you want, but I disagree with how people interpret what this means for overall AI progress. Humans do many, many tasks with a 99.9%+ success rate. Driving is the perfect example. If you benchmarked AI progress against a 99.9% success rate, I imagine that graph would just turn into one big, flat line on the bottom at 0.0, and what story would that tell about AI progress?

I agree that 50% is not realistic for many tasks. But they do plot some data for higher success rates:

[Chart: METR time horizons at higher success-rate thresholds]

Roughly, I think going from a 50% to a 99.9% success threshold would shrink the time horizon from 2 hours to about 4 seconds: not quite 0, but very bad!
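
As a rough sanity check on that extrapolation, here is a sketch that assumes success probability falls off exponentially with task length; this is a simplification I am assuming, not METR's actual fitted model:

```python
import math

# Back-of-the-envelope check under an assumed model: success probability
# decays exponentially with task length, p(t) = 0.5 ** (t / h50), where
# h50 is the 50%-success time horizon. Not METR's actual fit.

h50_hours = 2.0          # assumed 50%-success horizon of 2 hours
target_success = 0.999   # the 99.9% threshold discussed above

# Solve 0.5 ** (t / h50) = target_success for t.
t_hours = h50_hours * math.log(target_success) / math.log(0.5)
print(f"Implied 99.9%-success horizon: {t_hours * 3600:.0f} seconds")
# -> on the order of 10 seconds under this toy model: a bit longer than
#    the ~4 seconds eyeballed above, but the same drastic shrink.
```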

  • Disruption of global synchronization and navigation: Global navigation satellite systems (things like GPS) could be disrupted for several days. This would be annoying for navigation in general, but more importantly, we also use it for global synchronization for all kinds of things, like our financial system. This would not be possible during the storm.

I thought some satellites would get their full cumulative radiation dose in a bad solar storm, so they would be permanently damaged (though that might just be the case for the particles from an EMP).

  • Aviation problems: Airplanes might be exposed to higher radiation doses than normal. This would not have any immediate effects, but would condemn a random collection of people on the airplanes to cancer later in their life.

Have they thought about flying at lower altitudes during such an event?

Thanks, I appreciate the skeptical take. I have updated, but not all the way to thinking self-driving is less safe than humans. Another thing I've been concerned about is that the self-driving cars would presumably be relatively new, so they would have fewer mechanical problems than the average car; but mechanical problems make up a very small percentage of total accidents, so that wouldn't change things much. I agree that if a lot of babysitting is required, then it would not be scalable. But I'm estimating that even without the babysitting and geofencing, it would still be safer than humans, given all the drunk/sleepy/angry/teen/over-80 drivers.
I agree this does have some lessons for AGI. Though interestingly, progress on self-driving cars is the slowest of all the areas measured in the graph below:

[Chart: AI progress by domain, with self-driving cars showing the slowest progress]

Waymo’s self-driving cars are geofenced down to the street level, e.g., they can drive on this street but not on that street right next to it. Waymo carves up cities into the places where it’s easiest for its cars to drive and where the risk is lowest if they make a mistake. 

Interesting. But if the vast majority of the serious accidents that did occur were the fault of other drivers, wouldn't that be strong evidence that the Waymos were safer (apples to apples at least for those streets)?

Self-driving cars are geofenced to the safest areas (and the human drivers who make up the collision statistics they are compared against are not). 

I wouldn't say that the cities Waymo operates in are the safest areas. But I was critical of the claim a few years back that Teslas were ~4x safer, because they were only driving autonomously on highways (which are much safer). What's the best apples-to-apples comparison you have seen?

I think there is essentially no chance that self-driving cars will substitute for human drivers on a large scale within the next 5 years. By a large scale, I mean scaling up to the size of something like 1% or 10% of the overall U.S. vehicle fleet.

Metaculus says Dec 31 for half of new cars sold being autonomous, which would pretty much guarantee 1% of the US fleet within 5 years, so you might find some people there to bet...
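
To spell out why half of new-car sales being autonomous would quickly exceed 1% of the fleet, here is a rough sketch; the annual-sales and fleet-size figures are round-number assumptions on my part, not official statistics:

```python
# Rough fleet-penetration check under assumed round numbers:
# ~16 million new light vehicles sold in the US per year, out of a
# fleet of ~290 million (both assumptions, not official statistics).

annual_us_new_vehicle_sales = 16_000_000   # assumed
us_fleet_size = 290_000_000                # assumed
autonomous_share_of_sales = 0.5            # half of new cars sold

autonomous_sold_per_year = annual_us_new_vehicle_sales * autonomous_share_of_sales
years_to_one_percent = 0.01 * us_fleet_size / autonomous_sold_per_year
print(f"Years of such sales to reach 1% of the US fleet: {years_to_one_percent:.1f}")
# -> well under one year, so 1% of the fleet within 5 years follows easily
#    (ignoring vehicle retirements, which matter little at this scale).
```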

My understanding is that in the US and the UK, employers do not have to hire the most qualified candidate and can take cost-effectiveness into account. It is hard to get an EA job, so I think it does make sense to indicate you are willing to work for less, but maybe keep it qualitative. Or only say 20% less and then just donate the rest. That should cause fewer of the potential problems you and others note.

It's interesting that people may still be somewhat in the loop through teleoperation centers, but I doubt it would be one-to-one, and the fact that Waymo is 5x safer (actually much safer than that, because almost all the serious accidents were caused by humans, not the Waymos) is mostly due to the tech.

self-driving cars remain at roughly the same level of deployment as 5 years ago or even 10 years ago, nowhere near substituting for human drivers on a large scale

The first part is false; e.g., "Driverless taxi usage in California has grown eight fold in just one year". The second part is true, but the fact that self-driving cars are safer and cheaper (once they pay off the R&D) means that they will most likely soon substitute for human drivers on a large scale, absent a ban.

With dismay, I have to conclude that the bulk of EA is in Camp A, "Race to superintelligence 'safely'". I'd love to see some examples where this isn't the case (please share in the comments), but at least the vast majority of money, power and influence in EA seems to be on the wrong side of history here.

I made a poll to get at what the typical EA Forum user thinks, though that is not necessarily representative of the money, power, or influence. My takeaway was:

"For the big picture, it received 39 votes. 13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab. So if I had to summarize the median respondent, it would be strong regulation for AI or pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated."

@Julia_Wise🔸 There are lots of reasons that living can have meaning even without work. In fact, there's a book about that. Having some savings is very valuable to give you the flexibility of switching careers or for unforeseen circumstances. Also, it may be very valuable if there is a good AI outcome. But I do think that the possibility of transformative AI means we should save less for retirement.
