Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
I'm not sure what StopAI meant by Mr. Kirchner not having -- to its knowledge -- "yet crossed a line [he] can't come back from," but to be clear: his time working on AI issues in any capacity has to be over.
This unfortunately does not seem to be StopAI's stance.
One point I made that didn’t come across:
- Scaling the current thing will keep leading to improvements. In particular, it won’t stall.
- But something important will continue to be missing.
Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.
I'm giving a ∆ to this overall, but I should add that conservative AI policy think tanks like FAI are probably overall accelerating the AI race, which should be a worry for both AI x-risk EAs and near-term AI ethicists.
You can formally, mathematically prove a programmable calculator correct. You just can't formally prove every possible programmable calculator. On the other hand, if you can't mathematically prove a given programmable calculator, it might be a sign that your design is a horrible sludge. On the other other hand, deep-learnt neural networks are definitionally horrible sludge.
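As a minimal sketch of what "proving a given calculator" can look like (my own toy illustration in Lean 4, not anything from the original post): define a tiny expression language and evaluator, then prove a property of that specific evaluator once and for all.

```lean
-- A toy "calculator" expression language and its evaluator.
inductive Expr where
  | lit : Nat → Expr
  | add : Expr → Expr → Expr

def eval : Expr → Nat
  | .lit n   => n
  | .add a b => eval a + eval b

-- A correctness property provable for this particular calculator:
-- swapping the operands of `add` never changes the result.
theorem eval_add_comm (a b : Expr) :
    eval (.add a b) = eval (.add b a) := by
  simp [eval, Nat.add_comm]
```

The point of the contrast in the post: a proof like this is checkable because the artifact has a small, explicit structure, which is exactly what a trained neural network lacks.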
Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future," and "Far future-focused EAs" appear on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.