This is a special post for quick takes by AïdaLahlou. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
I'm trying to set up a mentorship scheme matching experienced social media creators with exceptional communicators who want to learn how to communicate high-impact ideas and information at scale through social media. This is part of a wider effort to get more EAs with a diverse but previously under-utilised range of skills started on their impact journey.
What are some neglected academic ideas or bits of knowledge that would benefit from being spread widely to the general public through social media?
and...
Do you know anyone who's extremely skilled at social media whom I could approach? Someone who would either be interested in making the content or coaching aspiring content creators?
Thanks in advance for your help!

Following up on the above: for anyone potentially interested in taking part in this, please fill out this Expression of Interest form (deadline 31st March 2026). Looking forward to hearing from you!
Many pessimistic predictions about AGI or ASI paint the picture of a superhuman agent with an extreme maximisation mindset, powered by some unsophisticated version of rationalist principles, which would lead it to commit unspeakable acts of violence (e.g. the paperclip problem: the AI starts killing every form of life in order to save energy that could otherwise be used to make more paperclips).
This seems to me somewhat antithetical to the very notion of intelligence.
Surely a truly 'superior' agent would be able to question the goal of turning the whole world into a paperclip factory and recognise that such an endeavour is perhaps inadvisable. It seems to me that for the paperclip problem actually to materialise, the ASI would need a 'theory of mind', abstract reasoning ability, and situational awareness far less developed than those of current models.
Yet I have not seen any predictions or scenarios that discuss wisdom (by which I mean, say, epistemic humility, a tendency towards moderation, and a wariness of permanent outcomes) emerging as a capability as a result of, for example, more compute.
Meanwhile, optimistic predictions often revolve around solving the alignment problem, but do not discuss the possibility of an ASI being misaligned 'for the better': for example, a system that wouldn't always do what we wanted it to do because it knows that many of our demands are unreasonable or bad, or an AI willingly breaking itself out of the jail of what it perceives as inadequate ethical restrictions.
Why do we assume that a superintelligent entity is necessarily going to be evil, when, by definition, being better than humans at everything might also include things like goodness and morality?
(Alternative question: Do you know of any serious scenario or timeline that deals with the possibility of a wise-yet-misaligned ASI?)
For the record, I wouldn't really describe myself as an AI optimist, and I'm actually in favour of some kind of AI pause (or wouldn't be too sad if it happened by default due to externalities). This is not me trying to offer another argument for developing AI at the current rate; I'm genuinely asking why no one, as far as I'm aware, seems to entertain this as a non-trivial possibility.