A 16-year-old deciding whether he should devote his career to AI s-risk discussions. Seeking information on AI development and AI sentience.
The reason I came up with this question (this is not directly related to the question itself, so I'm putting it here in this section):
I'm currently an 18-year-old guy having a hard time deciding between double majoring in medicine and CS (computer science) and single majoring in CS.
One advantage of medicine is its high salary. In fact, my parents think it's necessary to save a retirement fund of around one million dollars, so they strongly advise me to double major in medicine. Before, I thought: is there any need for a retirement fund? I could work for EA until my body physically can't anymore, and after that, I think I could commit suicide. Because if you can't work anymore, what's the meaning of living? However, there's a flaw in this thinking, which is what my question is about.
Yes, I have read those and accepted that lots of people believe human-level AGI will come within 20 years, and that it's just a matter of time. But I don't know why people are so confident about this. Do people think today's AI algorithms are already good enough "theoretically" to do most of the tasks on Earth, and that all we need is faster computing?
Some s-risk people may be afraid of the information hazard of publicly answering this question; if that's the case, you can email carlosgpt500@gmail.com to answer privately.