I am a social entrepreneur focused on advancing a new community-building initiative to ensure AI development benefits all sentient beings, including animals, humans, and future digital minds. For over a decade, my work has been at the intersection of technological innovation and animal advocacy, particularly in the alternative protein and investigative sectors.
I am the co-founder and former CEO of Sentient, a meta animal rights non-profit. My background includes work as an investigative journalist on television and undercover employment in slaughterhouses.
Feel free to reach out to me on LinkedIn or email (ronenbar07@gmail.com).
I am looking for a co-founder and collaborators for the new initiative to ensure AI development benefits all sentientkind. I am happy to share ideas and receive feedback.
I have been practicing Vipassana meditation for several years.
I'm looking for collaborators, volunteers and a co-founder for the AI for All Sentient Beings initiative I've started (The Moral Alignment Center). I'm eager to connect with sentientists who care about animals, humans, and future digital minds. I'm open to feedback, idea-sharing, and deepening mutual understanding.
I offer free help with ideation sessions using creativity methods, and advice on topics related to entrepreneurship, AI ethical alignment, meta activism, technology and animals, knowledge management systems, storytelling, language bias, journalism, and undercover investigations.
I think the ethical co-evolution of humanity and AI is a very interesting concept. On the one hand, it points to humans staying in control and not handing over power and the knowledge of what is "right" to AI; on the other hand, it means being willing to learn and develop, understanding that we are shortsighted when it comes to morality and may have a lot to learn from AI!
@Beyond Singularity Another issue is how you calculate positive vs. negative valence. David Pearce thinks ethics is only about reducing negative experiences: although it is good to give beings more positive ones, doing so is not within ethics. So his view is, I think, on the more extreme end of negative utilitarianism.
I think positive experiences are within the ethical realm, and although reducing suffering is more important than increasing happiness, I would still try to calculate how happy beings are as well, and how to optimize that.
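To make the contrast concrete, here is a minimal toy sketch (my own illustration with made-up weights, not anything Pearce has proposed) of the difference between a strictly negative-utilitarian aggregation and a weighted one that still counts positive experiences:

```python
# Toy illustration only (hypothetical weights, not a serious welfare model).
# Each being's experience is summarized as (positive_valence, negative_valence),
# both given as non-negative magnitudes.

def aggregate(experiences, w_pos, w_neg):
    """Weighted total valence: positive experiences count with weight w_pos,
    suffering counts (negatively) with weight w_neg."""
    return sum(w_pos * pos - w_neg * neg for pos, neg in experiences)

experiences = [(5.0, 1.0), (2.0, 4.0)]

# Strict negative utilitarianism (the direction Pearce leans): positive
# experiences carry no ethical weight, so w_pos = 0.
print(aggregate(experiences, w_pos=0.0, w_neg=1.0))  # -5.0

# The weighted view in this comment: suffering matters more than happiness
# (w_neg > w_pos), but positive experiences still count.
print(aggregate(experiences, w_pos=1.0, w_neg=2.0))  # (5 - 2) + (2 - 8) = -3.0
```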
I don't think this large-scale cooperation, or the functioning of societies and groups, is morality. It is linked to morality, but it is fundamentally something else. A society can "function" well while part of it suffers tremendously for the benefit of another group. There is nothing objective about longing for a world with less suffering; it is basically in another realm, not the realm of math or rationality, though it is tied to rationality in some way.
I tend to think the word "objective" doesn't fit morality from a philosophical standpoint. "Objective" truths are claims where we can decide who is right by checking predictions, each of which is evidence for the validity of the claim. If I say the earth is round, we can check this claim by talking to experts, flying to space and looking at the earth, etc., all of which are predictions of subjective experiences we will have.
So an "objective" argument means I am guessing something about the future world: that it will look one way and not another. Morality is in a totally different domain. I would like the future world to have less suffering, not more, but this is a longing, not a prediction that can be refuted. So while morality involves some sense of logic and being systematic, at its core it is not an objective question, because it can't be decided with predictions.
I don't think we need to solve ethics in order to work on improving the ethics of models. Ethics may be unsolvable, yet some AI models are and will be instilled with some values, or there will be some system to decide on the value selection problem. I think more people need to work on that.
A great post relating to the value selection problem was just published:
Beyond Short-Termism: How δ and w Can Realign AI with Our Values
 
Do we need a yearly strategy paper for the EA movement?