Hello everyone,
I am relatively new to Effective Altruism and have been reflecting on some ethical questions surrounding AI and personal responsibility.
Recently, I’ve been thinking more deeply about the ethical implications of AI use, partly prompted by discussions I encountered in spaces such as the subreddit r/antiai. I understand that AI is a frequent topic within Effective Altruism, but I am particularly interested in aspects that seem less discussed — especially its environmental and everyday ethical dimensions.
1. Environmental Impact of AI
I’ve come across claims that training and operating large AI models can produce a significant carbon footprint, particularly for image and video generation. For example, some posts suggested that certain AI deployments (such as Grok) have had localized environmental effects, including the drying of nearby bodies of water and health issues among residents near data centers.
I’m curious about what reliable evidence exists on this topic. How substantial is AI’s environmental impact compared to other industries, and how should we approach it from an EA or longtermist perspective?
I’ve read that generating AI video is particularly energy-intensive, reportedly much more so than text generation. According to ChatGPT, producing a one-minute video with Sora 2 might consume around 1 kWh of electricity, roughly equivalent to powering an average U.S. household for one hour. Of course, this estimate may well be inaccurate: the actual energy use of Sora 2 has not been publicly documented, and ChatGPT itself is not a reliable source for such figures in the first place. I haven’t personally used tools like Sora or image generation models, partly because of their cost, but learning about these potential impacts makes me wonder whether it would be ethically responsible to use them in the future.
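As a rough sanity check on that household comparison (and assuming the commonly cited figure of roughly 10,500 kWh per year of electricity use for an average U.S. household, which is my own assumption rather than part of the original estimate): 10,500 kWh/year ÷ 8,760 hours/year ≈ 1.2 kWh per hour, so 1 kWh corresponds to a bit under one hour of average household consumption. The comparison therefore seems about the right scale, even if the underlying 1 kWh-per-minute-of-video figure is itself uncertain.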
2. Intellectual Property and Originality
Another issue I’ve been considering is the question of how AI models are trained on copyrighted or proprietary data. To what extent can the outputs of these models be regarded as original, and does their use raise serious ethical concerns — particularly for individuals (like myself) who use tools such as GitHub Copilot or Microsoft Copilot in their work?
3. Personal Responsibility and Lifestyle
On a personal level, I’ve realized that I’ve been using AI tools for some time without much reflection on their broader ethical implications. I’m not particularly concerned about “Terminator”-style scenarios, but I do care about how my choices affect others and the planet.
Frankly, I struggle to understand why some people view AI as an existential risk due to potential takeovers. I find it somewhat misguided to prioritize funding for speculative “sci-fi” scenarios rather than addressing urgent issues such as global poverty and preventable deaths. I have not seen convincing evidence that AI poses an existential threat in that sense.
While I recognize that AI could reduce employment in certain sectors, I don’t believe it will lead to mass unemployment or make universal basic income necessary for everyone. More importantly, I feel there is little that individuals can do to prevent such large-scale developments through personal lifestyle choices, so I’m uncertain how this should factor into one’s personal moral responsibilities.
Beyond AI, I often wonder what kinds of everyday actions truly matter from a moral perspective — for example, how much personal energy or resource consumption is ethically appropriate for an individual.
I maintain a mostly plant-based diet primarily for animal welfare reasons (with the environmental benefits being a welcome secondary effect), but I have not been particularly conscientious about electricity or water use. In elementary school, I was taught that actions such as turning off lights, limiting showers to two minutes, and conserving water while brushing teeth were critical for addressing global warming. As an adult, these practices now seem to have minimal overall impact.
In daily life, my electricity and water consumption are about average, though I occasionally leave lights on overnight or take long showers. I generally try to avoid waste, though mostly for practical reasons such as managing utility costs rather than environmental concerns.
From an EA standpoint, how should one think about the balance between personal lifestyle choices and broader efforts to create positive impact?
In summary, I’m primarily interested in understanding my personal role in relation to AI and issues like global warming and water usage. While I recognize that much of this lies beyond my control, I’d like to know which aspects fall within my moral responsibility.
I would greatly appreciate any insights or resources on these questions.
Thanks.
