Uladzislau Linnik

Organization building @ in transition for impact
0 karma · Joined · Seeking work

Bio

Hello, I’m Uladzislau. I’m a generalist in a full-time career transition for impact, aiming to use my most productive hours and real-world organization-building skills to tackle important, neglected, and solvable problems—by joining or creating a high-impact organization.

My current interest is humanity’s resilience in the face of AI and global catastrophic risks, especially our epistemic and cognitive readiness for ongoing changes. However, I’m open to contributing to other high-impact causes if there’s evidence for greater counterfactual impact and a strong personal fit.

I have spent 10+ years building evidence-driven operations, helping small businesses grow into international companies amid crises—by establishing sustainable departments in finance, project management, PR, and more. I’m currently engaged in EA-related 1:1 advisory and accelerator programs, searching for ways to refine my focus and test my skillset against the challenges high-impact organizations face. I’m open to skilled volunteering and test assignments.

How others can help me

  • As I begin posting here and work to overcome imposter syndrome, I especially appreciate feedback—on both what I get wrong and what I get right.
  • If you know of initiatives that could benefit from my work time, organization-building skills, or research assistance, I’d be grateful for an introduction.

How I can help others

  • As a full-time career transitioner, I’ve had the opportunity to test various transition routes and am happy to share my experience with others earlier in their journey.
  • If you have a project or research assignment needing organization-building or hands-on support, I’d be glad to consider volunteering or taking on a test task.

Comments

I am curious to read more about the EA community's current takes on humanity's epistemic resilience in view of growing AI use. In other words, I'm wondering: what are the risks that our capacity for curiosity, agency, critical thinking, sourcing and vetting information, and evaluative decision-making might deteriorate as AI usage increases? How big, tractable, and neglected are these risks, especially as AI systems may reduce our incentives to develop or use these skills?

My intuition is that this could create challenges even with aligned AI and without direct misuse: we humans could disempower ourselves voluntarily out of mere laziness or lost skills. The risk could be aggravated if, following the "Intelligence Curse" logic, the "powerful actors" see no reason to keep humans epistemically capable. Besides, it could threaten AI alignment itself if our capacity to make informed decisions about AI governance diminishes.

I'm only now learning the EA ways and hope that in time I'll be able to evaluate for myself whether this is a valid issue or whether I'm just doomsaying. However, if I imagine that for AI to go well we need both AI aligned with humans and humans prepared for AI, my impression is that current EA efforts lean more towards the former than the latter. Is my estimate sensible?

I'm far from claiming to have conclusive evidence, but I've made some observations that fuel the above subjective impression. I draw them from reflecting on the information bubble I'm building around myself as I delve into effective altruism.

For example, as I searched for skilled volunteering opportunities, I reviewed 20 AI orgs listed on EA-related opportunity boards (EA, 80,000 Hours, ProbablyGood, AISafety, BlueDot Impact, Consultants for Impact). I tried to be impartial, though if I had any bias, it was toward preferring work on epistemic resilience. Of these organizations, I found 4 that tackle the issue more or less explicitly, focusing on the human side, compared to 16 that seem to mainly address the AI side.

Also, following 80,000 Hours problem profiles and AI articles, the BlueDot Impact Future of AI course, the EA Forum digest, and several AI newsletters over the recent 1–2 months, supplemented with some quick googling, I found 5 more or less explicit mentions of the topic of preparing humans for AI. While I didn't count precisely, the proportion of articles focusing on AI-side problems (e.g., compute, AI rights, alignment) seemed subjectively much higher. Two of those 5 specifically tackle intentional misuse; the other 3 address more general changes in cognitive patterns, including but not limited to malevolent usage, e.g., Michael Gerlich's 2025 study "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking". I am asking about the latter 3: the broader implications for our thinking, where bad intentions are not the key risk factor.

Does the EA community have any view on the topic of our readiness to use AI without degrading? Is my impression that the EA community leans more towards the AI side of the issue than the human side sensible? Is this a problem worth exploring further? Are there any drafts on the topic waiting to be published?