AI can solve all EA problems, so why keep focusing on them?

Superintelligence could solve everything.

If you believe AGI (or superintelligence) will be created at some point, you should also acknowledge how capable it would be at addressing effective altruism (EA) problems like global health and development, pandemics, animal welfare, and impactful decision-making.

If you don't believe superintelligence is possible, then it makes sense to keep pursuing other EA problems. But if you do believe it's coming, why keep spending time and money on problems that will likely all be resolved, assuming superintelligence arrives and works out in our favor?

I've come up with a few potential reasons why people continue to devote their time and money to non-AI EA causes:

  • They aren't aware of the potential capabilities of superintelligence.
  • They think superintelligence won't arrive for a long time, or at least they remain uncertain about the timeframe.
  • They're passionate about a particular cause, and AI doesn't interest them.
  • They think that present suffering matters intrinsically: the suffering happening now has a moral weight that can't be dismissed.
  • They might even think that superintelligence will prove unable to address particular problems.

It's widely believed (in the AI safety community, at least) that the development of sufficiently advanced AI could cause major catastrophes, a global totalitarian regime, or human extinction. Those risks seem to me more pressing and more critical than any of the reasons above for focusing on other EA causes. I bring this up because I'd like to see more time and money put into AI governance and safety, particularly work on the alignment problem through automated AI labor.

So, do any of the reasons above apply to you? Or do you have different reasons for not focusing on AI risks and rewards?