Hello Reddit!!

I’m William MacAskill (proof: picture and tweet) - one of the early proponents of what’s become known as “effective altruism”. I wrote the book Doing Good Better (and did an AMA about it 7 years ago.)

I helped set up Giving What We Can, a community of people who give at least 10% of their income to effective charities, and 80,000 Hours, which gives in-depth advice on careers and social impact. I currently donate everything above £26,000 ($32,000) post-tax to the charities I believe are most effective.

I was recently profiled in TIME and The New Yorker, in advance of my new book, What We Owe The Future — out this week. It argues that we should be doing much more to protect the interests of future generations.

I am also an inveterate and long-time Reddit lurker! Favourite subreddits: r/AbruptChaos, r/freefolk (yes I’m still bitter), r/nononoyes, r/dalle2, r/listentothis as well as, of course r/ScottishPeopleTwitter and r/potato.

If you want to read What We Owe The Future, this week redditors can get it 50% off with the discount code WWOTF50 at this link.

AMA about anything you like! [EDIT: off for a little bit to take some meetings but I'll be back in a couple of hours!]

[EDIT2: Ok it's 11.30pm EST now, so I'd better go to bed! I'll come back at some point tomorrow and answer more questions!]

[EDIT3: OMFG, so many good questions! I've got to head off again just now, but I'll come back tomorrow (Saturday) afternoon EST!]

Comments: 383 • Responses: 44

dydxdz126 karma

Hello! I've listened to you on Sam Harris and on the 80,000 Hours podcast, and done quite a bit of reading (though not your book yet!)

I have two questions if possible:

  1. Why are effective altruism and longtermism almost always used interchangeably? Can't you be an EA but place huge value on the people who are alive and suffering *today*, much more than on any possible improvements to the trillions of people in the future?

  2. If so, then how can one place more value on improving future lives (of those who don't exist) than on improving current lives (of those who do)? This is related to a brief point Sam made in his discussion with you about the asymmetry between people suffering *now* and people who do not exist not suffering in the future.

Thank you!

WilliamMacAskill103 karma

  1. Aw man, this is a bad state of affairs if it seems they’re used interchangeably!! EA is about trying to answer the question: “How can we do as much good as possible with our time and money?” and then taking action on that basis (e.g. giving 10%, or switching career). But the answer to that is hard, and I don’t think anyone knows the answer for certain. So, yes, some people in EA come to the conclusion that it’s about positively impacting the long-term future; but other people think the best way of doing good is improving global health and wellbeing; other people think it’s to end factory farming, and more. In fact, most funding in EA still goes to global health and development.

  2. My inclination is to place equal moral value on all lives, whenever they occur. (Although I think we might have special additional reasons to help people in the present - like your family, because you have a special relationship with them, or someone who has benefitted you personally, because of reciprocity.)

xoriff41 karma

Re: point 2, can't you take that to the logical extreme and say "there are an effectively infinite number of future humans. Therefore all present humans are infinitely unimportant by comparison"?

PM_ME_UTILONS40 karma

The common EA response is moral uncertainty: yeah, maybe that logically follows, but maybe we should be discounting future people, so let's still care about the present in case we're wrong.

At any rate, this only becomes a serious problem when we start talking about "we already put 2% of GDP towards helping the distant future, should we really be increasing this?" At the moment this is so fringe that we're not thinking long-term enough even if you do apply a discount rate.
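
A side note on why a discount rate blocks the infinity worry: with a constant annual discount rate r, the weight on a person living t years from now is 1/(1+r)^t, and the total weight across all of time is a convergent geometric series, not an infinity:

Σ_{t=0..∞} 1/(1+r)^t = (1+r)/r

At r = 1%, for example, the entire future carries the combined weight of about 101 present people.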

ucancallmealcibiades16 karma

The user name and thread combo here is among the best I’ve ever seen lmao

WilliamMacAskill11 karma

I wish I knew how to PM utilons. If someone figures it out, can I get some?

TrekkiMonstr7 karma

With 2, do you not account for risk? Risk that the research doesn't pan out, obviously, but what about the risk that the problem is solved? If I set aside $5000 for malaria prevention, but invest it so I can help more people -- let's say I get 7% real return, so in ten years I can save two lives, in twenty years four, in thirty years eight. So I decide to put the money away and wait thirty years -- but then they somehow otherwise solve malaria, and now my money is useless. So wouldn't that translate to a discounting rate for those future lives?

WilliamMacAskill14 karma

The questions of discounting and "giving now vs giving later" are important and get complex quickly, but I don't think they alter the fundamental point. I wanted to talk about it in What We Owe The Future, but it was hard to make both rigorous and accessible. I might try again in the future!

In my academic work, I wrote a bit about it here. For a much better but more complex treatment, see here. For a great survey on discounting, see here.
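
For concreteness, the compounding arithmetic in the question above is easy to check; a minimal Python sketch (the 7% real return and the $5,000 cost-per-life figure are the commenter's assumptions, not vetted numbers):

```python
# Quick check of the invest-then-give arithmetic from the question above.
# Assumptions are the commenter's, not established figures:
#   - $5,000 saves one life via malaria prevention today
#   - invested money earns a 7% real (inflation-adjusted) annual return
COST_PER_LIFE = 5_000
REAL_RETURN = 0.07

for years in (0, 10, 20, 30):
    value = COST_PER_LIFE * (1 + REAL_RETURN) ** years
    print(f"after {years:2d} years: ${value:>9,.0f} ~ {value / COST_PER_LIFE:.1f} lives")
# after  0 years: $    5,000 ~ 1.0 lives
# after 10 years: $    9,836 ~ 2.0 lives
# after 20 years: $   19,348 ~ 3.9 lives
# after 30 years: $   38,061 ~ 7.6 lives
```

Waiting only wins if the investment return beats both any discount rate applied to future lives and the risk, raised in the question, that the best giving opportunities disappear in the meantime - which is exactly the "giving now vs giving later" trade-off.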

13AngryMen5 karma

When you say future lives matter, you're being ambiguous about whether we should be bringing people into existence or just caring for people who are likely to exist. You seem not to make that distinction, even though to most people it makes a huge difference. For example, most people care about how climate change will affect future people, but few except some right-wingers believe we have an obligation to have children and maximise the future population.

WilliamMacAskill8 karma

I talk about this issue - "population ethics" - in chapter 8 of What We Owe The Future. I agree it's a very important distinction.

What I call "trajectory changes" - e.g. preventing a long-lasting global totalitarian regime - are good things to do whatever your view of population ethics. In contrast, "safeguarding civilisation", such as by reducing extinction risk, is clearly very important because it protects people alive today; but whether extinction would also be a moral loss, insofar as it causes the non-existence of future life, is more philosophically contentious. That's what I dive into in chapter 8.

philosophyisthebest92 karma

Knowing that it costs less than $5,000 to save a life, it can be tough to manage feelings of guilt whenever spending anything on myself. When you chose to give everything you earn above $32,000, how did you come to terms with the fact that giving, say, everything you earn above $27,000 would save an extra life each year?

WilliamMacAskill131 karma

Yeah, it’s really tough. When I first started giving, I really stressed out over everyday purchases. But that’s not a healthy or effective way to live.
I’ve had interviewers criticise me for giving too little (giving more could save a life!) and for giving too much (you’ll turn people off!).
Ultimately, there will always be some amount of arbitrariness. I think a good strategy is to take some time to think about it, decide on a policy, then stick to that.

randomusername847253 karma

I'm wondering about this too, but from a different angle. I live in a rich country with relative stability. But my country's political direction is all about "fuck poor people, fuck public services".

So I'm incentivised to save money myself. I grew up really poor and I'm lucky I broke out of poverty and helped my family too. But we're still a long way from financial security and any number of emergencies could knock us back into generational poverty.

Functionally, my partner and I live off about £15k per year (and we live very comfortably on that, but there's not much fat left to trim). I'd love to give everything else to charity but doing so would a) be committing myself to working until I'm 67 minimum and b) be eroding my own family's economic security.

I don't understand how someone can commit to giving away everything above a normal salary a year. Unless it is an emotive decision rather than logical. Or they already have significant assets or security in another form (in which case it's a bit disingenuous to imply they live off $32k as in reality they live off their significant assets).

Having said all that, I think it's amazing and I love this guy's work. I just don't see how giving to charity is an effective social mechanism for most people who don't yet themselves have a very high level of economic security.

Edit to add: this is why, in principle, I support "charity work" being funded by a higher tax rate. I can't guarantee a charity will be there if I need it, but if my country is stable and socially altruistic then I can hopefully depend on government-funded services if I'm on hard times.

Giving money to charity is only worth it IMO once you can guarantee you'll never need that money for your own survival. Because the charity might not be there when you need it.

WilliamMacAskill3 karma

Yeah, as I say in another comment, I'm really not recommending this for everyone. (Although it sounds like I actually live on several times the amount you do, if you split £15k with your partner!). I don't want to encourage people to put themselves into a precarious financial situation - it's about how much good you do over your whole life, not just the next year.

And I'm well aware that I'm in a particularly privileged situation - I have wonderful friends and a wonderful relationship, I have job security, and I love my work so I'm happy to keep doing it. And I'm able to save despite my giving.

eddielement83 karma

What are some of the most promising EA projects that we can expect to see pan out or not in the next few years?

WilliamMacAskill154 karma

There’s so much going on nowadays that it’s hard to keep on top of it all!
I’ve been a particular fan of the Lead Exposure Elimination Project, which works to get lead paint banned in poorer countries, as it has been in richer countries. They’ve already had success in Malawi.
Another great project is Alvea, a new EA biotech start-up. Alvea is creating a vaccine platform that will protect rich and poor people alike from evolving variants of COVID-19, and help protect us against even more devastating pandemics in the future.
I’m also excited about far-UVC (low-wavelength) lighting, which can potentially sterilise a room while being completely safe for human beings. If we can get the costs down, run full efficacy and safety trials, and then install these bulbs as part of building codes all around the world - potentially, we could prevent the next pandemic while eliminating most respiratory diseases along the way.

semideclared79 karma

What did you think of Chidi Anagonye's life?

WilliamMacAskill61 karma

In a way, he's totally right - every major decision we make involves countless moral considerations on either side.

His mistake, though, is that he wants to feel certain before he can act. And that means never doing anything. But if we want to make the world better, we need to make decisions, even despite our uncertainty.

Maybe he'd have benefitted from reading another of my books, Moral Uncertainty, which is about how to do just that!

Grumpenstout72 karma

Do you think you should have kids? Why or why not? Regardless of the above... how likely do you think you are to decide to have kids one day? Why or why not?

WilliamMacAskill128 karma

I don’t currently plan to have kids, although I’m not ruling it out, either. It’s not something that I particularly want for myself, personally, and I also just can’t really imagine, for my life, right now, how I’d fit it in alongside the work I do.
As for whether one in general should have kids - I talk about this more in What We Owe The Future. It’s obviously a deeply personal choice, but I do think that having a family and raising your children well is one way of making the world a better place. I think the common idea that it’s bad to have kids because of their climate impact isn’t right, for two reasons.
First, you can more than offset the carbon impact: suppose, if you have a child, you donate £1000 per year to the most effective climate mitigation non-profits. That would increase the cost of raising a child by about 10%, but would offset their carbon emissions 100 times over.
Second, looking only at the negative impacts of children is looking at just one side of the ledger. People have positive impacts on the world, too: they contribute to society through their work and taxes and their personal relationships; they innovate, helping drive forward technological progress; and they contribute to moral change, too. If we’d only ever had half as many people, we’d all still be farmers, with no anaesthetic or medical care.
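
As a back-of-the-envelope check on the offsetting claim above, here is a minimal sketch; the per-tonne cost is reverse-engineered from the "100 times over" claim and the emissions figure is a rough per-capita estimate, so treat both as illustrative assumptions:

```python
# Back-of-envelope version of the offsetting claim above.
# Assumptions (illustrative only, not vetted figures):
#   - raising a child in the UK costs about £10,000/yr (figure from this thread)
#   - one person emits roughly 10 tonnes of CO2 per year
#   - the "100x" claim implies offsets at roughly £1 per tonne averted
DONATION = 1_000          # £ per year
CHILD_COST = 10_000       # £ per year
EMISSIONS = 10            # tonnes CO2 per person per year (rough)
COST_PER_TONNE = 1        # £ per tonne averted (implied by the claim)

tonnes_averted = DONATION / COST_PER_TONNE
print(f"extra cost of raising a child: {DONATION / CHILD_COST:.0%}")           # 10%
print(f"offset multiple: {tonnes_averted / EMISSIONS:.0f}x annual emissions")  # 100x
```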

27231425 karma

He's not going to be able to do it on that budget! I pay more than that on rent alone. Children require a lot more space than living single does.

LeonardoLemaitre33 karma

He said on a podcast with Ali Abdaal that if he had kids, the budget would change.

WilliamMacAskill26 karma

That's right. The typical expenditure to raise a child in the UK is about £10,000/yr. So I'd allocate something like that amount (split with my partner) per child if I had kids.

get_the_reference_4 karma

First thing I thought of. Second was: does the $32k include his partner's earnings? If I didn't have a wife and three kids, I could live on that no prob.

WilliamMacAskill10 karma

No, my partner and I have separate finances. And I agree, it's really more than enough to live well on!

JustABigDuck38 karma

Hi Will!

Do you think that EA activists should take a welfare approach to animal issues --- trying to improve the conditions on factory farms --- or instead an abolitionist, everyone-should-go-vegan approach? The former seems the most popular approach in EA circles, but with increases in population and wealth leading to more meat consumption, I worry that any improvements would just be offset by more animals being abused and killed for food.

WilliamMacAskill55 karma

I’m generally more sympathetic to the “incrementalist”, welfare-improving interventions. That’s really just a matter of seeing what’s worked when it comes to animal activism. The corporate cage-free campaigns run by organisations like The Humane League, or Mercy For Animals, have just had huge success - getting almost all retailers and fast food restaurants to phase out battery eggs, preventing hundreds of millions of chickens from suffering in battery cages.
Partly, also, it’s because I think the suffering of chickens and pigs on factory farms is so bad - if we could get rid of factory farming of chickens and pigs, I think we’d remove at least 90% of the suffering of farmed land animals.

Portul-TM34 karma

What did it take to set up a charity like 80,000 Hours? What struggles did you go through doing it?

WilliamMacAskill73 karma

Thanks for asking - I had bigger struggles setting up Giving What We Can, as that was the first organisation I helped set up. I was very nervous about actually doing things in the world at the time - like, it seemed so intimidating to “found an organisation”! I wouldn’t have been able to if I hadn’t been working with Toby Ord, and if I didn’t feel a sense of moral urgency.
The main struggles were: feeling like an imposter; feeling out of my depth; and genuinely *being* out of my depth and not having experience with basic things like organisational structures and management.
I also had depression and anxiety at the time, and the stress of setting something up made it harder - I’ve worked on that a lot over the last ten years, and I’m a lot happier now.

Future-Hospital480533 karma

How do you evaluate the effectiveness of preventative organizations? E.g., if an organization claims to be working on "supervolcano prevention"--an existential risk!--and then there's no supervolcano for 20 years, is giving them money more/less effective than malaria nets? (This has natural extensions to AI safety research, pandemic prevention, etc).

WilliamMacAskill40 karma

For work to reduce existential risk, there's certainly a challenge that it's hard to get good feedback loops, and it's hard to measure the impact one is having.

As the comment below suggests, the best you can do is to estimate by how much your intervention will reduce the likelihood of a supervolcanic eruption, and what existential risk would be conditional on such an eruption. For supervolcanoes specifically, the hope would be that we could have a good enough understanding of the geological system that we can be pretty confident that any intervention is reducing the risk of an eruption.

Speaking of supervolcanoes - a couple of years ago I made a friend while outdoor swimming in Oxford, and talked to him about effective altruism and existential risk. He switched his research focus, and just this week his research on supervolcanoes appeared on the cover of Nature! (It's hard to see but the cover says: "The risk of huge volcanic eruptions is being overlooked.")
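
The estimation approach in the first paragraph amounts to a simple expected-value product; a minimal sketch with invented numbers:

```python
# Expected existential-risk reduction, decomposed as described above.
# All probabilities are invented for illustration.
p_eruption = 1e-4                 # baseline chance of eruption this century
p_eruption_after = 8e-5           # chance with the intervention in place
p_xrisk_given_eruption = 0.01     # chance an eruption becomes existential

reduction = (p_eruption - p_eruption_after) * p_xrisk_given_eruption
print(f"expected x-risk reduction: {reduction:.1e}")  # 2.0e-07
```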

LeftNebula122632 karma

Hi Will,

Is a utilitarian (or more broadly consequentialist) worldview necessary for longtermism and effective altruism? What reason do those with a more deontological or virtue ethical approach toward morality have to support your philosophy?

How do you deal with moral fanaticism in effective altruism? What reason do you have to spend time with family or friends, when that time could be used more effectively generating future utility by any number of methods?

And finally, what are your thoughts on moral non-realism? Is effective altruism undermined by the possibility of an error theory or other non-cognitivist metaethics?

If there are other sources that deal with these issues, I would love for you or anyone else to share them. Thank you!

WilliamMacAskill26 karma

I'm hoping to have a longer Twitter thread on this soon. Emphatically: a utilitarian or consequentialist worldview is not necessary for longtermism or effective altruism. All you have to believe is that the consequences matter significantly - which surely they do. (John Rawls, one of the most famous non-consequentialist philosophers ever, said that a moral theory that ignored consequences would be "irrational, crazy.")

So, for example, you can believe that people have rights and it's impermissible to violate people's rights for the greater good while also thinking that living a morally good life involves using some of your income or your career to help others as much as possible (including future people).

Indeed, I think that utilitarianism is probably wrong, and I'm something like 50/50 on whether consequentialism is correct.

WilliamMacAskill7 karma

Oh, and then on meta-ethics:

Error theory is a cognitivist moral view - it claims that moral judgments express propositions. It's just that all positive moral claims are false. On non-cognitivism, moral judgments are neither true nor false.

I'm actually sympathetic to error theory; maybe I think it's 50/50 whether that or some sort of realism is true. But given that I'm not certain that error theory is true, it doesn't affect what I ought to do. If I spend my life trying to help other people, then on error theory I've made no mistake. Whereas I really might have made a mistake if I act selfishly and moral realism (or subjectivism) is true. So the mere possibility of error theory isn't sufficient to undermine effective altruism.
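
The structure of this argument is a dominance calculation under uncertainty; a minimal sketch (the 50/50 credence comes from the comment above, the value scale is an arbitrary placeholder):

```python
# Dominance argument from the comment above, made explicit.
# Under error theory nothing matters, so every action scores 0;
# under realism/subjectivism, helping beats acting selfishly.
# The 0-1 value scale is an arbitrary placeholder.
p_error_theory = 0.5
value_if_realism = {"help others": 1.0, "act selfishly": 0.0}

for action, v in value_if_realism.items():
    expected = p_error_theory * 0.0 + (1 - p_error_theory) * v
    print(f"{action}: expected value {expected:.2f}")
# Helping weakly dominates: never worse, strictly better if realism holds.
```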

leigh895931 karma

How many generations after you die do you care about? And do you care about all of them equally? What's the shape of that curve?

WilliamMacAskill46 karma

I think we should care about all future generations! If our actions are going to cause people to suffer harm, it doesn’t matter whether they’re going to live a hundred years from now or a million years from now. All lives have equal moral worth. That said, we might sometimes have special moral reasons to help people in the present - because we have a special relationship to them (e.g. our family members), or because they’ve benefitted us and we owe them a fair return (e.g. neighbours who helped us out during a difficult time). That’s totally compatible with longtermism!

knorp30 karma

  1. If people ten thousand years ago tried to do things that would be helpful to us today, they wouldn't have succeeded. Why are you confident today's longtermists can offer anything useful to people many generations removed from us?
  2. Obviously EAs have done good work on present-day suffering in the developing world (bed nets, etc.) But in terms of preventing future x-risk from AI, what do you feel that longtermists have concretely accomplished so far?

WilliamMacAskill5 karma

  1. This is a really unusual time in human history - we’re confronting the emergence of extremely powerful technologies, like advanced AI and biotechnology, that could cause us to go extinct or veer permanently off course. That wasn’t the case 10,000 years ago. So there just weren’t as many things you could do, 10,000 years ago, to protect the survival and flourishing of future generations.
    Even so, I do think there were some things that people could have done 10,000 years ago to improve the long-term future. In What We Owe The Future I talk, for example, about the megafaunal extinctions.
    What’s particularly distinctive about today, though, is how much more we know. We know that species loss is probably irrevocable, and that this would be true for the human species as well as for non-human animal species; we know that the average atmospheric lifetime of CO2 is tens of thousands of years. That makes us very different from people 10,000 years ago.
  2. On the longtermist accomplishments: I agree there’s much less to point to than for global health and development. The clearest change, for me, is the creation of a field of AI safety - I don’t think that would have happened were it not for the research of Bostrom and others.

DoctorBlazes23 karma

How much do you think the average person should be donating to charity?

WilliamMacAskill52 karma

I think it really just depends on your personal situation. If you’re a single parent struggling to make ends meet and give your child a better life, I think it’s entirely reasonable not to donate at all (though it’s especially admirable if you do find a way to donate). If you’re a lawyer or doctor making a comfortable salary, donating more makes a lot more sense. So I want to avoid universal prescriptions here - “average” people are in very different circumstances, and we need to be aware of that.
That said, Giving What We Can recommends 10%, and I think that’s a reasonable bar for most middle-class members of rich countries, like the UK or USA.

mercrono21 karma

Do you know who Qualy is?

WilliamMacAskill29 karma

No :(

Is it you?

endless28620 karma

What role do you think animal welfare plays in longtermism? At the moment, factory-farmed animals outnumber people in civilization by a significant factor (maybe 5 to 1? not including fish). Even if alternative meat/milk/eggs became mainstream and reduced factory farms by an order of magnitude, there might still be a huge number of animals being exploited, also in the far future. Is this something we should think about? (I feel that usually when discussing longtermism people ignore non-human animals.)

WilliamMacAskill8 karma

I agree that the suffering we currently inflict on non-human animals is almost unimaginable, and we should try to end factory farming as soon as we can. I think we should certainly worry about ways in which the future might involve horrible amounts of suffering, including animal suffering.

That said, all things considered I doubt that a significant fraction of beings in the future will be animals in farms. Eventually (and maybe soon) we'll develop technology, such as lab-grown meat, that will make animal suffering on factory farms obsolete. And, sooner or later, I expect that most beings will be digital, and therefore wouldn't eat meat.

oldschool6817 karma

I've worked for EA orgs in the developing world, and one thing that confuses me is how little effort there seems to be to engage people in, say, Africa in EA or longtermism.

I get it from a simplistic fundraising approach today, but surely that creates a major risk for the sustainability of longtermism as a philosophy, if changes to global power structures and birth rates over decades and centuries mean that future norms are no longer set by Euro-centric culture?

WilliamMacAskill10 karma

I absolutely want to see more EA outreach and engagement outside of Europe, North America and Australasia, and I think we’re already starting to see changes in this direction.
Longtermists in Africa are already doing some great work there. One program I’m excited about is the ILINA Fellowship in Nairobi, which just enrolled its first cohort (https://twitter.com/ILINAFellowship/status/1559839555075055616).
And I’ve been working a little bit with African legal scholar Cecil Abungu, on a piece on longtermism and African philosophy.
The EA community is also starting to engage more with India; for example, the first independent EA Global conference in India has been scheduled for 2023, and I plan to go.
That said, there’s definitely much more that Western EAs can and should do to engage with and learn from people in the rest of the world.

BruceTsai11 karma

Hey Will, thanks for doing the AMA!

If people 500 or 1000 years ago took longtermism and applied it to their values, they might justify more radical action in order to meet the goals of [insert religion here], for purposes of eternal salvation / heaven etc. There's probably no set of values from previous generations we'd want to "give longtermism to" if it also meant their values were locked in.

Is there any reason to believe that people 500 or 1000 years from now won't look at us in the same way?

2) Had we invested massively in iron lungs for polio 100 years ago for the sake of future generations, much of that investment would have been wasted when the polio vaccine came out.

Is there any reason to believe future generations won't be better at dealing with future problems than us, apart from near-term extinction events, or near-term events that will stop future generations from having the capability to solve their problems?

WilliamMacAskill3 karma

  1. I think you're absolutely right that more enlightened people in the future will look back at us and think that our values are in major error. I write about that in a recent Atlantic piece. That's why I think we need to create a world with a great diversity of values, where the best arguments can win out over time - we shouldn't try to "lock in" the values we happen to like today. I talk about this more in chapter 4 of What We Owe the Future.

  2. I think that the things longtermists should focus on primarily are the ones you mention - things that take away options from future generations, such as extinction, civilisational collapse, and value lock-in. These are what I focus on primarily in the book.

cyberpunkhippie11 karma

Hi Will,

Book recommendations?

What are three books everyone should read (excluding your books of course, or those of your collaborators - Peter Singer, Toby Ord, Nick Bostrom etc.)?

WilliamMacAskill22 karma

I’d say:
Moral Capital, by Christopher Leslie Brown
The Scout Mindset, by Julia Galef
The Secret of Our Success, by Joe Henrich

cyberpunkhippie4 karma

Thanks!

Any fictional character that you identify with? Any sci-fi/ speculative fiction book or tv-show that explores the themes you are working on?

I think Hari Seldon from Foundation may be the ultimate longtermist!

WilliamMacAskill6 karma

Haha, that's fair. Although I suspect we can't make quite as precise predictions as Hari Seldon thinks we can.

As a teenager I was very inspired by Prince Mishkin in The Idiot, and Alyosha in The Brothers Karamazov, although I can't say I identify with either of them.

I'd really like there to be more sci-fi that depicts a positive vision of the future - there's really surprisingly little. I'm helping run a little project, called "Future Voices", which involves commissioning a number of writers to create stories that depict the future, often in positive ways. And I gave it a go myself, in an Easter egg at the very end of What We Owe The Future.

Olavvaiyar10 karma

Hey Will, congrats on the book! Looking forward to reading it. Personally, one of the most striking parts of your New Yorker profile was the descriptions of consistent austerity, keeping with the Singer-ian, personal sacrifice aesthetic of early EA and earning to give. Besides the obvious global health successes, such ambitious Giving What We Can pledges that set a cap on personal consumption or apportioned large %s of income to be donated are part of what moved me into the EA movement. How do you think the growth of longtermism as a cause interacts with these personal austerity (eg GWWC/Further pledge) parts of EA principles and practice?

WilliamMacAskill8 karma

I’m really glad that got you into the movement! Toby Ord telling me about his plans to give away most of his income, and being so enthusiastic about it, was a very big part of why I was motivated to help him set up Giving What We Can.
The interaction between personal austerity and longtermism is complex. One thing is that within at least some of the cause areas that I think are currently top-priority from a longtermist perspective (pandemic preparedness, AI safety and governance, and international cooperation), there’s a bigger bottleneck on people who understand the relevant areas and are willing to work in them than there is on funding. Compare that with global health and development, where there are known extremely effective and evidence-based interventions that just require enormous amounts of funding to scale up.
What does this imply? If you want to work on pandemic preparedness or AI safety, it will often make sense to invest in developing your skills or putting yourself in a position to change careers - rather than focusing on donations. Right now, funding isn't the biggest bottleneck for making progress on longtermist cause areas.
This might change in the future, though, if we find longtermist projects that can scale enormously with funding. Some programs in pandemic preparedness might be massively scalable - we’re working on that!
And, as you note, I’m giving the same amounts I’d always planned!

fakemews10 karma

You shared that you recently donated to the Lead Exposure Elimination Project as a way to broadly alleviate cognitive impairments.

Are there domains where you think broad cognitive improvements could be particularly impactful from a Longtermist lens? For example, are there particular skills/capabilities that, if many people learned, you'd expect it to have a positive impact on the long-term future?

Thanks for all you do!

WilliamMacAskill12 karma

One skill that is particularly important, I think, is making well-calibrated predictions about the future. That’s absolutely essential for making good policy and wise decisions, and it’s pretty surprising how overlooked some of these basic skills are. If you’re interested in forecasting the future, and how we can improve our ability to do so, I recommend looking at Philip Tetlock’s work on “superforecasters” and sites like Metaculus and the Good Judgment Project.
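
Calibration in Tetlock's sense is directly measurable; here is a minimal sketch of the Brier score, the standard accuracy metric in forecasting tournaments (the example forecasts are invented):

```python
# Brier score: mean squared error between stated probabilities and outcomes.
# Lower is better; always guessing 50% scores 0.25. Forecasts are invented.
forecasts = [0.9, 0.7, 0.2, 0.6]   # stated probabilities that each event happens
outcomes  = [1,   1,   0,   0]     # what actually happened (1 = yes, 0 = no)

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.125
```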

TopNotchKnot8 karma

Hi Will, excited to read the book! One question I have: do you think there is an area of risk the longtermist community is under-evaluating? If so, what is that risk?

Second question: what does your workout routine look like?

WilliamMacAskill9 karma

Good question!
I think the longtermist community is absolutely right to be concerned with risks from emerging technologies like AI and biotech (not to mention familiar technological risks, like risks from nuclear weapons). But I think we could do more to think about other kinds of risks. In particular, I think that the quality of a society’s values is enormously important for shaping the long-term future. By helping to bring about the end of slavery, the abolitionists increased the welfare of future generations. And they did that largely by improving their society’s values.
So how do we continue to improve our society’s values, and prevent them from being corrupted by the allure of authoritarianism and fascism? That’s a really difficult problem, but it’s a really important one, and I think we should think about it more. It’s particularly important given potential rapid advances in AI, which could give unprecedented power to a small number of actors, and means that the values that are predominant this century might persist for an extremely long time.
For my workout routine, you should look at my recent Tim Ferriss interview!

RCismyladybug8 karma

Hi Will, I'm looking forward to reading your new book and I'm an admirer of the work you're doing. Forgive me if this is answered in your book, but my question is this: in what ways would you like to see the ideas put forth in your new book become manifest in the world? Perhaps put another way: in the ideal scenario, how would you like your readers to practically confront the moral responsibilities (opportunities) discussed in the book?

WilliamMacAskill12 karma

I’d love to see readers take concrete steps toward tackling the most pressing problems for improving the long-term future. That could mean working on pandemic preparedness and biosecurity, or AI safety and governance, or enhancing our ability to forecast the future. It could mean working directly on technical problems, or working on technology policy in government, or launching a new organisation, or providing operational expertise to an existing one.
There are tons of paths to impact. Obviously, that can be difficult to navigate, at least to begin with. If you want to find the best high-impact career for you, 80,000 Hours is a great resource.

Reschiiv7 karma

Hi Will,

If I understand your view correctly, you think we should aim for a "long reflection", which would be some kind of stable world where we reflect on morality. Presumably that would require some central power to somehow regulate/suppress competition. If that's the case, it seems to me a big risk would be that this central power becomes some sort of authoritarian organization that could cause value lock-in. Do you think that's a serious risk? What do you think should be done to reduce that risk (at the margin ofc)?

WilliamMacAskill3 karma

Yes, I'd be very worried about centralisation of power in a one world government, which could turn authoritarian.

But you can have institutional systems that are far from a single authoritarian state, make it hard for an authoritarian state to emerge, preserve moral diversity, and help enable moral progress over time. The US Constitution is one (obviously highly imperfect) example.

On the current margin: there's almost no serious work done on what the design of a world government (or other new global international system) should look like, or what a long reflection could look like or how we could get there. People could start thinking about it, now - I think that would be very worthwhile.

Comment_comet6 karma

Why do philosophers rely so much on intuition when it seems demonstrably true that ethical intuitions can differ dramatically from culture to culture on a vast range of topics?

WilliamMacAskill3 karma

That’s a deep and important question. Philosophers will give different answers. But here’s one basic answer that seems compelling to me. All arguments require premises. And while you can provide arguments for your premises, at some point the arguments will give out - you can’t provide arguments for your most basic premises. At that point, there’s basically no option other than to say “well, this premise just seems plausible to me, or to other people whom I trust.” Basically, the philosophical practice of “relying on intuitions” is just a way to make this explicit. When a philosopher says “my intuition is that x,” what they’re saying is that “x seems plausible to me.”

(You might ask: how do we know our intuitions are reliable, without just relying on our intuitions? How do we know that we’re not comprehensively deluded? This is one of the deepest questions in philosophy, going back to Descartes. No one has a great answer yet. But this sort of worry, about “epistemic circularity,” doesn’t just arise for philosophical intuitions. It arises for all of our basic belief-forming faculties. How do we know that our faculty of sense perception is reliable, except by relying on that very faculty?)

LoremasterCelery6 karma

Is your book going to give me more or less anxiety about superintelligent AI?

WilliamMacAskill11 karma

Um, I’m not sure, I’m afraid. On the one hand, I’m certainly not someone who thinks we’re certainly doomed from advanced AI. On the other hand, I’m worried about what happens even if we do solve the alignment problem. I worry that, if we’re not careful, advanced AI systems could spell disaster by locking in authoritarian power.
At any rate, I think there’s a lot we can do to prevent these worst-case scenarios from happening, and make sure that advanced AI benefits humanity instead - and I think we should focus primarily on the positive difference we can make. I really think we can take action to reduce the risks, and that's anxiety-reducing.

shmameron6 karma

Hi Will,

In your book, do you touch on the long-term potential population/well-being of digital minds? I feel like this is something that most people think is too crazy/weird, yet (to me) it seems like the future we should strive for the most and be the most concerned about. The potential population of biological humans is staggeringly lower by comparison, as I'm sure you're aware.

Looking forward to reading your book!

WilliamMacAskill11 karma

I really wanted to discuss this in the book, as I think it’s a really important topic, but I ended up just not having space. Maybe at some point in the future! Nick Bostrom and Carl Shulman have a paper on the topic here.

curiouskiwicat5 karma

Should more EAs do nude modeling in order to earn money to support impactful causes?

WilliamMacAskill3 karma

Haha, I mean if you're a philosopher then you can get paid while working/thinking!

davidbauer5 karma

Hi Will, thanks for doing this! From a longtermist perspective, what do you consider the most consequential thing to have happened in 2022?

WilliamMacAskill17 karma

There have been a lot of major events this year!
One obvious thought is the Russian invasion of Ukraine. Not only has the invasion inflicted enormous misery on the people of Ukraine, but it’s raised the spectre of a significant military conflict between the US and Russia. Great power conflict is enormously destructive, and enormously consequential for the future of the world. If the US and Russia were to engage in an exchange of nuclear warheads, that would be especially catastrophic. Even just a substantial probability of that scenario is very worrying. These things matter a lot for the future of our world, as well as for the victims of the conflict today.
Another thought, on similar grounds, concerns recent tensions between the US and China over Taiwan.
A final possibility is the US government’s failure to pass adequate pandemic preparedness measures. The Build Back Better Act would have devoted $10 billion dollars to pandemic preparedness, but it didn’t get passed. The Biden Administration has just released its proposed budget for Fiscal Year 2023, which asks for an $88.2 billion investment, over five years, in pandemic preparedness and biodefense. This would be an enormous achievement. But whether it goes anywhere depends a lot on what happens in the midterms (among other things). So it’s quite possible the US government will make little progress on pandemic preparedness in 2023, just as it made little progress in 2022. One day, sooner or later, a plague worse than COVID-19 will hit humanity, and it will cause a lot of death or suffering unless we’ve adequately prepared.

intrepidwebsurfer5 karma

Hiya! I'd be interested to hear where you stand on metaethics. You wrote a paper about nihilism - is this a position you're sympathetic towards?

WilliamMacAskill6 karma

I do worry that nihilism might be true. I’m probably at 50/50 on moral nihilism being true, as opposed to moral realism. But if nihilism is true, nothing matters - there’s no reason to do one thing over another. So in our deliberation, we can act as if realism is true. And if realism is true, some of the things we can do are much, much better than others.

d0rkyd00d5 karma

Well, since you're here.....

Admittedly I'm not very familiar with Effective Altruism and perhaps you've addressed this somewhere and I can be pointed in the right direction.

Can you speak to (or have you previously) the impact an individual's actions have vs. that of large corporations and industries, and the idea that one's time and money would be better spent dismantling these large producers of inequality vs. donating excess income?

Edit: just to expand a bit, the cynic in me immediately wonders what good individuals can do when, collectively, it seems the problems generating these wealth inequalities and terrible living standards in many areas of the world are caused by 1% or less of the population.

Appreciate the time.

WilliamMacAskill3 karma

Yes, a lot of the problems in the world are caused by companies and governments. But I think individuals can have a tremendous impact - such as by *influencing* companies and governments. We've seen this through effective altruism already, and I talk about this in chapter 10 of What We Owe The Future.

AnamorphosisMeta3 karma

  1. What are the most robust arguments regarding AI existential risk, in your view? And what are their greatest weaknesses? Why is this the topic you think you could be the most wrong about, as I think I heard in an interview? Do you have a view regarding the positions that seem to consider the AI apocalypse a near certainty?

WilliamMacAskill2 karma

This is a big question! If you want to know my thoughts, including on human misuse, I’ll just refer you to chapter 4 of What We Owe the Future.
The best presentation of AI takeover risk: this report by Joe Carlsmith is excellent. And the classic presentation of many arguments about AI x-risk is Nick Bostrom’s Superintelligence.
Why we could be very wrong: Maybe alignment is really easy, maybe “fast takeoff” is super unlikely, maybe existing alignment research isn’t helping or is even harmful.
I don’t agree with the idea that AI apocalypse is a near certainty - I think the risk of AI takeover is substantial, but small - more like a few percent this century. And the risk of AI being misused for catastrophic consequences is a couple of times more likely again.

pandaman19993 karma

Hi Will,

My pre-order of WWOTF should be arriving on the day before my birthday, so thank you for the early birthday present!

Anyway, on to the bathos. My question is: why are you not an antinatalist?

It seems like the logical conclusion for anyone who is very concerned about suffering and thinks that avoiding suffering should be weighted more heavily than the creation of pleasure (assuming you do think that).

I'm probably about 50% convinced of this position myself, but if you can reason me out of this conclusion I'd greatly appreciate it!

Thank you :)

WilliamMacAskill2 karma

Happy birthday! I hope you enjoy the present, and the future, too!
On your question: So, I obviously agree that suffering is terrible. I also think that the future could contain a lot of it, and preventing that from happening is really important.
But the future could also be tremendously good - it could be filled with forms of joy, beauty, and meaning that we, today, experience in only the rarest moments.
I think we should try both to reduce the risk of future suffering and to increase the prospects for future joy, beauty, and meaning.
That is, I agree that preventing suffering should have some priority over enabling flourishing, but it shouldn’t be our only priority.
I talk about this more in chapter 9 of WWOTF on the value of the future. I argue that, although we should in general give more weight to the prevention of “bads” compared to the promotion of “goods”, we should expect there to be a lot more good than bad in the future, and overall we should expect the future to be on balance good.

DangerOtter3 karma

Who is doing your marketing/publicity? They are doing an amazing job and I want to know.

WilliamMacAskill2 karma

Haha, thanks! The person who’s leading my media campaign is Abie Rohrig, and he’s working with Basic Books and some other advisors. He’s phenomenal.
Much of the media came from people who’d gotten interested in these ideas, or who I'd gotten to know, over the previous years. That included the TIME piece, the New Yorker, Kurzgesagt, and Ezra Klein.

boomer_black2 karma

Will there ever be a second book?

WilliamMacAskill6 karma

Like a sequel? I hope so! I'd like to write something that's more squarely focused on actions we can take that robustly make the world better, and perhaps stories of people actually doing those things.

CriticalPeach1 karma

Have you read "How the World Really Works" by Vaclav Smil, and if so, what are your thoughts on green energy for a sustainable future that mitigates climate change in a fashion that doesn't drastically impact our current food supply and way of life? What do you think is the best approach to mitigate climate change without upending our current way of life? What do you think about being child-free to mitigate the effects of climate change? Thanks

WilliamMacAskill2 karma

I haven’t read it yet, though I hope to - I’ve read some of Vaclav Smil’s other work, and I’m a big fan.
I think clean technology and green energy are fantastic - they’re among the very most promising responses to climate change, and our society needs to invest more in them. In What We Owe The Future, I suggest that clean tech innovation is a “baseline” longtermist activity, because it’s good from so many perspectives. I describe it as a “win-win-win-win-win”, though since writing the book I realise I should have added one more “win” - it’s a win in six different ways!
I don’t think anyone who wants to have kids should refrain from doing so in order to mitigate climate change. On balance, if you’re in a position where you’re able to bring them up well, I think that having kids is a good thing. It’s not just immensely personally rewarding, for many people, but it helps society, e.g. through extra taxes and through technological innovation. It’s even a good thing from the perspective of threats like climate change - we’re going to need more people to invent and develop promising new technologies to address these threats! Finally, you can more than offset the carbon impact of having kids. Suppose, if you have a child, you donate £1000 per year to the most effective climate mitigation non-profits. That would increase the cost of raising a child by about 10%, but would offset their carbon emissions 100 times over.

DaveLanglinais1 karma

Weren't you on a podcast episode of Metaphysical Milkshake recently?

WilliamMacAskill2 karma

No, sadly not!