Effective Altruism Forum Podcast • By Garrett Baker • Nov 22, 2022
Hosting: "EA Forum Weekly" Episode 9 (Nov. 14 - 27, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell
The resources for members of the EA community in light of the FTX situation (Minute 34:24) are the following:
Open Phil is seeking applications from grantees impacted by recent events by Bastian_Stern
Announcing Nonlinear Emergency Funding by Kat Woods, Emerson Spartz, Drew Spartz
AI Safety Microgrant Round by Chris Leong, Damola Morenikeji, David_Kristoffersson (up to $2K USD grants, with total available funding of $6K)
Effective Peer Support Network in FTX crisis (Update) by Emily, Inga (includes a table of supporters you can contact for free, as well as a peer support network Slack)
Thoughts on legal concerns surrounding the FTX situation by Molly
Thank you all for listening. You can follow Coleman on Twitter at @SnellColeman for updates on the EA Forum Weekly and on his interview-based podcast, 21st Talks.
December 01, 2022
Hosting: "EA Forum Weekly" Episode 8 (Nov. 6 - 13, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell
This week's episode is a little different from our normal programming due to front-ending news and discussion about the FTX situation. If you need someone to talk to, even just to vent, CEA’s community health team is available, or you can access peer support set up by Rethink Wellbeing and the Mental Health Navigator here. Limited emergency funding is available via Nonlinear or the AI Safety Microgrants round.
November 22, 2022
Hosting: "EA Forum Weekly" Episode 7 (Oct. 31 - Nov. 6, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell
Link to the 2022 EA Survey! EA & LW Forums Weekly Summary (Oct. 31st - Nov. 6th '22). The Effective Altruism Forum Weekly is hosted by Coleman Snell, and summaries are written by Zoe Williams of Rethink Priorities. Each week, we summarize the top-voted posts and articles on the EA Forum, conveniently separated into different categories for easier listening.
November 10, 2022
Hosting: "EA Forum Weekly" Episode 6 (Oct. 24 - Oct. 30, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell
Link to the 2022 EA Survey! EA & LW Forums Weekly Summary (24 - 30 Oct '22). The Effective Altruism Forum Weekly is hosted by Coleman Snell, and summaries are written by Zoe Williams of Rethink Priorities. Each week, we summarize the top-voted posts and articles on the EA Forum, conveniently separated into different categories for easier listening.
November 02, 2022
Hosting: "EA Forum Summaries Weekly" Episode 5 (Oct. 17 - Oct. 23, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell
The Effective Altruism Forum Weekly is a podcast that summarizes the top posts on the EA forum from the previous week, written by Zoe Williams and supported by Rethink Priorities. A written version of the summaries can be found here.
October 28, 2022
Quick update on this podcast ... where it's come from and where it's going etc. (David Reinstein off-the-cuff)
Quick update on this podcast ... where it's come from and where it's going, etc. (David Reinstein, off-the-cuff)
EA Forum Weekly summaries will keep happening.
Watch this space; we may return to full readings of EA Forum posts (and maybe comments), with even better production value than a year ago.
Maybe other content too (authors reading their own posts, interviews/discussions with authors, etc.). Contact daaronr AT gmail.com if you are interested in contributing content.
October 24, 2022
Hosting: "EA Forum Summaries Weekly"; Episode 4 (10-16 October, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell.
EA & LW Forums Weekly Summary (10 - 16 Oct '22). Hosting: "EA Forum Summaries Weekly"; Episode 4 (10-16 October, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell. Starting with "We can do better than Argmax".
October 20, 2022
Hosting: "EA Forum Summaries Weekly" Episode 3 (Sept 26 - Oct 9, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell
Hosting: "EA Forum Summaries Weekly" Episode 3 (Sept 26 - Oct 9, 2022), summaries by Zoe Williams, supported by Rethink Priorities. Synthesized, narrated and produced by Coleman Snell. Link: EA & LW Forums Weekly Summary (26 Sep - 9 Oct '22)
October 20, 2022
Hosting: "EA Forum Summaries Weekly" episode 2, summaries by Zoe Williams, synthesized/narrated and produced by Coleman Snell with consultation from David Reinstein
Read/adapted from EA & LW Forums Weekly Summary (19 - 25 Sep '22) by Zoe Williams. (I believe only the EA Forum part is narrated.) Synthesized/narrated and produced by Coleman Snell with consultation from David Reinstein.
September 30, 2022
Hosting/introducing: "EA Forum Summaries Weekly", narrated and produced by Coleman Snell.
EA Forum Podcast hosts... Note from Coleman Snell: Thanks for listening to the very first episode of EA Forum Summaries Weekly! Narrated and produced by Coleman Snell. Written summaries by Rethink Priorities Executive Research Assistant Zoe Williams. You can check out all the articles in the links below:
An experiment eliciting relative estimates for Open Philanthropy’s 2018 AI safety grants
Could it be a (bad) lock-in to replace factory farming with alternative protein?
Differential technology development: preprint on the concept
Announcing the Space Futures Initiative
Requesting feedback: proposal for a new nonprofit with potential to become highly effective
What could an AI-caused existential catastrophe actually look like?
Improving "Improving Institutional Decision-Making": A brief history of IIDM
Roodman's Thoughts on Biological Anchors
Apply now - EAGxVirtual (21-23 Oct)
Announcing an Empirical AI Safety Program
My emotional reaction to the current funding situation
My closing talk at EAGxSingapore
Bring legal cases to me, I will donate 100% of my cut of the fee to a charity of your choice
"Agency" needs nuance
CEA Ops is now EV Ops
EA architect: Building an EA hub in the Great Lakes region, Ohio
It’s not effective to call everything effective and how (not) to name a new organisation
Please note that this podcast will only contain summaries of EA Forum posts, and not LessWrong posts. This is to keep the episodes short & sweet for a weekly series. Other options would include raising the karma threshold on both.
September 25, 2022
"Do good better" Creative Writing Contest Fiction joint 2nd prize winner, by Andrew Kao, read by David Reinstein
"Do good better" Creative Writing Contest Fiction joint 2nd prize winner, by Andrew Kao, read by David Reinstein Link: https://forum.effectivealtruism.org/posts/SQZKZe3MLjiAuyxGJ/creative-writing-contest-fiction-do-good-better 3 Jan 2022 -- improved sound quality (less mic distortion, I hope)
January 06, 2022
"The value of money going to different groups" by Toby Ord (without commentary)
Reading https://forum.effectivealtruism.org/posts/T4F6ZW5Hzv4Tz9Mrm/the-value-of-money-going-to-different-groups
This is an edited repost of a previous reading (reader: David Reinstein); I removed my commentary so you can listen to the full essay uninterrupted. I used an audio tool for this; let me know what you think. I may do this for other prior episodes (remove the comments) if there is interest. For future readings, I'm thinking of saving my comments for the end, if at all (maybe still including a few explainers if they don't distract from the flow).
January 01, 2022
"The value of money going to different groups" by Toby Ord; EA Forum post, linked essay, many comments by David Reinstein (FITS crosspost)
Cross-post of: https://bit.ly/fitstoby I'm about to post a version with (almost?) NO COMMENTS ...
January 01, 2022
Seeing the effects of your donation and making incremental choices
Cross-post from Found in the Struce. Reading "Seeing the effects of your donation and making incremental choices" from the EA Forum, by David Reinstein (read by the author). Part of the 'giving and gifts podcast trilogy', along with Profusion: The Big, Christmassy Data Special and the FITS "The Economics of the Gift" reading.
January 01, 2022
"The Unweaving of a Beautiful Thing" (1st prize 2021 creative writing contest), by atb, read by David Reinstein (Volume boosted, removed silence)
"The Unweaving of a Beautiful Thing" (1st prize 2021 creative writing contest), by atb, read by David Reinstein. Sorry for the half-assed accent work. Original post: https://forum.effectivealtruism.org/posts/mCtZF5tbCYW2pRjhi/the-unweaving-of-a-beautiful-thing
January 01, 2022
The Intensity of Valenced Experience, part 1
Original post
Written by Jason Schukraft
January 01, 2022
Prioritization when size matters
Original post by Jonathan Harris
Graph mentioned in episode
December 27, 2021
The Intensity of Valenced Experience, part 2
Original post
Written by Jason Schukraft
December 09, 2021
Research Summary: The Intensity of Valenced Experience across Species
Read content and comments from EA Forum post "Research Summary: The Intensity of Valenced Experience across Species" by Jason Schukraft. Read by David Reinstein, who interjects with some small discussion points and thoughts.
October 21, 2021
[Cross-hosted from FITS] "Cultured meat: A comparison of techno-economic analyses" (by Linch and Neil Dullaghan, Rethink Priorities); read and commented on by David Reinstein
Cross-hosted from "Found in the Struce". David Reinstein: I read "Cultured meat: A comparison of techno-economic analyses" and added some comments, including a few 'what does this mean?' cries for help. Note: I mentioned stray content in one place when it was actually a link to a Google doc; I was reading this on a black-and-white e-reader that didn't show this as a link. Some highlights: laugh at my mispronunciation of biological terms, and my mixing up of the Greek letters upsilon and mu.
October 17, 2021
500 Million, But Not A Single One More
Note: Despite what is said in this episode, today is not Smallpox Eradication Day. Smallpox Eradication Day is on December 9th.
Original post by Jai
Read & edited by Garrett Baker
This post is a primarily motivational work on the horrible effects & eradication of smallpox.
August 04, 2021
Is effective altruism growing? An update on the stock of funding vs. people
Original post by Benjamin Todd
Read & edited by David Reinstein
Here’s a summary of what’s coming up:
How much funding is committed to effective altruism (going forward)? Around $46 billion.
How quickly have these funds grown? About 37% per year since 2015, with much of the growth concentrated in 2020–2021.
How much is being donated each year? Around $420 million, which is just over 1% of committed capital, and has grown maybe about 21% per year since 2015.
How many committed community members are there? About 7,400, growing 10–20% per year 2018–2020, and growing faster than that 2015–2017.
Has the funding overhang grown or shrunk? Funding seems to have grown faster than the number of people, so the overhang has grown in both proportional and absolute terms.
What might be the implications for career choice? Skill bottlenecks have probably increased for people able to think of ways to spend lots of funding effectively, run big projects, and evaluate grants.
July 31, 2021
Found in the Struce: “Why scientific research is less effective in producing value than it could be: a mapping”
Podcast originally posted here. Original Effective Altruism Forum post here.
Read & edited by David Reinstein
Reading the "Why scientific research is less effective in producing value than it could be: a mapping" post on the EA Forum (and peering at some links), with some comments and thoughts from my own experience. Reading the comments section, especially comments from Linch Zhang, IanDavidMoss, and AllAmericanBreakfast.
Outro snip: "Magic Terrapin" by Alfie Pugh, performed by the Locked Horns
I hope to follow up on this with a reading/discussion of bit.ly/unjournal (or the EA Forum post version of this, if and when I make it).
July 29, 2021
What's wrong with the EA-aligned research pipeline?
Read and edited by Sam Nolan; written by MichaelA
This is the second post in the series Improving the EA-Aligned Research Pipeline: What's wrong with the EA-Aligned Research Pipeline? This reading discusses 10 potential issues with the pipeline as it currently stands. Most notably, it finds that although there are a large number of people, research questions, and funders, there is a bottleneck in vetting potential EA research candidates. The first post in this series was: Improving the EA-Aligned Research Pipeline. Further posts in this series will be released soon.
Links:
Original post
A central directory for open research questions
EA is vetting-constrained
Ben Todd discussing organizational capacity, infrastructure, and management bottlenecks
Call to action anonymous form
July 28, 2021
Improving the EA-aligned research pipeline: Sequence introduction
Original post: https://forum.effectivealtruism.org/s/fSmBYeTyisyM9fkmD/p/ysHq3K2asNwsCRWeG
Read & edited by Sam Nolan
This post is the introduction to the sequence Improving the EA-Aligned Research Pipeline by MichaelA. The sequence discusses the problems with the current system for EA-aligned research, and issues with allocating resources (particularly human resources) effectively. It also discusses possible intervention options to solve these bottlenecks in the research pipeline. The next post in the sequence is: What's wrong with the EA Research Pipeline? The rest of the sequence is, as of this writing, yet to be published, but hopefully will be soon! The links in this post can be found by browsing the original post above.
July 27, 2021
The case against “EA cause areas”
Original post: https://forum.effectivealtruism.org/posts/3voXaqvPutSrHvuCT/the-case-against-ea-cause-areas Read & edited by Garrett Baker Everyone reasonably familiar with EA knows that AI safety, pandemic preparedness, animal welfare and global poverty are considered EA cause areas, whereas feminism, LGBT rights, wildlife conservation and dental hygiene aren't. The state of some very specific cause areas being held in high regard by the EA community is the result of long deliberations by many thoughtful people who have reasoned that work in these areas could be highly effective. This collective cause prioritization is often the outcome of weighing and comparing the scale, tractability and neglectedness of different causes. Neglectedness in particular seems to play a crucial role in swaying the attention of many EAs and concluding, for example, that working on pandemic preparedness is likely more promising than working on climate change, due to the different levels of attention that these causes currently receive worldwide. Some cause areas, such as AI safety and global poverty, have gained so much attention within EA (both in absolute terms and, more so, compared to the general public) that the EA movement has become somewhat identified with them. Prioritizing and comparing cause areas is at the very core of EA. Nevertheless, I would like to argue that while cause prioritization is extremely important and should continue, having the EA movement identified with specific cause areas has negative consequences. I would like to highlight the negative aspects of having such a large fraction of the attention and resources of EA going into such a small number of causes, and present the case for more cause diversification and pluralism within EA.
July 24, 2021
We are in triage every second of every day
Original post: https://forum.effectivealtruism.org/s/YCa8BRQoxKbmf5CJb/p/vQpk3cxdAe5RX9xzo
Read & Edited by Garrett Baker
Spoilers ahead: listen to the episode beforehand if you don’t want to hear a rough summary first.
I quite liked the "Playing God" episode of RadioLab. The topic is triage, the practice of assigning priority to different patients in emergency medicine. By extension, to triage means to ration scarce resources. The episode treats triage as a rare phenomenon (in fact, it suggests that medical triage protocols were not taken very seriously in the US until after Hurricane Katrina), but triage is not a rare phenomenon at all. We are engaging in triage with every decision we make. The stories in “Playing God” are gripping, particularly the story of a New Orleans hospital thrown into hell in a matter of days after losing power during Hurricane Katrina. Sheri Fink from the New York Times discusses the events she reported in her book, Five Days at Memorial. The close-up details are difficult to stomach. After evacuating the intensive care unit, the hospital staff are forced to rank the remaining patients for evacuation; moving the patients is backbreaking labor without the elevators, and helicopters and boats are only coming sporadically to take them away. Sewage is backing up into the hospital and the extreme heat is causing some patients and pets to have seizures.
July 23, 2021
Introduction to Effective Altruism (Ajeya Cotra)
Original post: https://forum.effectivealtruism.org/s/YCa8BRQoxKbmf5CJb/p/5EqJozsDdHcF7dpPL In this talk from EAGxBerkeley 2016, Ajeya Cotra shares what led her to join the effective altruism movement, then provides an overview of its basic principles.
July 23, 2021
Reducing long-term risks from malevolent actors
Original post: https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors
Read by: Wuschel
Edited by: Garrett Baker
Summary:
Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. (More)
Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. (More)
Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks. (More)
We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future. The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. (More)
We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits. (More)
Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution. (More)
We argue that further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs. (More)
July 15, 2021
Report on Running a Forecasting Tournament at an EA Retreat, part 2
Original post: https://forum.effectivealtruism.org/posts/347gJg88ZxxE7Jhch/report-on-running-a-forecasting-tournament-at-an-ea-retreat#Tournament_Outcomes
Read & edited by: Garrett Baker
Tournament Outcomes
In this section I’ll go over the strategies used by players, final scores, and the winner of the tournament. For the actual outcomes of the predictions, see the appendix.
Strategies Used by Players
The most common strategy I observed was that when a question’s outcome could be influenced by player behaviour, players would predict an outcome with 99% confidence and then devise a plan to make their predicted outcome happen. One player even made a to-do list:
July 14, 2021
Report on Running a Forecasting Tournament at an EA Retreat, part 1
Original post: https://forum.effectivealtruism.org/posts/347gJg88ZxxE7Jhch/i-ran-a-forecasting-tournament-at-an-ea-retreat-here-s-what
Read & edited by Garrett Baker
This post describes a simple forecasting tournament I ran during the 2021 Effective Altruism New Zealand retreat as an experimental exercise in improving judgement and decision making. I asked players to predict whether certain events would occur within the span of the retreat, like whether Peter Singer would post to Instagram, or whether anyone would lose their hat. The tournament was surprisingly fun, and went in many unexpected directions. I would strongly encourage other EA retreats and similar gatherings to run their own prediction tournaments, and I provide some resources to get started.
A summary of this post:
Motivation for the tournament.
Summary of the rules, questions, and scoring system I used.
Outcomes of the tournament: strategies used by players and the scores obtained.
Discussion of stuff I found interesting or surprising, and changes I would make to future iterations of the tournament.
Conclusion and promises for follow-up posts.
Appendix: the outcomes of the prediction questions.
Motivation
The art of forecasting is an important part of the Effective Altruism extended universe. There are at least three reasons for this. First, if you can predict which problems are going to be important in the future, then you can start preparing solutions now (unaligned AI, runaway nanotech, engineered pandemics...). Secondly, if you can predict the future conditional on your actions, then you can choose interventions which do more good for longer (if we transfer cash to poor countries, will this help them build a self-sustaining economy, or will it foster dependence and fragility?). Thirdly, in a broad sense, good forecasting requires clear thinking and improves decision-making, which are virtues we want to cultivate in building a wiser society.
In other words, cultivating forecasting skills is an act of everyday longtermism.
July 13, 2021
Against neutrality about creating happy lives
Original post: https://forum.effectivealtruism.org/posts/HGLK3igGprWQPHfAp/against-neutrality-about-creating-happy-lives
Narration by: The Girl Reading This
Editing by: Garrett Baker
(Warning: spoilers for the movie American Beauty.) “Once for each, just once. Once and no more. And for us too, once. Never again. And yet it seems that this—to have once existed, even if only once, to have been a part of this earth—can never be taken back. And so we keep going, trying to achieve it, trying to hold it in our simple hands, our already crowded eyes, our dumbfounded hearts.” – Rilke, Ninth Elegy
Various philosophers have tried hard to validate the so-called “intuition of neutrality,” according to which the fact that someone would live a wonderful life, if created, is not itself reason to create them (see e.g. Frick (2014) for efforts in this vicinity). The oft-quoted slogan from Jan Narveson is: “We are in favor of making people happy, but neutral about making happy people” (p. 80). I don’t have the neutrality intuition. To the contrary, I think that creating someone who will live a wonderful life is to do, for them, something incredibly significant and worthwhile. Exactly how to weigh this against other considerations in different contexts is an additional and substantially more complex question. But I feel very far from neutral about it, and I’d hope that others, in considering whether to create me, wouldn’t feel neutral, either. This post tries to point at why.
July 10, 2021
[New org] Canning What We Give
Original post: https://forum.effectivealtruism.org/posts/ozyQ7PwhRgP7D9b2w/new-org-canning-what-we-give
Narration by: Pixel Brownie Software (Prownie)
Editing by: Garrett Baker
Epistemic status: 30% (plus or minus 50%). Further details at the bottom. In the 2019 EA Cause Prioritisation survey, Global Poverty remains the most popular single cause across the sample as a whole. But after more engagement with EA, around 42% of people change their cause area, and of that group a majority (54%) moved towards the Long Term Future/Catastrophic and Existential Risk Reduction. While many people find that donations help them stay engaged (and continue to be a great thing to do), there has been much discussion of other ways people can contribute positively. In thinking about the long-run future, one area of research has been improving humanity's resilience to disasters. A 2014 paper looked at global refuges, and more recently ALLFED, among others, have studied ways to feed humanity in disaster scenarios. Much work has been done, and much more is needed, to directly reduce risks, such as through pandemic preparedness, improving nuclear treaties, and improving the functioning of international institutions. But we believe that there are still opportunities to increase resilience in disaster scenarios. Wouldn't it be great if there was a way to directly link the simplicity of donations with effective methods for the recovery of civilisation?
July 09, 2021
Why EA groups should not use “Effective Altruism” in their name.
Original post: https://forum.effectivealtruism.org/posts/i24sXY8v6FXLyJKXF/why-ea-groups-should-not-use-effective-altruism-in-their
Narration by Liam Nolan
Editing by Garrett Baker
Starting a conversation about the name “Effective Altruism” for local and university groups. Abstract: most EA groups’ names follow the recipe “Effective Altruism + [location/university]”. In 2020 we founded a university EA group whose name does not include the words “Effective Altruism”. We have grown rapidly, and it now seems more and more likely that our organization will stick around in years to come. We think our name played a non-negligible part in that. In fact, we believe that choosing an alternative name is one of the most cost-effective things you can do to make your group grow. In this article we argue that more (potential) groups should consider an alternative name. We propose a method for coming up with that name. Lastly, we propose that “part of the EA network” could serve as a common subtitle to unite all EA groups despite their various names. Scroll down to ‘summary’ for a quick overview of our arguments. One of my teachers, a social entrepreneur, once told me: “when you are doing any kind of project, first make sure to give it a good name.” These words ran through my mind when I, together with five others, started a new EA student association at Erasmus University Rotterdam in the Netherlands. At our second collective meeting we decided against the name “Effective Altruism Erasmus” and opted for “Positive Impact Society Erasmus” (PISE) instead. Now, 6 months in, we still believe this was a great decision. Our association is doing well, and we believe that our name has had some part in that. As we speak, more Dutch EA groups are considering changing their name. Maastricht University’s chapter is already called “LEAP” (Local Effective Altruism Project) and the group at Wageningen University is also considering a name change.
We think we should have a movement wide conversation about “Effective Altruism” as the name for local and university groups. Below we have written down our thoughts on two questions: firstly, should local and university groups have a name other than “Effective Altruism X”? Secondly, if so, what should that name be? Lastly, we propose a common subtitle for all EA groups with an alternative name. Our thoughts are far from complete and we are uncertain on many accounts. We invite anyone to add to the discussion!
July 08, 2021
How to run a high-energy reading group
Original post: https://forum.effectivealtruism.org/posts/RZ4cWxEkTsCqGgfut/how-to-run-a-high-energy-reading-group
Narration by Garrett Baker
Editing by Athenae Galea
Why are reading groups and journal clubs so often bad? I think there are two reasons: boring readings and low-energy discussions. This post is about how to avoid those pitfalls.
The problem
I have participated in (and organized) some really bad reading groups. This is a shame, because I love a good reading group. They cause me to read more things and read them more carefully. A great group discussion will give me way more than I’d get just by taking notes on a reading. This is what a bad reading group looks like: six people gather around a table. Two kind of skimmed the reading, and two didn’t read it at all. No one knows quite what to talk about. Someone ventures a, “so, what surprised you about the paper?” Another person flips through their notes, scanning for a possible answer. Most people stay quiet. No one leaves the table feeling excited about the reading or about being a part of the group. This is avoidable, but you need to find interesting and valuable readings and you need to structure your group to encourage high-energy discussions.
July 07, 2021
How much does performance differ between people?
Original post: https://forum.effectivealtruism.org/posts/ntLmCbHE2XKhfbzaX/how-much-does-performance-differ-between-people
Narration by Liam Nolan
Audio editing by Garrett Baker
Some people seem to achieve orders of magnitude more than others in the same job. For instance, among companies funded by Y Combinator the top 0.5% account for more than ⅔ of the total market value; and among successful bestseller authors, the top 1% stay on the New York Times bestseller list more than 25 times longer than the median author in that group. This is a striking and often unappreciated fact, but it raises many questions. How many jobs have these huge differences in achievements? More importantly, why can achievements differ so much, and can we identify future top performers in advance? Are some people much more talented? Have they spent more time practicing key skills? Did they have more supportive environments, or start with more resources? Or did the top performers just get lucky? More precisely, when recruiting, for instance, we’d want to know the following: when predicting the future performance of different people in a given job, what does the distribution of predicted (‘ex-ante’) performance look like?
July 06, 2021
Don't Be Bycatch
Original post: https://forum.effectivealtruism.org/posts/2BEecjksNZNHQmdyM/don-t-be-bycatch
It's a common story. Someone who's passionate about EA principles, but has little in the way of resources, tries and fails to do EA things. They write blog posts, and nothing happens. They apply to jobs, and nothing happens. They do research, and don't get that grant. Reading articles no longer feels exciting, but like a chore, or worse: a reminder of their own inadequacy. With anybody who comes to this place, I heartily sympathize, and I encourage them to disentangle themselves from this painful situation any way they can. Why does this happen? Well, EA has two targets: (1) subscribers to EA principles who the movement wants to become big donors or effective workers, and (2) big donors and effective workers who the movement wants to subscribe to EA principles. I won't claim what weight this community and its institutions give to (1) vs. (2). But when we set out to catch big fish, we risk turning the little fish into bycatch. The technical term for this is churn. Part of the issue is the planner's fallacy. When we're setting out, we underestimate how long and costly it will be to achieve an impact, and overestimate what we'll accomplish. The higher above average you aim for, the more likely you are to fall short. Another part is expectation-setting. If the expectation right from the get-go is that EA is about quickly achieving big impact, almost everyone will fail, and think they're just not cut out for it. I wish we had a holiday that was the opposite of Petrov Day, where we honored somebody who went a little bit out of their comfort zone to try and be helpful in a small and simple way. Or whose altruistic endeavor was passionate, costly, yet ineffective, and who tried it anyway, changed their mind, and valued it as a learning experience.
July 05, 2021
Is Democracy a Fad?
Original post: https://forum.effectivealtruism.org/posts/TMCWXTayji7gvRK9p/is-democracy-a-fad This cross-post from my personal blog explains why I think democracy will probably recede within the next several centuries, supposing people are still around. The key points are that: (1) Up until the past couple centuries, nearly all states have been dictatorships. (2) There are many examples of system-wide social trends, including the rise of democracy in Ancient Greece, that have lasted for a couple centuries and then been reversed. (3) If certain popular theories about democratization are right, then widespread automation would negate recent economic changes that have allowed democracy to flourish. This prediction might have some implications for what people who are trying to improve the future should do today (although I'm not sure what these implications are). It might also have some implications for how we should imagine the future more broadly. For example, it might give us stronger reasons to doubt that future generations will take inclusive approaches to any consequential decisions they face.
July 05, 2021