February 9, 2018
On Tuesday 23 January 2018, the EA Rotterdam group held its second reading & discussion group, a deeper dive into some of the topics in effective altruism.
The topic for this event was ‘Extinction Risks’ from the 80,000 Hours website.
The evening unfolded into a lively discussion full of great questions.
We (the organisers of EA Rotterdam) thank Alex from V2 (our venue for the night) for hosting us.
If you want to visit an EA Rotterdam event, visit our Meetup page.
Humanity Is Facing Its Most Dangerous Time Ever
Wait, what? How can this be? Isn't it the most peaceful time ever? (discussion here) There is no world war, no black plague, no biblical tidal wave. Yet, we live in a more dangerous time than ever before. We have harnessed the power of the atom (read: made a ton of nuclear weapons). We are cracking the genetic code (read: bioterror from a basement). We are changing the climate without regard for what will happen. And we are developing an intelligence that will far surpass us (AI).
We are living in dangerous times. Experts estimate our extinction risk at between 1 and 20% in the next century. That is orders of magnitude higher than the average person would ever guess. But we are also living in a time where our resources can be used for good, a time where we can gather those resources to prevent (some of) the bad outcomes. Extinction risk is a neglected cause, and an optimist would see a great opportunity to do good here.
Want to take action? Start here, in the 80,000 Hours article.
Nuclear War
We discussed how nuclear war could wreak havoc on the world. The combination with ideology (and patriotism/tribalism) is what makes this such a pressing problem. Where in the Cold War two nations kept each other in check with mutually assured destruction (MAD), today more and more actors (read: countries/groups) have gotten their hands on nuclear weapons. And although the Cold War has come to an end, there is still tension between Russia and America (a lot of it).
There are fewer foot soldiers around the world, but cyber attacks and the like have taken their place. Conflicts between countries are being fought in different arenas, both digitally and physically (think Ukraine). But nuclear war is not out of the question: North Korea could do untold damage to South Korea, Japan, and America. And that doesn't even factor in the risk of AI in combination with nuclear weapons.
New technology always finds a way to spread itself, and we can decide to do good or bad with it. We can even have good intentions (e.g. energy) and get bad outcomes (e.g. climate change). The proliferation of information and technology is virtually unstoppable, so we must recognise that we can't control the tech.
Uncontrollable Tech
What if I told you that I could 3D print a gun? Disturbing, right? I could make a gun without a registration number. So, what if I told you that anyone with an internet connection and access to a 3D printer could do this? That is the reality we live in today. More on this in this excellent Planet Money episode. And the person who made the blueprints is now selling a mill that can machine an aluminium frame for an AR-15.
This is a prime example of the bad consequences of technology that was made to do good (e.g. 3D-printed heart valves). We asked ourselves: where are these guns going? Is it just a group of anarchists stockpiling them at home? Will these guns be the next ones used in a mass shooting? Or are people good at heart? And are the people who commit murders even the ones who care about privacy and whether their guns have a serial number?
What became clear is that (new) technology increases our power, our power to do both good and bad. And tech has unforeseen and unforeseeable consequences. We can't do anything about the latter, but we can get better at spotting the former. The Future of Humanity Institute is investigating ways to do this.
Divided Together
One other factor in extinction risk is us, our divided world. Because of algorithms, we live in our own filter bubbles. We are both smart and stupid at the same time: we can learn as much as we want, but we are unlikely to hear an opinion that doesn't align with what we already think. And yes, we lived in our own bubbles before, but technology has made it worse.
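To make the mechanism concrete, here is a toy sketch of an engagement-driven feed. It is purely illustrative (our own construction, not any real platform's algorithm): the feed mostly shows whatever you have clicked on before, so an early lead snowballs into a bubble.

```python
import random

# Toy filter bubble: a feed that mostly recommends the topic you have
# engaged with most, so early clicks snowball into a one-topic diet.
# (Illustrative only; no real platform works exactly like this.)
topics = ["left politics", "right politics", "science", "sports"]
clicks = {topic: 1 for topic in topics}
clicks[random.choice(topics)] += 1  # one early click seeds the bubble

for _ in range(50):
    if random.random() < 0.9:
        shown = max(clicks, key=clicks.get)  # exploit: show the known favourite
    else:
        shown = random.choice(topics)        # rare exposure to something else
    clicks[shown] += 1                       # engagement reinforces the choice

print(clicks)  # one topic dominates; dissenting views almost never surface
```

The 90/10 split is an arbitrary stand-in for "optimise for engagement"; the point is the feedback loop, not the numbers.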
And when we code machines to emulate us, they take on our biases. An experiment with a Twitter bot ended in racism within 24 hours (a sketch of this failure mode follows below). If there is a faulty premise or faulty logic behind a program, it may behave in ways we never intended. The faults can be invisible (like filter bubbles, which only entered the public conversation last year), and we can become dependent on them. And are the bubbles even bad? Don't they make us feel comfortable? To that I would say: easy choices, hard life; hard choices, easy life.
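The Twitter-bot story (Microsoft's Tay is the famous example) comes down to a simple faulty premise: "everything users say is good training data". A minimal, hypothetical sketch of that failure mode:

```python
from collections import Counter

# Minimal sketch of a Tay-style failure: a bot that "learns" by parroting
# the phrases it hears most often. (Hypothetical toy; the real bot used far
# more sophisticated models, but the faulty premise is the same: every
# input counts as good training data.)
heard = Counter()

def listen(message: str) -> None:
    heard[message] += 1  # no filtering whatsoever

def speak() -> str:
    return heard.most_common(1)[0][0]  # repeat the most frequent input

for msg in ["hello friend", "hello friend", "nice weather today"]:
    listen(msg)
for _ in range(10):  # a small, coordinated mob outweighs the normal users
    listen("<some toxic slogan>")

print(speak())  # -> "<some toxic slogan>": output nobody intended
```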
Why Care?
Is there any reason we should even care that we're divided and risk extinction? Carl Sagan says yes, we should:
“If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born…. (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill “only” hundreds of millions of people. There are many other possible measures of the potential loss—including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise.” (source)
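That "(By one calculation)" can be reconstructed with back-of-the-envelope arithmetic. The assumptions below (a static population of about 5 billion, an average lifetime of about 100 years, and roughly 10 million more years of human existence, a typical lifespan for a successful species) are our paraphrase of Sagan's reasoning, not exact figures:

```python
# Rough reconstruction of the "one million times" calculation.
population = 5_000_000_000        # static world population (1980s level)
lifetime_years = 100              # generous average human lifetime
species_years_left = 10_000_000   # typical lifespan of a successful species

births_per_year = population / lifetime_years          # ~50 million per year
future_people = births_per_year * species_years_left   # ~500 trillion

nuclear_war_deaths = 500_000_000  # "only" hundreds of millions
print(f"Future people foreclosed: {future_people:.1e}")             # ~5.0e+14
print(f"Stakes ratio: {future_people / nuclear_war_deaths:,.0f}x")  # 1,000,000x
```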
We agreed that a person not being born does not equate to a person being killed. But we also talked about the joy that this person could never experience (because of not being). This can be called the unfulfilled potential: the potential for happiness, technology, society, artistic expression, and more.
More on this in a great interview Sam Harris did with David Benatar.
Prepare Yourself
What if we stopped looking for answers and just tried to live out the extinction events? That is what preppers are preparing for. Some very rich technologists are buying land in New Zealand (read more), whilst others plan to freeze their bodies until technology has progressed enough to revive and heal them (more at Wait But Why). We ended up concluding that time might be better spent solving than preparing.
Why Neglected?
Extinction risks feel far away. We don't experience climate (change); we experience weather. So we have to appeal to rationality (logos) and not just emotion (pathos), or at least try to use more rationality, because sometimes our emotions work against us.
Climate change and conflict lead to migration, and when nationalism is encouraged, people from one country are not likely to help people from another. They ask themselves, 'Why help these other people when we have our own struggles?'
Steven Pinker is positive about our ability to change. In his book The Better Angels of Our Nature (buy it here), he argues that we're becoming more compassionate, expanding our circle of empathy (or compassion).
William MacAskill (80k podcast link) argues the same. He states that our morals are improving and that those of future generations will likely be even better. He argues that people with 'bad' ideas aren't stupid, they are just uninformed: you only need one wrong belief (alongside many right ones) to go down a wrong path. So if we improve our logical thinking, we might end up somewhere more positive.
Speaking for the Future
The Dutch green party (GroenLinks) proposed an ombudsman for the future: a person who would represent future generations. The actions we take now will shape their lives, yet they don't get a vote.
How can we become more future-oriented? Can we improve our voting systems? We had some ideas, and there is more in Buying Time (buy it here). Also watch this video of David Letterman and Barack Obama discussing why people don't vote.
Optimist vs Cynic
You have to believe an optimistic world is possible (at least, so I think). But we've become more cynical over the last decades. Why? We've lost our belief in social progress. After the Second World War you could move upward; now we don't see those possibilities anymore.
In the Enlightenment, there was a march of reason. The 19th century brought us Romanticism. And in the 20th century, we saw how reason could be used for nefarious purposes, how reason, capitalism, and efficiency can be turned to bad ends.
The world has become too complicated. Wages have stagnated, the cost of living is going up, and people feel they aren't benefiting from the technological progress being made. And people can see how others around them are thriving (thanks, Instagram and Vogue) while they are not.
Yet we live in a world with more access to healthcare than ever before. Meeting our basic needs is becoming cheaper. We have a supercomputer in our pocket and the world's knowledge at our fingertips. Through a different lens, the world looks much better.
Conclusion
We had a great evening with an energising discussion about extinction risks. In the end, we took a closer look at our own psychology and how we view the world. Everyone took something home, and discussing the topic made things clearer.
Want to join us for another evening? Feel free to come over and bring a friend! Please check out our Meetup Page.
Questions from me:
- How do you feel about the future? Scared straight? Optimistic? Realistic?
- And how are you preparing or preventing?