The Blank Slate by Steven Pinker takes a critical look at the human brain and argues that it is NOT a blank slate. Pinker (also known from The Better Angels of Our Nature and Enlightenment Now) combines his skills of storytelling and deep (and wide) knowledge to put forward a convincing argument for how the brain/mind interacts with our environment.
Here is a short summary from the book:
One of the world’s leading experts on language and the mind explores the idea of human nature and its moral, emotional, and political colourings. With characteristic wit, lucidity, and insight, Pinker argues that the dogma that the mind has no innate traits (a doctrine held by many intellectuals during the past century) denies our common humanity and our individual preferences, replaces objective analyses of social problems with feel-good slogans, and distorts our understanding of politics, violence, parenting, and the arts. Injecting calm and rationality into debates that are notorious for axe-grinding and mud-slinging, Pinker shows the importance of an honest acknowledgement of human nature based on science and common sense.
Here is the table of contents:
The chapter I want to highlight and make some observations about is the one on children. It is the chapter where I was most surprised by the evidence.
In psychology (which I studied) you learn about the roughly 50/50 division between genes and environment (nature vs nurture), and of course that there are interactions between the two.
An example would be your height. Your genetic make-up determines for the most part how tall you will become, but if you’re malnourished whilst growing up, you will come up a few centimetres short in the end.
Or think of your temperament. You may be stoic, or a hothead. Over the years you will learn to deal with how you’re wired, and some do better at this than others: they learn better techniques, or they might be unlucky in their childhood.
And this is where Pinker, armed with data, made me think about things a bit differently than before. He argues that your family and all the experiences shaped by your parents (the ‘shared’ experiences you would have in common with siblings) don’t matter at all, at least not for the variance in the outcomes you will have.
That is, whether we’re looking at the expression of your temperament or the chance that you will end up in jail, about 50% of the variance is explained by your genes and about 50% by your unique experiences.
Unique experiences? The friends you smoked weed with when you were 15. The TV shows you watched in your bedroom. The teacher who took you under his wing. All things that are (almost) completely outside the control of your parents.
But, but… parents should have an influence, right? I also can’t shake the feeling that what a parent does should have an influence. But when looking at (identical) twins who grew up in different families, or looking at different families in similar circumstances, and many other configurations, Pinker concludes that the shared experiences really count for nothing.
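To make that split concrete: behavioural geneticists often express it with the ACE model, which partitions trait variance into genes (A), shared family environment (C), and unique experiences plus noise (E), and estimates these from twin correlations via Falconer’s formula. Below is a minimal sketch of that calculation; it is my own illustration with made-up numbers chosen to match the rough 50/0/50 split, not data or code from the book.

```python
# Minimal sketch (my illustration, not from The Blank Slate) of how twin studies
# partition trait variance into A (genes), C (shared family environment) and
# E (unique experiences + noise) using Falconer's formula.

def ace_from_twin_correlations(r_mz: float, r_dz: float) -> dict:
    """Estimate A, C, E from identical (MZ) and fraternal (DZ) twin correlations."""
    a = 2 * (r_mz - r_dz)   # heritability: genes
    c = 2 * r_dz - r_mz     # shared (family) environment
    e = 1 - r_mz            # non-shared, unique experiences
    return {"genes (A)": a, "shared env (C)": c, "unique env (E)": e}

# Made-up illustrative correlations, chosen to match the rough 50/0/50 split:
print(ace_from_twin_correlations(r_mz=0.50, r_dz=0.25))
# {'genes (A)': 0.5, 'shared env (C)': 0.0, 'unique env (E)': 0.5}
```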
Looking at this in another way, you could say that there are simply many more (influential) unique experiences. Take, for instance, how your experience of sex(uality) is formed. There are of course the genes. But what would have more influence: parents telling you about the birds and the bees, or your first good/bad/average very intimate evening? Or the stories your peer group tells you, and the expectations that differ from class to class and from peer group to peer group?
What if you can still shape this environment? As a parent, can’t you choose where your kid grows up and influence them that way? I guess that may still be true. Still, you’re only marginally improving the environment (which accounts for 50%), within which there is still so much variation in unique experiences.
Say you choose the best school, but because your kid is now surrounded by other kids who are smarter, he becomes very insecure and gets bullied. Or you move to the countryside because you believe it’s safer, and he gets hit by a car in the middle of nowhere.
Ok, enough rambling; Pinker does end the chapter in a good way. Don’t see your kids as a blank slate you need to shape and fill. No, see your kids as your friends, as (little) people you want to hang out with, to enjoy your time together (they have half your genes, so you might get along great). And yes, don’t be a bad parent (why would you even want to consider that). Be a good parent because it’s the moral thing to do, not for their outcomes.
God Is Not Great by Christopher Hitchens takes reason to religion. It’s a deep dive into a terribly important topic, not least because religion has shaped (and for the foreseeable future will shape) our lives. Whilst some will argue that we’re already living in the next Enlightenment (or hope so, like Steven Pinker), Christopher Hitchens is more militant and political: if we need an Enlightenment, he will be one of its horsemen.
Here are some observations I had on the chapters:
I found this to be quite the enlightening read, and although I don’t think religion and the discussion around it will occupy much of my mind-space, I think this is a good introduction to the sins of religion and it has given me a better understanding of the space.
How to Get People to Do Stuff by Susan Weinschenk can be seen as a guide to actionable psychology/behavioural economics. The book is full of tips and tricks on how to get people to take action. It’s quite high-level but does reference good sources (e.g. research by Daniel Kahneman). Here are the sections:
I skimmed through quite a few parts (but did look at the strategies) and now have 9 actionables to do for Queal.
This year my theme is Connection. And for this, I’ve started laying the groundwork. Just like in the second half of 2018, I’ve been keeping up my updates on the Timeline.
One big thing I’ve done is to follow a course on ‘The Molecular Mechanisms of Ageing’. The course was quite detailed and I’ve learned some more of the fundamentals of ageing. I hope to use this knowledge further.
I’ve also followed a course on JavaScript from Codecademy. The course was quite enlightening; now I should (and will) start using it (and future courses on PHP and other tools) to upgrade my website and workflows.
One other actionable was to start using more ‘book’ knowledge in real life, and at Queal we’ve been implementing two books into our routines: one about Building a StoryBrand, and the other about usability testing (Rocket Surgery Made Easy).
Here is my analysis of the goals and various updates:
Goal 1: Make this website a true personal knowledge hub
I’ve added almost all of my back-catalogue from before moving the website over. I’ve also done the same from another wiki-like system I used. Of course, I haven’t yet written all the blogs/reviews I wanted to write.
The goal stays the same, and in the coming quarter I hope to improve search and continue to add new things to the Timeline, reviews, etc. Maybe I will write a long essay, but no guarantees.
Goal 2: Eat good meals that support my well-being 90% of the time
Yes and no. Most meals I eat at home I’ve made myself and put some effort into. I do see that I eat quite a lot, and that includes quite a lot of yoghurt with toppings at work.
I will keep being conscious of this, and with Lotte moving in I think my food will be very good. I might even bring some more to the office (and have some variety next to the 2-3 meals of Queal per day).
Goal 3: Keep on improving my house
Yes. I’ve done quite a lot since last writing here. The bathroom is finished (for which I only did the door and painting). I take a bath about once a week and of course it’s very convenient to have a toilet upstairs.
Next to that, Lotte is moving in soon (a few days after writing this draft, and on the day of publishing), so we’ve been making the house even cooler in the meantime.
The plants have multiplied, we’ve painted some walls, there is a new couch, new bookshelves/library, etc. I like how everything looks and Lotte will take some time to sort out her things.
One other addition is ‘Anne’, the robot vacuum (from iRobot), and he is working quite well. Today I put away all the cables he was messing up, so he should be all good to go.
Goal 4: Achieve my fitness goals
For the last few weeks I’ve been following my own new plan, and that is going very well. I don’t know yet what works best in terms of weight loss versus muscle gain, but everything looks good for now.
I’m still learning the snatch and improving my clean & jerk. I haven’t measured my max yet (I will do so in about 7 weeks), and who knows, I might be a bit closer to the 90kg.
Goal 5: Write Spero
No. I have this on my daily checklist. Who knows if I will be able to incorporate this in my routine somewhere in the coming months.
Alright, that is it for now. I don’t have any new goals at the moment; let’s just be happy with the ones I have, as there is enough to be done already.
“Bob Johansson has just sold his software company and is looking forward to a life of leisure. There are places to go, books to read, and movies to watch. So it’s a little unfair when he gets himself killed crossing the street.
Bob wakes up a century later to find that corpsicles have been declared to be without rights, and he is now the property of the state. He has been uploaded into computer hardware and is slated to be the controlling AI in an interstellar probe looking for habitable planets. The stakes are high: no less than the first claim to entire worlds. If he declines the honor, he’ll be switched off, and they’ll try again with someone else. If he accepts, he becomes a prime target. There are at least three other countries trying to get their own probes launched first, and they play dirty.
The safest place for Bob is in space, heading away from Earth at top speed. Or so he thinks. Because the universe is full of nasties, and trespassers make them mad – very mad.”
This was a fun story and it makes me wonder about the second and third book in the ‘Bobiverse’ series.
Hmm, I do realise that there is a short story loop at the beginning:
You: Bob, sold his company, etc
Need: to live, and well, you’ve been hit by a bus
Go: you are an AI and you need to figure out how this works
Search: learn to work with the tools you have. Learn more about the world
Find (with the help of a guide): learns how to protect himself, use his new abilities
Take: has to leave the world, and leave all his connections to the world behind
Return: finds his humanity again in VR etc (and returns to Earth to save it later)
Change: he is the new Bob (and Bill, Homer, etc)
You: Bob, AI, cruising through space
Need: to survive from other AIs humans made (but a lot of new goals and subplots are introduced later, and there I think the story might be less good, but also interesting, hmm)
Go: on the way to new resources in other solar systems
Search: energy, place to be safe, make copies, learn skills
Find (with the help of a guide): arrives in other solar system(s), has new skills, improves, finds way to save humanity (after a while at least) (hmm, not really a guide here except from some old knowledge of tv series etc, and some quotes from The Art of War)
Take: not all Bobs survive, humanity? (but not really, because you don’t really care about that too much) (I guess there could/should have been more sacrifice?)
Return: populates the universe, goes back to Earth
Change: he is the new Bob (and Bill, Homer, etc)
I guess another problem I had with the story structure was the lack of closed loops. The threat of the Brazilian probe is still there, there is another type of intelligent civilisation out there (one that took the metal out of one system and left some bots behind), there are the humanoids on another planet (and the gorillas etc. they have to survive), and so on.
On Wednesday the 13th of March 2019, the EA Rotterdam group had their seventh reading & discussion group. This is a deeper dive into some of the EA topics.
The topic for this event was AI x Future: Prosperity or destruction?
During the evening we discussed how artificial intelligence (AI) could lead to a wide range of possible futures.
Although we gave each group the instruction to think about only one side, all 3 groups of course also considered the point of view opposite to the one they had to argue. Here are the questions, the presentations, and my personal summary of the night.
We (the organisers of EA Rotterdam) thank Alex from V2_ (our venue for the night) for hosting us, and Jan (also from V2_) for being our AI expert for the evening.
If you want to visit an EA Rotterdam event, visit our Meetup page.
These were our starting questions:
Bright Lights
Dark Despair
Bonus:
We started the evening with a presentation by Christiaan and Floris (me). In it, we explained both Effective Altruism (EA) and how (through this framework) we look at AI.
After that, we split the group into two and both groups worked on making a mindmap/overview of the questions asked above (download them). This is a summary of both sides:
These are the two posters from the Dark Despair groups:
Privacy
In a world with AGI, it could possibly track everything you do. The data that is in disparate systems could be combined and acted upon (not in your interest). Think 1984. You won’t be able to hide; your face will be detected.
Totalitarian State
This leads right into the second point of despair: a totalitarian state, one where Big Brother is always watching. The EU is making a case for privacy and human rights, but can they withstand AGI?
Censorship, like that in China already today, could lead to total control of the population. You might not be able to leave (e.g. if your social credit score is too low).
War
Killer bees, but this time for real (or well, artificial, with tiny bombs). This is already real, and warfare can become more dangerous and one-sided with an AGI on one side of the battle. Who presses the button? And are the goals of the AGI the same as ours?
And what if this makes war cheaper? Instead of training a soldier for years at a cost of millions, just fly in some (small) drones that control themselves. Heck, what if they can repair themselves?
Will there be any empathy left in war? If you’re not there, why see the other side as a real human?
Jobs
Will there be any jobs left? A(G)I might leave us without mundane and repetitive tasks (a positive in most cases), but what about
And for whom would we be working? Will it be to better humanity, or to further the goals of the AGI (which might not align with ours)?
Social Media
Looking closer to home, learning algorithms (ANI) are already influencing our lives and optimising our time on social networks, making us hanker for likes, hearts, approval. What if Facebook (social media), Amazon (buy this now, watch Twitch), Google (watch YouTube), Netflix, etc. become even better at this? Will we be the fat people from WALL-E?
Economic Inequality
And whilst we’re binge-watching some awesome new series, the AGI is hacking away at tax evasion (which some people are already good at; imagine the possibilities for an AGI).
Where will the benefits go? Do they go to society (like now via taxes and positive externalities) or will companies (and their executives) rake in all the benefits? With more and more data, who will benefit? How will the benefits ‘trickle-down’?
Autonomous Vehicles
In many US states,
Consciousness
Consciousness and intelligence don’t go hand in hand. Will AGI enjoy art, music, or anything at all? And (surprisingly to me) we asked: are humans becoming less conscious?
AGI vs AGI
What if you take the time to program the safety into your AGI, and then the other team (read: country) doesn’t and their AGI becomes more intelligent faster, but doesn’t share our goals? Guess who ‘wins’.
This is the poster from the Bright Lights group.
Anti-War
AGI could find better compromises and mutual interests, for instance if it could plan scenarios better (show the losses from war, have shorter wars, show the benefits of cooperating, etc.). We pesky humans tend to be quite negative; what if AGI could show us that war is not needed to achieve our goals?
Health/Medicine
With better data analysis, we can make better predictions and make better medicine. We can see the unpredicted/unknown and intervene before it’s too late.
We can help people who are addicted to prevent relapse. Data from cheap trackers could help someone stay clean. It could even detect bad patterns and help people before things get too bad.
If we understand how proteins fold (and we’re getting better at it, AlphaFold), we might cure every disease we know. The possibilities of AGI and health are endless and exciting.
Mental Health
A chat-bot could keep you company. If we have more old people (before we make them fit again), an AGI could be their companion.
Chat to your AGI and ditch the therapist. Get coaching and live your best life. Not only for people who have access to both right now. No, therapy and coaching for everyone in the world.
Personal
AGI could help you with making life decisions (think dating simulation in Black Mirror). Choose the very best career for your happiness/fulfilment. Should you have children? Dating, NO MORE DRAMA!
Have your AGI tell you the weather. Have it be your personal yoga teacher, your basketball coach. Let it take away boring work, save you time, and let you live your best life.
Two hours isn’t enough to tackle AI and our possible future. But I do hope that we’ve given everyone who was there, and all of you reading this, some inspiration about what the possible futures are.
“May we live in interesting times” is a quote I find very appropriate for this topic. It can go many ways (and is doing so already). Whether, and when, we will have AGI, we will see. Until then, I hope to see you at our next Meetup.
Videos
Nick Bostrom – What happens when our computers get smarter than we are?
Max Tegmark – How to get empowered, not overpowered, by AI
Grady Booch – Don’t fear superintelligent AI
Shyam Sankar – The rise of human-computer cooperation
Anthony Goldbloom – The jobs we’ll lose to machines — and the ones we won’t (4 min)
Two Minute Papers – How Does Deep Learning Work?
Crash Course – Machine Learning & Artificial Intelligence
Computerphile – Artificial Intelligence with Rob Miles (13 episodes)
Books
Superintelligence – Nick Bostrom (examining the risks)
Life 3.0 – Max Tegmark (optimistic)
The Master Algorithm – Pedro Domingos (explanation of learning algorithms)
The Singularity Is Near – Ray Kurzweil (very optimistic)
Humans Need Not Apply – Jerry Kaplan (good intro, conversational)
Our Final Invention – James Barrat (negative effects)
Isaac Asimov’s Robot Series (fiction 1940-1950, loads of fun!)
TV Shows
Person of Interest (good considerations)
Black Mirror (episodic, dark side of technology)
Westworld (AI as humanoid robots)
Movies
Ex Machina (AI as humanoid robot)
Blade Runner (cult classic, who/what is human?)
Eagle Eye (omnipresent AI system)
Her (AI and human connection)
2001: A Space Odyssey (1968, AI ship computer)
Research/Articles
Effective Altruism Foundation on Artificial Intelligence Opportunities and Risks
https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/
80000 hours Problem Profile of Artificial Intelligence
https://80000hours.org/topic/priority-paths/ai-policy/
80000 hours on AI policy (also has great podcasts)
Great, and long-but-worth-it, article on The AI Revolution
Future Perfect (Vox) article on AI safety alignment
https://cs.nyu.edu/faculty/davise/papers/Bostrom.pdf
Ernest Davis on Ethical Guidelines for a Superintelligence
https://intelligence.org/files/PredictingAI.pdf
On how we’re bad at prediction when AGI will happen
https://intelligence.org/files/ResponsesAGIRisk.pdf
Responses to Catastrophic AGI Risk
https://kk.org/thetechnium/thinkism/
Kevin Kelly on Thinkism, why the Singularity (/AGI) won’t happen soon
https://deepmind.com/blog/alphafold/
Deep Mind (Google/Alphabet) on Alphafold (protein folding)
Meetup:
Awesome newsletter (recommended by an attendee):
http://www.exponentialview.co/
Download the full resource list
Well, not actually a book but a monograph, one that accompanies Good to Great. Turning the Flywheel by Jim Collins goes deeper into the concept of the flywheel. Below I will define the flywheel and give two interpretations of it, one for Queal and one for myself.
The Flywheel effect is a concept developed in the book Good to Great. No matter how dramatic the end result, good-to-great transformations never happen in one fell swoop. In building a great company or social sector enterprise, there is no single defining action, no grand program, no one killer innovation, no solitary lucky break, no miracle moment. Rather, the process resembles relentlessly pushing a giant, heavy flywheel, turn upon turn, building momentum until a point of breakthrough, and beyond.
7 steps to capturing your own flywheel:
1. Create a list of significant replicable successes your enterprise has achieved.
2. Compile a list of failures and disappointments.
3. Compare the successes to the disappointments and ask, “What do these successes and disappointments tell us about the possible components of our flywheel?”
4. Using the components you’ve identified (keeping it to four to six), sketch the flywheel.
5. If you have more than six components, you’re making it too complicated; consolidate and simplify to capture the essence of the flywheel.
6. Test the flywheel against your list of successes and disappointments.
7. Test the flywheel against the three circles of your Hedgehog Concept.