From Bacteria to Bach and Back
From Bacteria to Bach and Back by Daniel Dennett is XYZ
Juvenescence by Jim Mellon and Al Chalabi is a book aimed squarely at investors who want to invest in the new hot thing and get better than average returns. I personally am not on board with the long-term ability of people/groups/investors to do this (A Random Walk Down Wall Street), but at the same time I’m very interested in the field and this book provides a good helicopter view of the possibilities.
Here is a compressed index of the book:
Greg Bailey (investor and entrepreneur) gives an introduction to the book. He argues that we’re at a breakthrough moment in ageing research. Where first the focus was on lifestyle interventions (adding a few healthy years, possibly even extending lifespan a bit), it’s now on drugs/therapies that might extend life by much more. He argues that inflammation and insulin resistance are most probably two mechanisms that have much to do with how we age (though it isn’t yet clear how the causal effects work). He personally takes metformin, a statin and baby aspirin, along with fish oil, curcumin, vitamins D and B, nicotinamide mononucleotide (NMN) and episodic calcium.
Longevity is taking flight, just as flight was some 100 years ago. Two key issues are:
To do this, we must look at the cell itself, an area that is now exploding (with investment and discoveries). It’s also only relatively recently that more and more researchers have begun to see ageing as a disease. Before (and still, for many people), the diseases of ageing were treated as separate conditions. The effects of a longer healthy lifespan will be lower health costs, more productivity and economic growth.
Definition: Ageing is marked by a progressive loss of physical integrity, with lessened functionality and increased vulnerability to death.
The Human Cell Atlas aims to identify every cell in every tissue (about 37 trillion in total).
“The short answers: it’s bad, maybe, possibly, and probably not!”
The goal of most researchers is to extend healthy lifespan, so that the period of illness at the end is very short. Most current therapies are focussed on this; only later will we also be able to (radically) extend lifespan in general. The current ‘hard’ ceiling proposed in the book is 115 years. They speak of a bridge being built, one that keeps us alive until we find the ‘real’/long-term solutions.
One change in attitudes is that we can see ageing as a single disease complex. Regenerative medicine will allow us to restore our bodies to the best/optimal state. Thus we should focus on the causes of ageing (e.g. chronic inflammation, cell breakdown, mitochondrial DNA damage, stem cell depletion, cellular senescence).
The authors speak about why it’s now the right time to invest. They argue that advances in genomic sequencing and the imminent arrival of new therapies make this the right moment.
Human research takes a very long time (we don’t die quickly) and is very expensive. Yeast, worm, and mouse models are sometimes good proxies (and the best we have now).
“Already, we can, and are, reducing the risk of dying from the diseases of ageing. For instance, cardiovascular disease (CVD) related deaths and cancer deaths are each falling in developed countries by about 2 to 4% per annum.”
“Ageing is rigorously described as senescence, the progressive degradation of bodily functions.”
Changes/mutation at the beginning of life may help us, but be detrimental later in life (Medawar-Williams Theory).
“Molecules become unbound, genes become inefficient, waste products (cellular debris) build up… shortening of telomeres, reduced mitochondrial function (limiting energy production), the depletion of the potency of stem cells, and impaired cellular networks.”
“For now there are no specifically approved or recommended treatments to delay or to reverse ageing, other than [caloric restriction/lifestyle changes]”
Yet they are optimistic because we are starting to understand more and more.
“The mitochondria, large structures (‘organelles’) within our cells, are the machines that extract energy from nutrients and store it as adenosine triphosphate (ATP).” With age, they become less effective.
The immune system also becomes less effective (immunosenescence). And other things break down (we get cancer, lose hair, lose balance, type 2 diabetes, etc).
Some types of cells are immortal (e.g. cancer cells). But many of our cells accumulate too many copying mistakes after about 50 divisions. This has been called the Hayflick Limit.
Twin studies showed that only 20% of the variation in age at death was genetic; 80% was environmental (it’s not clear how much is (bad) luck and how much is smoking, diet, sleep, etc.).
Because of attacks from outside the body (exogenous) and inside (endogenous), we can’t expect our bodies to stay the same (homeostasis). I think Aubrey de Grey argues that we should make our repair systems good enough to maintain this. One other aspect to take into account is oxidative stress (reactive oxygen species, ROS). This damage also increases over time, as free radicals damage cells.
You should be able to estimate your biological age with an epigenetic clock (Horvath et al.).
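At heart, Horvath-style epigenetic clocks are a penalized linear regression over DNA-methylation levels at selected CpG sites. A toy sketch of the idea — the site names and coefficients below are invented for illustration; the real clock uses ~353 CpG sites fitted with elastic-net regression:

```python
# Toy epigenetic-clock sketch: a linear model over CpG methylation levels.
# Sites and coefficients here are made up for illustration; the actual
# Horvath clock fits ~353 CpG sites with elastic-net regression.

intercept = 12.0
weights = {"cg_site_a": 55.0, "cg_site_b": -30.0, "cg_site_c": 41.0}

def predict_biological_age(methylation: dict) -> float:
    """Methylation values are 'beta' fractions in [0, 1] per CpG site."""
    return intercept + sum(w * methylation[site] for site, w in weights.items())

# A hypothetical sample's methylation profile:
sample = {"cg_site_a": 0.62, "cg_site_b": 0.31, "cg_site_c": 0.45}
print(round(predict_biological_age(sample), 1))
```

The point is only that “biological age” here is a weighted sum of measurable molecular markers, not a single magic number read off the genome.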
“Though we must again stress that our estimates of timeframes are immensely speculative… average life expectancy… will rise from about 73 today to close to 100 (in 20 years)” … “That said, if you can stay alive for another ten to twenty years, and if you aren’t yet over 75, and if you remain in reasonable health for your age, you have an excellent chance of living to over 110 years old.”
“[Inflammaging] describe[s] the aspects of the breakdown of intercellular communication and the gradual failure of the immune system.”
When the innate and learned/adaptive systems begin to fail (called immunosenescence), your body can’t fight infections anymore. In parallel, the immune system is fighting battles it can’t win, leading to persistent inflammation (inflammaging).
Most of this happens in your gut, where your microbiome (many bacteria) lives. Specifically, the NLRP3 gene (which encodes the cryopyrin protein) becomes less effective.
The production of B-cells (from bone marrow) and T-cells (ditto, plus the thymus) also slows with ageing. There are also fewer ‘naive’ ones that are open to learning to fight new pathogens. “Immunosurveillance of persistent viruses, and in particular of the cytomegalovirus (CMV), causes stress to T-cells.”
They mention that inflammaging is linked to the big killers. And that resveratrol and metformin might have pathways to suppress/dampen inflammaging.
Growth factors (IGF-1, FGF7) might also help here. Yet the former is also mentioned elsewhere as a possible negative influence.
Another avenue is to improve our (gut) microbiome. One theory is that in older people the relationship moves from symbiotic to hostile.
“… ageing is currently inescapable, that it is characterised by the progressive loss of functioning of our bodies and that it is the principal cause of [deaths from cardiac diseases, cancer, etc].”
The two (broad) views are that 1) ageing is preprogrammed in a way, and 2) that it’s random/stochastic. The former says that there is something in our cells that triggers at some programmed time. The latter says that accumulation of damage, free radicals, etc just heap up over time.
The hallmarks of ageing are (López-Otín et al.):
This is the loss of homeostasis in the proteome (the 100,000–250,000 proteins we need for life). Proteostasis involves cleaning up misfolded proteins. When this no longer happens effectively, diseases start to develop.
Chaperone molecules are proteins (or drugs) that refold misshapen proteins. Two systems are used by the body to destroy misfolded proteins (the autophagy-lysosomal and ubiquitin-proteasome systems).
Too many unfolded/misfolded/clumped proteins are implicated in (causing?) Parkinson’s and Alzheimer’s (Powers, et al. 2009).
Boosting proteostasis might increase longevity.
Phenotypes are the physical and behavioural expression of genotypes. You get the latter from your parents; the former is also influenced by your environment. Natural selection might favour those who reproduce earlier (since their genes are passed on sooner), though this is offset by the chance of those kids surviving. (More is said about evolution and why, for instance, reptiles don’t seem to age that quickly / are still fit at an older age.) They also use the example of eunuchs (no chance of reproduction), but they didn’t live longer in every population studied.
Most scientists don’t invoke the second law of thermodynamics (entropy) when talking about ageing. Leonard Hayflick does, and so does Peter Hoffman. Aubrey de Grey argues that people are very good at combatting entropy, but that we should help our cells’ repair mechanisms.
The disposable soma (body) theory states that ageing occurs due to the accumulation of damage during life. This view argues that we die some time after our ‘usefulness period’ (passing on genes), but it leaves the door open to doing repairs after that. One correct prediction it makes is that in times of low calories, people survive longer (and have fewer kids).
The free-radical/DNA-damage/oxidative theory of ageing states that those three are responsible for ageing. Antioxidants (as applied now) don’t show consistent positive effects. Too much unrepaired DNA damage has been shown to change the chromatin (what chromosomes are made of). Caloric restriction (CR) might help a bit (but not much).
The theory of antagonistic pleiotropy argues that a gene variant that is beneficial to our survival in early life becomes harmful as we age. Another view is that ageing stops at very old ages (and people die of exhaustion?).
The hyperfunction theory argues that the ‘hyperfunction’ of things useful in youth are also causes of ageing/death. Excessive signalling of mTOR and insulin/IGF-1 are examples of this.
The rate of living theory says that the slower the metabolism, the longer an organism lives (Kleiber’s Law).
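Kleiber’s Law says basal metabolic rate scales as (body mass)^(3/4), so the per-gram rate falls as mass^(-1/4) — the “slower burn” the rate-of-living theory points to in larger, longer-lived animals. A quick illustration (the masses are round ballpark figures, not book data):

```python
# Kleiber's Law: basal metabolic rate scales as (body mass)^(3/4).
# Per-kg metabolic rate therefore falls as mass^(-1/4): large animals
# "burn slower" per unit of tissue than small ones.

def relative_metabolic_rate(mass_kg: float) -> float:
    """Whole-body rate in arbitrary units, proportional to mass**0.75."""
    return mass_kg ** 0.75

# Ballpark masses for illustration:
for animal, mass in [("mouse", 0.02), ("human", 70), ("elephant", 5000)]:
    per_kg = relative_metabolic_rate(mass) / mass   # proportional to mass**(-0.25)
    print(f"{animal}: per-kg rate ∝ {per_kg:.3f}")
```

The per-kg numbers fall steadily from mouse to elephant, which is the pattern the rate-of-living theory leans on.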
Most theories contain a piece of the truth. Yet we don’t know at this moment which is best/what is to come (otherwise we would already have solved ageing).
DNA damage occurs through free radicals (ROS). This happens about 10,000 times per day in humans, in every cell! And one repair takes about 10,000 molecules of ATP. (So if my math is correct, that’s 100 million ATP molecules per cell per day. We have 37 trillion cells, so about 3.7e+21 ATP molecules get to work every day. The book runs the numbers too: we hold only about 50 grams of ATP at any moment, but it’s recycled so fast that we turn over some 180 kg per day.)
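The back-of-the-envelope arithmetic above checks out in a few lines (repair counts and the 50 g / 180 kg figures as quoted from the book; ATP’s molar mass and Avogadro’s number are standard constants):

```python
# Rough check of the book's ATP arithmetic (input figures as quoted in the text).
AVOGADRO = 6.022e23          # molecules per mole
ATP_MOLAR_MASS_G = 507.18    # g/mol, standard value for ATP

repairs_per_cell_per_day = 10_000
atp_per_repair = 10_000
cells = 37e12                # ~37 trillion cells (Human Cell Atlas estimate)

atp_molecules_per_day = repairs_per_cell_per_day * atp_per_repair * cells
print(f"ATP molecules/day spent on DNA repair: {atp_molecules_per_day:.1e}")

# Mass of that much ATP: only a few grams, tiny next to the ~180 kg/day
# total turnover the book cites.
atp_grams = atp_molecules_per_day / AVOGADRO * ATP_MOLAR_MASS_G
print(f"that is about {atp_grams:.1f} g of ATP")

# Implied recycling rate of the ~50 g standing pool to reach 180 kg/day:
print(f"pool turned over about {180_000 / 50:.0f} times per day")
```

So DNA repair alone uses roughly 3 grams of ATP a day, and the 180 kg figure implies the 50 g pool is recycled thousands of times daily.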
“Unrepaired DNA damage is particularly noticeable in non-dividing or slowly dividing cells, such as neuronal, heart and skeletal cells because the mutations tend to persist. Whereas in dividing cells, such as those of the liver, DNA damage that is not repaired will normally automatically induce cell death, though occasionally it can lead to the development of aberrant cancerous cells.”
“ATP liberates energy by being converted into ADP (adenosine diphosphate), by removing one of the phosphate groups. ATP becomes spent when it is converted to ADP. The ADP group is then recycled in the mitochondria, recharged, and re-emerges as ATP and the cycle continues.”
More nuclear DNA damage over time means a greater risk of cancer (which happens with age).
Some (SENS) argue that mitochondrial DNA (mtDNA) mutations in slowly dividing cells are causative of ageing.
Deaths from cardiovascular diseases (CVD) are falling worldwide; in the US by about 20% per capita over the last 20 years. LDL (‘bad’ cholesterol) is a major factor in the formation of heart disease. Statins (David Sinclair also argues this) are a wonderful discovery that helps combat it. (Although food and lifestyle interventions might prevent it in the first place.)
“Statins reduce the amount of bad cholesterol in the blood, and so lessen the amount of arterial blockage from the build-up of plaques. Statins also change the heart structure, reducing thickness and volume and reducing the chance of a heart attack.”
Many other interventions related to CVD are mentioned in the book.
“In industrialised nations, about one in two people will develop a form of cancer during their lifetimes, and generally between the ages of 40 and 80. Just under 8 million people die of cancer worldwide each year.”
Immunotherapy is the treatment where the immune system is stimulated to better fight cancer (and this is also how our bodies fight pre-cancerous cells normally). (analogy to a car) “you first have to release the brake, then press on the accelerator and steer where you’d like to go”.
Early therapies only pressed the accelerator: giving patients immune cytokines (which promptly attacked other cells and led to deadly inflammation). But if the patient did survive, there were long-term benefits (the cancer not coming back).
CTLA-4 is a more targeted (at cancer cells) version of this process.
Analysts predict that 60% of cancers can be managed by immunotherapies.
Another approach in this direction is CAR-T (chimeric antigen receptor T-cells). Here the patient’s T-cells are taken out of the body, engineered (and enhanced, of course) in a lab, and returned.
“[A]ge reduces lung elasticity, respiratory strength and the efficiency of the chest wall in respiration.”
Smoking is a leading cause of respiratory disease. About 10 million people die of this each year.
The categories are:
Intervention at the genetic level might be a solution for some of these (e.g. COPD). These are at least a decade away according to the authors.
Note: the authors might consider Alzheimer’s a type of diabetes.
Today 8% of the adult population worldwide has type 2 diabetes (450 million people). Diabetes is associated with many (if not all) ageing diseases (as a cause). This is all lifestyle-induced…
Diabetes causes 5 million deaths per year.
Type 1 has a genetic component and affects 1% of people worldwide, it’s medically controlled with insulin. Type 2 starts with insulin resistance, obesity and insufficient exercise are the main causes. Lifestyle changes can reverse it (dieting and exercising). A short period of fasting could reverse type 2 and type 1 diabetes (Valter Longo, The Longevity Diet), in mice!
Metformin was originally developed to combat type 2 diabetes.
The book mentions many other types of drug-interventions (mostly related to insulin) for type 1 and 2.
“[T]he prevalence of dementia and neurodegeneration is actually falling in the developed world.” (Down by a fifth in England and Wales over a span of 22 years.)
The different kinds of dementia:
“About 32% of people over 85 years old in the US have been diagnosed with Alzheimer’s.” There are 50 million people living with dementia (130 million in 2050 if no new interventions).
Statins don’t protect against dementia. It is characterised (but maybe not caused) by the build-up of proteins (Lewy bodies, amyloid plaques, protein tangles) between cells.
There is a strong link between diabetes and dementia. Poor diets might be a cause (processed foods). Gluten is probably ok for 99% of people (not linked).
“It is thought that the accumulation of misfolded proteins is the result of the failure of the so-called chaperone system, whereby proteins are guided into their 3D structures by helper molecules. The failure of autophagy to remove these misfolded proteins, as well as damaged organelles, through lysosomal degradation aggravates the situation in both Alzheimer’s (tau peptides) and in Parkinson’s (a-synuclein protein aggregation).”
“About 70% of Alzheimer’s is genetic, so that means about 30% comes from environmental factors.”
There is currently no cure (and many failed attempts) for Parkinson’s. What we do know is that exercise is good (it induces autophagy and the clearance of amyloid plaques and tau protein tangles). Genetic interventions will also come with time.
Shorter-lived animals can (with some caveats) be very good proxies for humans (we live too long). Fruit flies, roundworms, mice, and baker’s yeast are some of the most used animals.
One problem with this is that they were selected for short lifespans in the first place. Wild mice, for instance, live much longer than lab mice. Their homogeneity also doesn’t reflect real life well. Also, telomerase isn’t a (lifespan) issue for mice.
Another avenue of research is to look at long-lived animals and see what mechanisms they have that keep them healthy. The hydra can regenerate indefinitely and FOXO genes seem to be an interesting avenue of research.
Insilico Medicine is a company that applies machine learning (ML) to human gene-expression data to see if there is a biological age (and where you fall on that scale). They also use ML to find new drugs/molecular structures.
Fertility is also a topic of interest and improving IVF (with NAD+ precursors) and getting women ovulating after menopause again is being explored.
Autophagy is the way cells get rid of garbage, “[it] delivers unwanted components from within the cytoplasm to the lysosome, which reduces them to amino acids and other cellular building blocks.” As we age, this process breaks down more often/becomes less efficient. Spermidine might work to keep it going better (reduce inflammation, clean cells).
The Buck Institute is the world’s first independent research institution focused on using ground-breaking research to prevent and cure age-related chronic diseases.
Biomarkers of ageing are quantitative variable indicators of biological age. Inflammation agents, glucose metabolism biomarkers, and others are examples. This type of indicator might better predict health and care needs than chronological age.
David Sinclair (Life Biosciences) – Both researcher and entrepreneur (12 longevity companies).
Nir Barzilai – Metformin (2019 trial, also a cautionary note: higher Alzheimer’s risk?), HDL.
Aubrey de Grey – See Ending Aging, SENS research foundation.
Leonard Hayflick & Laura Deming – Entropy, animal models don’t translate well.
Craig Venter – Sequence Human Genome, Human Longevity Inc.
(etc)
If ageing were classified as a disease, things would go more smoothly. 90% of drugs fail in clinical trials. Phase 2 tests efficacy (and this is where most drugs die).
Young blood for old mice works! The mice experiments probably worked because of the parabiosis (stitching together) of the liver and circulatory system (not per se only/primarily the blood itself). But maybe not for humans. Possibly you can administer osteopontin (a protein that is lacking at older age, so no transfusion needed). More research is needed.
Unpleasant, doesn’t seem to really work for people (i.e. it doesn’t translate well from animal models), and is hard/impossible to adhere to. Intermittent fasting might have the same or a better effect.
Metformin (an antidiabetic drug) activates the enzyme AMPK (AMP-activated protein kinase). Possible life-extension effects; the TAME trial is on the way.
mTOR stands for mechanistic target of rapamycin; it acts through 2 pathways, of which mTORC1 is the focus. But inhibiting it has side-effects in humans, so rapalogs (analogues of rapamycin) are being developed.
A family of proteins, expressed under stress (they activate AMPK).
X
The Singularity Trap by Dennis E. Taylor is another good book by the author of the Bobiverse series. This one has less fun and universe-building and is more focussed on one particular event and story. It still has quite some humour and interesting dynamics. Less expansive, still fun and well-written.
From another reviewer: “Where the Bobbiverse novels relied on an equal mix of pop-culture nerdiness and solid SF idea exploration in the realm of a self-replicating AI who is still effectively “human”, The Singularity Trap jumps on some of the same solid SF ideas and plotting but does it without most of the humor.”
I can agree and one other thing I liked was the narration and sound effects (like a voice-over with radio-ish sounds). So very different from Bob and his clones/friends, still good.
Semiosis by Sue Burke explores an interesting idea: what if plants could think, and what if we had to live together (in an alliance) with them?
That being said, I thought the book went on way too long and I didn’t feel much connection with the characters. It might be so because I like my sci-fi to be in space, or that the switching between generations just didn’t do it for me.
The book is also about how we want to live together. I think here it might have touched upon some good points, but it didn’t provide any revelations or new insights.
Possible Minds: 25 Ways of Looking at AI by John Brockman is a collection of essays by leading AI researchers, artists, and philosophers. They all give their own view on the state/future of AI, as/and a reflection on The Human Use of Human Beings by Norbert Wiener. Each essay is quite different and here I’ve tried to summarise them.
One immediate thing I learned/was reminded of is that technology itself will not be a force for good or bad; it is culture that does this. Technology only enables it. (Update: from a podcast about the Uyghurs in China: DNA testing can help with your ancestry, but it can also enable mass surveillance and profiling.)
Seth Lloyd is a theoretical physicist at MIT, Nam P. Suh Professor in the Department of Mechanical Engineering, and an external professor at the Santa Fe Institute.
“Wiener’s central insight was that the world should be understood in terms of information. Complex systems, such as organisms, brains, and human societies, consist of interlocking feedback loops in which signals exchanged between subsystems result in complex but stable behavior. When feedback loops break down, the system goes unstable.”
“Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation. Many obstacles and some innovations can be anticipated, but more cannot. In my own work with experimentalists on building quantum computers, I typically find that some of the technological steps I expect to be easy turn out to be impossible, whereas some of the tasks I imagine to be impossible turn out to be easy. You don’t know until you try.”
“Raw information-processing power does not mean sophisticated information-processing power. While computer power has advanced exponentially, the programs by which computers operate have often failed to advance at all.”
“As machines become more powerful and capable of learning, they learn more and more as human beings do—from multiple examples, often under the supervision of human and machine teachers. Education is as hard and slow for computers as it is for teenagers. Consequently, systems based on deep learning are becoming more rather than less human. The skills they bring to learning are not “better than” but “complementary to” human learning: Computer learning systems can identify patterns that humans can not—and vice versa.”
Judea Pearl is a professor of computer science and director of the Cognitive Systems Laboratory at UCLA. His most recent book, co-authored with Dana Mackenzie, is The Book of Why: The New Science of Cause and Effect.
“Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points. Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence.”
“Homo sapiens… create and store a mental representation of their environment, interrogate that representation, distort it by mental acts of imagination, and finally answer the “What if?” kinds of questions. Examples are interventional questions (“What if I do such-and-such?”) and retrospective or counterfactual questions (“What if I had acted differently?”). No learning machine in operation today can answer such questions.”
“I view machine learning as a tool to get us from data to probabilities. But then we still have to make two extra steps to go from probabilities into real understanding—two big steps. One is to predict the effect of actions, and the second is counterfactual imagination. We cannot claim to understand reality unless we make the last two steps.”
Stuart Russell is a professor of computer science and Smith-Zadeh Professor in Engineering at UC Berkeley. He is the co-author (with Peter Norvig) of Artificial Intelligence: A Modern Approach.
“Putting a purpose into a machine that optimizes its behavior according to clearly defined algorithms seems an admirable approach to ensuring that the machine’s “conduct will be carried out on principles acceptable to us!” But, as Wiener warns, we need to put in the right purpose.”
“The technical term for putting in the right purpose [Midas problem] is value alignment. When it fails, we may inadvertently imbue machines with objectives counter to our own. Tasked with finding a cure for cancer as fast as possible, an AI system might elect to use the entire human population as guinea pigs for its experiments.”
“AI research, in its present form, studies the ability to achieve objectives, not the design of those objectives.”
He mentions some objections he then also refutes:
“A more precise definition is given by the framework of cooperative inverse-reinforcement learning, or CIRL”
George Dyson is a historian of science and technology and the author of Baidarka: The Kayak, Darwin Among the Machines, Project Orion, and Turing’s Cathedral.
“He likes to point out that analog computing, once believed to be as extinct as the Differential Analyzer, has returned. He argues that while we may use digital components, at a certain point the analog computing being performed by the system far exceeds the complexity of the digital code with which it is built. He believes that true artificial intelligence—with analog control systems emerging from a digital substrate the way digital computers emerged out of analog components in the aftermath of World War II—may not be as far off as we think.”
“Digital computers execute transformations between two species of bits: bits representing differences in space and bits representing differences in time.”
“Analog computers also mediate transformations between two forms of information: structure in space and behavior in time”
“This [digital vs analog] is starting to change: from the bottom up, as the threefold drivers of drone warfare, autonomous vehicles, and cell phones push the development of neuromorphic microprocessors that implement actual neural networks, rather than simulations of neural networks, directly in silicon (and other potential substrates); and from the top down, as our largest and most successful enterprises increasingly turn to analog computation in their infiltration and control of the world.”
“Nowhere is there any controlling model of the system except the system itself.” (model itself is the system, can’t be reduced or ‘controlled’)
“Before you know it, your system will not only be observing and mapping the meaning of things, it will start constructing meaning as well. In time, it will control meaning, in the same way the traffic map starts to control the flow of traffic even though no one seems to be in control.”
Three laws of robotics (just kidding), of artificial intelligence:
“Provably “good” AI is a myth. Our relationship with true AI will always be a matter of faith, not proof.”
“We worry too much about machine intelligence and not enough about self-reproduction, communication, and control.”
Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds.
(quoting Wiener) “[I]n the long run, there is no distinction between arming ourselves and arming our enemies.” The information age is also the disinformation age.
“[W]e’re making tools, not colleagues, and the great danger is not appreciating the difference, which we should strive to accentuate, marking and defending it with political and legal innovations.”
“AI in its current manifestations is parasitic on human intelligence. It quite indiscriminately gorges on whatever has been produced by human creators and extracts the patterns to be found there—including some of our most pernicious habits.* These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals.”
“We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to “abuses” rained on them by inept users.”
Rodney Brooks is a computer scientist; Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). He is the author of Flesh and Machines.
(John Brockman) “[H]e is alarmed by the extent to which we have come to rely on pervasive systems that are not just exploitative but also vulnerable, as a result of the too-rapid development of software engineering—an advance that seems to have outstripped the imposition of reliably effective safeguards.”
“We rely on computers for our banking, our payment of bills, our retirement accounts, our mortgages, our purchasing of goods and services—these, too, are all vulnerable.”
“Humankind has gotten itself into a fine pickle: We are being exploited by companies that paradoxically deliver services we crave, and at the same time our lives depend on many software-enabled systems that are open to attack.”
“Moral leadership is the first and biggest challenge.”
Frank Wilczek is Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and the author of A Beautiful Question: Finding Nature’s Deep Design.
In asking if AI can be conscious, creative, and/or evil, Wilczek answers yes. “Evidence from those fields makes it overwhelmingly likely that there is no sharp divide between natural and artificial intelligence.”
Talking about the ‘Astonishing Hypothesis’ that mind emerges from matter. “People try to understand how minds work by understanding how brains function; and they try to understand how brains function by studying how information is encoded in electrical and chemical signals, transformed by physical processes, and used to control behavior.”
“No one has ever stumbled upon a power of mind that is separate from conventional physical events in biological organisms.”
“… natural intelligence is a special case of artificial intelligence.” He calls it the ‘astonishing corollary’.
“Human mind emerges from matter. Matter is what physics says it is. Therefore, the human mind emerges from physical processes we understand and can reproduce artificially. Therefore, natural intelligence is a special case of artificial intelligence.”
We have been upgrading/enhancing our intelligence for thousands of years. First with fire, glasses, clothing. Now with phones, internet, X-ray. All these enhancements can be covered in six factors: speed, size, stability, duty cycle, modularity, quantum readiness.
Human brains are still better than machines at: three-dimensionality, self-repair, connectivity, development, integration.
“If that’s right, we can look forward to several generations during which humans, empowered and augmented by smart devices, coexist with increasingly capable autonomous AIs.”
Max Tegmark is an MIT physicist and AI researcher, president of the Future of Life Institute, scientific director of the Foundational Questions Institute, and the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.
“Consciousness is the cosmic awakening; it transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty, hope, meaning, and purpose.”
“But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there’s no law of physics that says one can’t build machines more intelligent in every way than we are, and able to seed cosmic life.”
Tegmark argues that we’ve been inventing our way out of our limitations: 1) first outsourcing natural processes (heat/light/mechanical power), 2) then discovering that our bodies are also (biological) machines, and 3) now building machines that outshine us in cognitive tasks too.
“The existence of affordable AGI means, by definition, that all jobs can be done more cheaply by machines, so anyone claiming that “people will always find new well-paying jobs” is in effect claiming that AI researchers will fail to build AGI.”
“Homo sapiens is by nature curious, which will motivate the scientific quest for understanding intelligence and developing AGI even without economic incentives.”
“I’m advocating a strategy change from “Let’s rush to build technology that makes us obsolete—what could possibly go wrong?” to “Let’s envision an inspiring future and steer toward it.””
“[T]he real risk with AGI isn’t malice but competence.”
“This mistakenly equates intelligence with morality. Intelligence isn’t good or evil but morally neutral. It’s simply an ability to accomplish complex goals, good or bad.”
“Let’s create our own meaning, based on something more profound than having jobs. AGI can enable us to finally become the masters of our own destiny. Let’s make that destiny a truly inspiring one!”
Jaan Tallinn, a computer programmer, theoretical physicist, and investor, is a co-developer of Skype and Kazaa. In 2012, he co-founded the Centre for the Study of Existential Risk—an interdisciplinary research institute that works to mitigate risks “associated with emerging technologies and human activity”.
“As predicted by Turing, once we have superhuman AI (“the machine thinking method”), the human-brain regime will end. Look around you—you’re witnessing the final decades of a hundred-thousand-year regime.”
“Another strong incentive to turn a blind eye to the AI risk is the (very human) curiosity that knows no bounds: ‘When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.’”
(quoting Yudkowsky, blog) “[A]sking about the effect of machine superintelligence on the conventional human labor market is like asking how US-Chinese trade patterns would be affected by the Moon crashing into the Earth. There would indeed be effects, but you’d be missing the point.”
“… superintelligent AI is an environmental risk”
Tallinn argues that we puny humans fit nicely within the narrow confines of Earth (although we have shaped it to our liking; think air-conditioning), but that AI is very much able to survive in a much wider range of environments (e.g. deep space).
Steven Pinker, a Johnstone Family Professor in the Department of Psychology at Harvard University, is an experimental psychologist who conducts research in visual cognition, psycholinguistics, and social relations. He is the author of eleven books, including The Blank Slate, The Better Angels of Our Nature, and, most recently, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.
“A healthy society—one that gives its members the means to pursue life in defiance of entropy—allows information sensed and contributed by its members to feed back and affect how the society is governed. A dysfunctional society invokes dogma and authority to impose control from the top down.”
“The possibility that machines threaten a new fascism must be weighed against the vigor of the liberal ideas, institutions, and norms… The flaw in today’s dystopian prophecies is that they disregard the existence of these norms and institutions, or drastically underestimate their causal potency.”
“The reason is that almost all the variation across time and space in freedom of thought is driven by differences in norms and institutions and almost none of it by differences in technology.”
What I get from this is that technology is agnostic and that how we use it (norms/culture) will determine if it will be used for good or bad. Pinker argues that we/activists should focus on things like the laws, not the technology.
Pinker also dismisses the competent-but-stupid AI scenarios in which an AI is very good at completing a goal but pursues it too literally (e.g. making everyone happy by installing dopamine drips). He argues that intelligence (as a broad concept) consists of several parts that ‘grow’ together, and thus an AI able to do large things in the world will also be ‘smart’ enough not to ‘hack’ its goal. (I’m not totally sure about this line of argument, and I think Nick Bostrom especially wouldn’t agree.)
“Rates of industrial, domestic, and transportation fatalities have fallen by more than 95 (and often 99) percent since their highs in the first half of the 20th century.* Yet tech prophets of malevolent or oblivious artificial intelligence write as if this momentous transformation never happened and one morning engineers will hand total control of the physical world to untested machines, heedless of the human consequences.”
David Deutsch is a quantum physicist and a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University. He is the author of The Fabric of Reality and The Beginning of Infinity.
(about humans in the past) “Moreover, this must have been knowledge in the sense of understanding, because it is impossible to imitate novel complex behaviors like those without understanding what the component behaviors are for.”
“Such knowledgeable imitation depends on successfully guessing explanations, whether verbal or not, of what the other person is trying to achieve and how each of his actions contributes to that—for instance, when he cuts a groove in some wood, gathers dry kindling to put in it, and so on.”
“No nonhuman ape today has this ability to imitate novel complex behaviors. Nor does any present-day artificial intelligence. But our pre-sapiens ancestors did”
“Any ability based on guessing must include means of correcting one’s guesses, since most guesses will be wrong at first. (There are always many more ways of being wrong than right.) Bayesian updating is inadequate, because it cannot generate novel guesses about the purpose of an action, only fine-tune—or, at best, choose among—existing ones. Creativity is needed. As the philosopher Karl Popper explained, creative criticism, interleaved with creative conjecture, is how humans learn one another’s behaviors, including language, and extract meaning from one another’s utterances”
“So everyone had the same aspiration in life: to avoid the punishments and get the rewards. In a typical generation, no one invented anything, because no one aspired to anything new, because everyone had already despaired of improvement being possible.” (more in From Bacteria to Bach and Back, Daniel Dennett)
“The worry that AGIs are uniquely dangerous because they could run on ever better hardware is a fallacy, since human thought will be accelerated by the same technology.” (very much opposing many others who see AI as dangerous, although in many cases they are talking about two different things, Deutsch is talking specifically about creative AGI)
Tom Griffiths is Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By.
“But if you want to know why the driver in front of you cut you off, why people vote against their interests, or what birthday present you should get for your partner, you’re still better off asking a human than a machine. Solving those problems requires building models of human minds that can be implemented inside a computer—something that’s essential not just to better integrate machines into human societies but to make sure that human societies can continue to exist.”
Making inferences can be very difficult. If you prefer dessert, will your AI now buy you only desserts? Knowing what humans want (insofar as we really know it ourselves) will be a very big challenge.
“One of the tools used for solving this problem is inverse-reinforcement learning. Reinforcement learning is a standard method for training intelligent machines. By associating particular outcomes with rewards, a machine-learning system can be trained to follow strategies that produce those outcomes.”
“If you’re trying to make inferences about the rewards that motivate human behavior, the generative model is really a theory of how people behave—how human minds work. Inferences about the hidden causes behind the behavior of other people reflect a sophisticated model of human nature that we all carry around in our heads. When that model is accurate, we make good inferences.”
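To make the idea concrete, here is a toy, hypothetical sketch of inverse reinforcement learning in Python: given observed choices, it infers which hidden reward vector best explains them, assuming the person picks options with probability proportional to exp(reward) (a softmax choice model). The snack scenario, the candidate rewards, and the softmax assumption are all my own illustrative choices, not from the essay.

```python
import math

def choice_probs(rewards, temperature=1.0):
    """Softmax choice model: P(option) is proportional to exp(reward / T)."""
    exps = [math.exp(r / temperature) for r in rewards]
    total = sum(exps)
    return [e / total for e in exps]

def infer_rewards(observed_choices, candidate_rewards):
    """Pick the candidate reward vector that maximizes the data likelihood."""
    best, best_ll = None, float("-inf")
    for rewards in candidate_rewards:
        probs = choice_probs(rewards)
        log_likelihood = sum(math.log(probs[c]) for c in observed_choices)
        if log_likelihood > best_ll:
            best, best_ll = rewards, log_likelihood
    return best

# Observed: the person picked dessert (option 0) four times out of five.
observations = [0, 0, 1, 0, 0]
candidates = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]  # hypothetical reward hypotheses
inferred = infer_rewards(observations, candidates)  # -> (1.0, 0.0)
```

Regular reinforcement learning goes from rewards to behavior; this inverts it, going from behavior back to rewards, which is why the accuracy of the assumed model of human choice matters so much.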
“[W]hen it comes to understanding the human mind, these two goals—accuracy and generalizability—have long been at odds with each other. … Ultimately, what we need is a way to describe how human minds work that has the generalizability of rationality and the accuracy of heuristics.”
“To develop a more realistic model of rational behavior, we need to take into account the cost of computation. Real agents need to modulate the amount of time they spend thinking by the effect the extra thought has on the results of a decision.” The model used for this is called ‘bounded-rationality’.
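A back-of-the-envelope model of that trade-off (my own illustrative assumptions, not Griffiths’): suppose the expected loss from a hasty decision shrinks like stakes / sqrt(n) with n units of thought, while each unit of thought has a fixed cost. Minimizing the total gives a closed-form amount of “optimal thinking”.

```python
import math

def total_loss(n, stakes, cost_per_unit):
    # Decision loss from limited thinking, plus the cost of the thinking itself.
    return stakes / math.sqrt(n) + cost_per_unit * n

def optimal_thinking(stakes, cost_per_unit):
    # Setting the derivative of total_loss to zero gives
    # n* = (stakes / (2 * cost_per_unit)) ** (2 / 3).
    return (stakes / (2 * cost_per_unit)) ** (2 / 3)
```

Under this model, raising the stakes 100-fold warrants only about 21 times more thinking (100^(2/3) ≈ 21.5): cheap decisions get a quick heuristic, big ones more deliberation, which matches the “work smarter, not harder” spirit of the essay.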
“Human beings are an amazing example of systems that act intelligently despite significant computational constraints. We’re quite good at developing strategies that allow us to solve problems pretty well without working too hard. Understanding how we do this will be a step toward making computers work smarter, not harder.”
Anca Dragan is an assistant professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She co-founded and serves on the steering committee for the Berkeley AI Research (BAIR) Lab and is a co-principal investigator in Berkeley’s Center for Human-Compatible AI.
“At the core of artificial intelligence is our mathematical definition of what an AI agent (a robot) is. When we define a robot, we define states, actions, and rewards.” The goal of an AI is to get the highest cumulative reward.
We have been doing quite well with this definition. “But with increasing AI capability, the problems we want to tackle don’t fit neatly into this framework. We can no longer cut off a tiny piece of the world, put it in a box, and give it to a robot.”
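The states/actions/rewards definition can be made concrete with a toy, made-up example: a tabular world where a policy maps states to actions and the objective is the cumulative reward over an episode. The “robot vacuum” world below is purely illustrative.

```python
def run_episode(policy, transitions, rewards, start, steps=10):
    """Roll out a fixed policy in a tabular world, summing the rewards."""
    state, total = start, 0
    for _ in range(steps):
        action = policy[state]                # the policy picks an action
        total += rewards[(state, action)]     # collect the reward
        state = transitions[(state, action)]  # move to the next state
    return total

# Hypothetical world: from "charging" the robot can "undock" (no reward);
# once "cleaning", it earns a reward of 1 per step.
transitions = {("charging", "undock"): "cleaning",
               ("cleaning", "clean"): "cleaning"}
rewards = {("charging", "undock"): 0, ("cleaning", "clean"): 1}
policy = {"charging": "undock", "cleaning": "clean"}

total = run_episode(policy, transitions, rewards, "charging")  # -> 9
```

Dragan’s point is that this box is too small: in the real world the state includes people, whose behavior cannot be captured so neatly in reward and transition tables.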
“So to anticipate human actions, robots need to start understanding human decision making. And that doesn’t mean assuming that human behavior is perfectly optimal; that might be enough for a chess- or Go-playing robot, but in the real world, people’s decisions are less predictable than the optimal move in a board game.” Here I think she is implicitly referencing Judea Pearl and David Deutsch, who argue that this ‘understanding/predicting’ is not available/possible in current AI systems.
“Finally, just as robots need to anticipate what people will do next, people need to do the same with robots. This is why transparency is important. Not only will robots need good mental models of people but people will need good mental models of robots.”
“In general, humans have had a notoriously difficult time specifying exactly what they want, as exemplified by all those genie legends. An AI paradigm in which robots get some externally specified reward fails when that reward is not perfectly well thought out. It may incentivize the robot to behave in the wrong way and even resist our attempts to correct its behavior, as that would lead to a lower specified reward.”
What Anca argues for is AI that reasons about us. I think this is the right solution, but also the most difficult one: we are bad at it ourselves, and reasons/preferences differ between people. It will be a tough nut to crack.
Chris Anderson is an entrepreneur; former editor-in-chief of Wired; co-founder and CEO of 3DR; and author of The Long Tail, Free, and Makers.
Chris’s story starts with mosquitoes, which follow a gradient descent when searching for you: the stronger the smell, the more they move in that direction (an algorithm). He argues that almost everything around us is driven by gradient descent (hunger, sleepiness, etc.).
He talks more about local minima (finding a solution while a better one might lie over the next ‘hill’). One thing you would probably need is a (mental) map.
“We’re going to rock ourselves out of local minima and find deeper minima, maybe even global minima. And when we’re done, we may even have taught machines to seem as smart as a mosquito, forever descending the cosmic gradients to an ultimate goal, whatever that may be.”
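The “rocking out of local minima” idea can be sketched with plain gradient descent plus random restarts on a double-well function (the function, learning rate, and restart count are my own illustrative choices): a single descent gets stuck in whichever valley it starts in, while restarts usually find the deeper one.

```python
import random

def f(x):
    # Double-well landscape: a shallow minimum near x ~ 1.13
    # and a deeper one near x ~ -1.30.
    return x ** 4 - 3 * x ** 2 + x

def grad(x, h=1e-5):
    # Numerical derivative via central differences.
    return (f(x + h) - f(x - h)) / (2 * h)

def descend(x, lr=0.01, steps=500):
    # Plain gradient descent: always moves downhill, so it can get
    # trapped in the nearest valley.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def descend_with_restarts(n_starts=50, seed=0):
    # "Rock" out of local minima by restarting from random points
    # and keeping the lowest result found.
    rng = random.Random(seed)
    return min((descend(rng.uniform(-3, 3)) for _ in range(n_starts)), key=f)
```

Starting at x = 2.0, plain descent settles in the shallow valley, while the restart version almost always reaches the deeper minimum near x ≈ -1.30. A mosquito has only the local gradient; the (mental) map mentioned above is what lets you restart somewhere smarter.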
David Kaiser is Germeshausen Professor of the History of Science and professor of physics at MIT, and head of its Program in Science, Technology and Society. He is the author of How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival and American Physics and the Cold War Bubble (forthcoming).
“Wiener borrowed this insight when composing Human Use. If information was like entropy, then it could not be conserved—or contained.” One key idea here is that if something is known somewhere, you can’t stop others from learning it (only possibly delay it a bit). “(from Wiener) [T]he fate of information in the typically American world is to become something which can be bought or sold.”
(hmm I guess I didn’t find too many nuggets of information (a term there defined in a few ways) in this piece).
And from one very good review I’m going to copy another view:
Nobody knows
It has proven nigh impossible to predict where scientific progress or humanity is headed, even when developments – of any sort – were stable. The exercise is more futile now, given the pace of change with new technologies. With rising complexities rise the future potentialities. Almost anything, everything, and nothing predicted and predicated is possible at some time or other in the next century. The wide variety of views in the book has the brightest minds talking past each other, partly because the history and experience they cite are useless for projecting what could lie ahead. Differing meanings of the terms used, as explained in a couple of sections below, also contribute to the extent of disagreements, as has been commonplace amongst philosophers of any ilk for millennia.
Machines will surpass humanity
Most of the contributors seem to agree that there are hardly any human skills and faculties where our technological creations will remain inferior forever. None of the contributors resorts to discussions of the soul or a divine entity to justify our perpetual supremacy. Our ability to sense causality, to impute purpose, and our apparent consciousness are seen by a handful as what will keep humanity ahead, but none of these commentators expects any trait of human supremacy to last forever. Let’s use a bad analogy: if our natural procreation, our children, grow up to develop their own purposes, outgrow their parents in many skills, and at times develop the willingness to act against their creators, will machines surely never go down that path? The point behind this lousy analogy is that our silicon creations will keep growing exponentially for decades and centuries, if not millennia, to come. A few years ago, the pessimists used to cite machines’ inability to recognize a cat or a face through optical sensors as one reason humans would remain superior for a long time. Machines have surpassed humans at sound and face recognition in a few short years. They may walk or run better than us next, even if robots’ plodding appears clunky and laughable today (note: Boston Dynamics, already doing very well). And machines as a unit may also learn to ascribe purposes, or exhibit complexities that make our consciousness look like that of a cardboard box, in a few decades. It is difficult to pick a set of human aspects that will remain superior for the next hundred years.
Subpoint: Machines will have their own goals and purposes
It is likely that consciousness is nothing but an emergent quality from many neurons interacting with each other just the way fluidity is from water molecules or planetary forces are from rocks coming together. What we humans imply through words like beauty, art, goals, purpose are possible emergent qualities from the numerosity of the underlying components and their complex interactions or interrelations. If today’s machines can only code, crunch data or uncover hidden patterns but cannot define their own “ultimate” utility functions, the “ultimate” stage set by humans is being pushed back with the machines working out the rest on their own. It is not ridiculous to assume that what we deem as exotic human qualia – goals, consciousness, beauty, etc. – will also fall prey to the ever-growing machine abilities if they prove nothing but emergent qualities of complex computational techniques.
The pessimistic forecasts are far more compulsive reads
There is no reason AI/AGI/technology progress should make humanity useless, subservient, or extinct for centuries, even if it is a long-term inevitability. As discussed above, no one knows! That said, the cases of the optimists – i.e., those who mostly believe that the positives of the technology boom would far outweigh any attendant harmful impacts – appear lame compared to the pessimists’. Once again, the optimists do not have to be wrong, but the stage belongs to those with scary stories. Of the 25 views you read, the most frightening are by far the most compelling. The trend tells us about what gets our goat and stirs us to action. That said, the pessimists appear more right because almost all optimists base their case on dire forecasts that historically did not come true, rather than painting any upcoming utopia they have in mind. The optimists rest their case on grandmotherly adages like “this time is not different”, while the pessimists point to the horses who, based on a few thousand years of history, might have thought they would carry humans in transport forever but instead became showcase items. (mandatory CGP Grey video)
Terms without precise meanings and predictions that are too static
With the band of new philosophers and heavy thinkers in this compendium, there are dozens of commonly used terms, including AI, AGI, co-existence, etc., with no precise meanings or with multiple meanings. AI appears to be perpetually a technology of tomorrow, never mind that what we have today would likely have surpassed most scientists’ definition of AI a few decades ago. Given the way we use our smart devices, even a person from the late nineties would claim we already co-exist with our gadgets. The field does not need its Wittgenstein to prove that these thinkers are talking different languages; the technology world is moving far too quickly for the best thinkers to take decades to agree on the underlying meanings of terms. Readers have to distil the views themselves, keeping in mind the plethora of different meanings and time-frames used by the writers while talking about the same subject. (I find this a very good point: we are already living with AI in many forms, and AGI is, I believe, not something that will happen at moment X; it will be different skills/intelligences at different moments.) (It also makes me think of a flood rising higher and higher, with some skills sitting higher up on some mountains.)
Multiple dystopias
This reviewer can categorize the doomsayers in at least three different buckets:
a. What will we do? If machines do everything better, will humans be like dogs next, better off sitting pretty at home than trying to work on anything? If that’s the case, how will the machines/the rest of the working world bear the burden of a rising horde of the unemployed? How will this unemployed lot live life or find purpose?
b. Will we have any free will? As machines understand us faster and better than we can, and continuously act to change our behaviour, will we have any power to stand up against the big brother – be it a set of corporates, governments, or machines – converting us into its zombies? Will we be just like our stone-age forefathers or animals, for whom massive natural forces were unfathomable, except that for us the forces will be machine controls?
c. If machines/AGI change the world to make it more suitable for their existence, will humans go extinct? Will machines feel the need to euthanize our race for their purposes someday?
With rising concerns over privacy and security, most contributors’ AGI-dystopia worries currently focus on the second category. If economic cycles turn, the first-category pessimists may get more of a hearing, even though they are the ones most laughed off based on historical antecedents now. The third-category doomsayers will carry the sensationalist tag until it becomes too late, assuming that day is somewhere in our future.
View 1: think tanks will not work
Let’s say that humanity’s primary goal against AI is guaranteed survival and continued dominance: we want at least some of us to remain the ultimate overlords of this planet. This requires suppression of some AI developments, or at least close monitoring. Many groups have been formed globally with the right objectives in mind, but such think tanks are slow-moving entities with little power to make an immediate difference. It is likely that by the time some of their suggestions are enacted, the AI world will have already skirted the underlying issues, with many more issues of different varieties turning more critical. These groups play an important role in highlighting the problems at hand in an unbiased way, but they are unlikely to make a real difference on their own.
View 2: the best solution could be fighting iron with iron!
In a free-wheeling technocratic world, the best solutions will emerge from competing entities. It is likely that, despite the cries from those with extreme views, no humanity-level “kill switch” will come into existence for any AI. The more “the good” who follow the laws are suppressed in one place, the greater the powers of some “bad” in another. This topic is controversial and requires an extended essay of its own, so perhaps not for this review!
This year my theme is Connection. Just like in the second half of 2018, I’ve been keeping up my updates on the Timeline.
I haven’t thought too much about the theme itself for the last three months. I have been thinking about what to do in the future, and in a way still want to bring together the concepts I’ve learned across a wide range of topics.
Spero is one of the projects where I want to apply this. First in writing with some AI information, some storytelling experience, and then some more to learn about marketing it.
Here is my analysis of the goals and various updates:
Goal 1: Make this website a true personal knowledge hub
This quarter I’ve improved search on the Timeline. Next to that, I’ve been cutting down a bit on listening to podcasts; I found it too much non-just-in-time information. Now I try to listen to more books, more music, and to have more time without anything blaring into my ears.
Next quarter I hope to find some time for essays. And in some free moments, I would also like to improve the structure a small bit, but this is a very low priority.
Goal 2: Eat good meals that support my well-being 90% of the time
Food has been going very well. Together with Lotte, I’ve been eating very healthy meals. We don’t snack a lot and I keep the alcoholic drinks to 0-2 on weekdays.
I’m currently experimenting with intermittent fasting (IF), eating only between 11:00 and 19:00. It has been going very well and I’ve adjusted to not eating in the morning.
The trouble/difficulty is sports. I normally go to the gym in the morning, but I feel less energetic without some sugars running through my veins. So I’m testing to find the best time to exercise; one good slot (that I’m trying now) could be 12:00-14:00.
Next to that, I would like to know some more about the effects of some foods and have more structure/standard meals per week.
Goal 3: Keep on improving my house
The last quarter we’ve made quite a few improvements (and brought two households together), and I’ve made a bar for the balcony. There is a leak somewhere in the bathroom/pipes, so that is something the installer will look at this Friday.
Other than that, I have no immediate plans for the house. I may take the lead on double glazing (at the front), but we will see about that in a bit.
Goal 4: Achieve my fitness goals
The last few weeks I’ve been cutting down on calories and it’s going pretty well. I will see how long I can hold this pattern, and then move to a 13-week strength cycle (with enough food to help the muscles grow).
My maximum for the Snatch is now at about 50kg, but my max of 6 reps (x4 sets) is at around 40kg, so that is of course way too close. I will try and see which things I can focus on to become better under pressure/weight.
Goal 5: Write Spero
In contrast to the last quarter, I’ve written quite a few parts of Spero and will continue to do so next quarter. My goal is to have a first draft finished by then, and maybe also to have some friends critique it.
That is it for now. Next quarter I hope to share some good updates again.
Bird by Bird by Anne Lamott is an awesome guide to writing well. It doesn’t only touch on how to write (it doesn’t even really go into details like style) but focuses much of its energy on the practice and psychology of writing.
Write with emotion, use your own life (but do change the details), and write shitty first drafts. Improve, repeat.
Writing isn’t about becoming famous; to really enjoy it, try to enjoy the process itself.
From another review: “Bird by Bird is more of a pep talk/psychotherapy session for writers. Sitting home alone writing can be more than a little crazy-making, so it’s nice to have some reassurance that the craziness is normal, along with some tools for getting to the next day. ” I agree.
Good in combination with Writing That Works.
In Loonshots, Safi Bahcall takes us on an innovative journey. In the book, he argues for how we can stimulate loonshots (moonshots: innovative new products with a high chance of failure) at a personal and company level. He discusses the ingredients for loonshots, uses many examples (which can be a bit too detailed), and makes you want to start your own loonshot factory.
From my (read: Queal’s) perspective, I have mixed feelings about the need for and viability of loonshots. What if innovation and progress are made in small steps? Bahcall does mention this in the book, and I think he is also on board with the concept; he calls these franchises (making the next iteration/update of an original product). Whether the distinction (1 or 0) is a bit artificial, I will leave in the middle. Let’s just say the book focuses on one end: the loonshots.
Here are some ingredients/concepts from the book:
That being said, a very interesting book, but maybe not very applicable to me (at this moment).
The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie argues that correlation is not causation.
Three levels of intelligence:
Current-day AI is at level 1 (not even 2); it can’t posit a counterfactual. And it shows that level 1 is already very strong.
A limited Turing test is Pearl’s goal: e.g. give an AI a story and have it make inferences from parts of it.
Bayesian, priors, causality, billiard balls