
Thread: Retiring ideas

  1. #1
    Let sleeping tigers lie Khendraja'aro's Avatar
    Join Date
    Jan 2010
    Location
    In the forests of the night
    Posts
    6,239

    Default Retiring ideas

    http://www.theguardian.com/science/2...ement-edge-org

    What scientific idea is ready for retirement?
    Each year a forum for the world's most brilliant minds asks one question. This year's drew responses from such names as Richard Dawkins, Ian McEwan and Alan Alda. Here, edge.org founder John Brockman explains how the question came into being and we pick some of the best responses
    The Observer, Sunday 12 January 2014
    In her response to edge.org's annual question, Columbia University professor Azra Raza argues that testing drugs on mice is pointless. Photograph: Redmond Durrell/Alamy
    Edge.org was launched in 1996 as the online version of "the Reality Club", an informal gathering of intellectuals who met from 1981 to 1996 in Chinese restaurants, artist lofts, investment banking firms, ballrooms, museums, living rooms and elsewhere. Though the venue is now in cyberspace, the spirit of the Reality Club lives on in the lively back-and-forth discussions on the hot-button ideas driving the discussion today.

    The online salon at edge.org is a living document of millions of words charting the Edge conversation over the past 15 years. It is available, gratis, to the general public.

    As the late artist James Lee Byars and I once wrote: "To accomplish the extraordinary, you must seek extraordinary people." At the centre of every Edge project are remarkable people and remarkable minds – scientists, artists, philosophers, technologists and entrepreneurs.

    Through the years, edge.org has had a simple criterion for choosing contributors. We look for people whose creative work has expanded our notion of who and what we are. A few are bestselling authors or are famous in the mass culture. Most are not. Rather, we encourage work on the cutting edge of the culture, and the investigation of ideas that have not been generally exposed. We are interested in "thinking smart"; we are not interested in received "wisdom".

    In the words of the novelist Ian McEwan, edge.org is "open-minded, free-ranging, intellectually playful… an unadorned pleasure in curiosity, a collective expression of wonder at the living and inanimate world… an ongoing and thrilling colloquium."

    At the end of the year in 1999, for the first anniversary edition of Edge, I asked a number of thinkers to use the interrogative. I asked "the most subtle sensibilities in the world what question they are asking themselves". We've been doing it annually ever since.

    It's not easy coming up with a question. James Lee, whose 1971 conceptual art piece The World Question Centre inspired the annual Edge question, used to say: "I can answer the question, but am I bright enough to ask it?" We are looking for questions that inspire answers we can't possibly predict. My goal is to provoke people into thinking thoughts that they normally might not have.

    We pay a lot of attention to framing the question and soliciting early responses from individuals who can set a high bar. This is critical. These responses seed the site and challenge and encourage the wider group to think in surprising ways.

    The online publication of the annual question occurs in mid-January, and in recent years it is followed by a printed book. Last year we worried about worrying. This year's question comes from HeadCon 13, a two-day Edge seminar that took place in September last year. At one point, Yale psychologist Laurie Santos mentioned to the group that she was interested in why there was no mechanism in social science for retiring ideas in order to make room for new initiatives.

    A lively discussion followed and I quickly picked up on it as an indication that Santos was on to a possible Edge question. After two weeks of often intense conversations, several Edgies expressed concern that the responses would go negative and that people would use it as an opportunity to trash their rivals. Others pointed out that every year, no matter what question is asked, people try to do this in any case. We decided to go with it after one Edgie commented: "Science is argument, not advertising."

    Thus I am pleased to present the Edge question 2014, asked by Laurie Santos.

    MOUSE MODELS

    Azra Raza
    Professor of medicine and director of the MDS Centre, Columbia University, New York

    An obvious truth that is either being ignored or going unaddressed in cancer research is that mouse models do not mimic human disease well and are essentially worthless for drug development. We cured acute leukaemia in mice in 1977 with drugs that we are still using in exactly the same dose and duration today in humans with dreadful results. Imagine the artificiality of taking human tumour cells, growing them in lab dishes, then transferring them to mice whose immune systems have been compromised so they cannot reject the implanted tumours, and then exposing these "xenografts" to drugs whose killing efficiency and toxicity profiles will then be applied to treat human cancers. The pitfalls of such an entirely synthesized non-natural model system have also plagued other disciplines.

    A recent scientific paper showed that all 150 drugs tested at the cost of billions of dollars in human trials of sepsis failed because the drugs had been developed using mice. Unfortunately, what looks like sepsis in mice turned out to be very different than what sepsis is in humans. Coverage of this study by Gina Kolata in the New York Times incited a heated response from within the biomedical research community.

    One blogger said: "There is no basis for leveraging a niche piece of research to imply that mice are useless models for all human diseases." In an article for the Jackson Laboratory, three leading physician scientists concluded: "The key is to construct the appropriate mouse models and design the experimental conditions that mirror the human situation."

    The problem is there are no appropriate mouse models that can mimic the human situation. So why is the cancer research community continuing to be dominated by the dysfunctional tradition of employing mouse models to test hypotheses for development of new drugs?

    Robert Weinberg of the Whitehead Institute at MIT [Massachusetts Institute of Technology] has provided the best answer. He was quoted in the press, noting: "[There are] two reasons. First, there's no other model with which to replace that poor mouse. Second, the FDA [the US Food and Drug Administration] has created inertia because it continues to recognise these models as the gold standard for predicting the utility of drugs."

    There is a third reason related more to the frailties of human nature. Too many eminent laboratories and illustrious researchers have devoted entire lives to studying malignant diseases in mouse models and they are the ones reviewing one another's grants and deciding where the NIH money [US government medical research funding] gets spent. They are not prepared to accept that mouse models are basically valueless for most of cancer therapeutics.

    In the final analysis then, one of the main reasons we continue to stick to this archaic ethos is to obtain funding. Here is one example.

    I decided to study a bone marrow malignant disease called myelodysplastic syndromes (MDS), which frequently evolves into acute leukaemia, back in the early 1980s. One decision I made very early on was to concentrate my research on freshly obtained human cells and not to rely on mice or petri dishes alone. In the past three decades, I have collected more than 50,000 bone marrow biopsies, blood, normal control buccal smear cells [cells taken from inside the cheek], serum and plasma samples in a well-annotated tissue repository backed by a computerised bank of clinical, pathologic and morphologic data. By using these samples, we have identified novel genes involved in causing certain types of MDS, as well as sets of genes related to survival, natural history of the disease and response to therapy. But when I used bone marrow cells from treated MDS patients to develop a genomic expression profile which was startlingly predictive of response and applied for an NIH grant to validate the signature, the main criticism was that before confirming it through a prospective trial in humans, I should first reproduce it in mice!

    The time is here to let go of the mouse models at least as surrogates for bringing drugs to the bedside. Remember what Mark Twain said. "What gets us into trouble is not what we don't know; it's what we know for sure that just ain't so."

    THERE CAN BE NO SCIENCE OF ART

    Jonathan Gottschall
    US academic and author who specialises in literature and evolution and teaches at Washington & Jefferson College, Pennsylvania

    Fifteen thousand years ago in France, a sculptor swam and slithered almost a kilometre down into a mountain cave. Using clay, the artist shaped a big bull rearing up to mount a cow, and then left his creation in the bowels of the earth. The two bison of the Tuc D'Audoubert caves sat undisturbed for many thousands of years until they were rediscovered by spelunking boys [cavers] in 1912. The discovery of the clay bison was one of many shocking 20th-century discoveries of sophisticated cave art stretching back tens of thousands of years. The discoveries overturned our sense of what our caveman ancestors were like. They were not furry, grunting troglodytes. They had artistic souls. They showed us that humans are – by nature, not just by culture – art-making, art-consuming, art-addicted apes.

    But why? Why did the sculptor burrow into the earth, make art, and leave it there in the dark? And why does art exist in the first place? Scholars have spun a lot of stories in answer to such questions, but the truth is that we really don't know. And here's one reason why: science is lying down on the job.

    A long time ago someone proclaimed that art could not be studied scientifically, and for some reason almost everyone believed it. The humanities and sciences constituted, as Stephen Jay Gould might have put it, separate, non-overlapping magisteria – meaning that the tools of the one are radically unsuited to the other.

    The prehistoric bison carving at the Tuc D’Audoubert caves in France: ‘Our caveman ancestors had artistic souls.’
    Science has mostly bought into this. How else can we explain its neglect of the arts? People live in art. We read stories, and watch them on TV, and listen to them in song. We make paintings and gaze at them on walls. We beautify our homes like bowerbirds adorning nests. We demand beauty in the products we buy, which explains the gleam of our automobiles and the sleek modernist aesthetic of our iPhones. We make art out of our own bodies: sculpting them through diet and exercise; festooning them with jewellery and colourful garments; using our skins as living canvas for the display of tattoos. And so it is the world over. As the late Denis Dutton argued in The Art Instinct, underneath the cultural variations, "all human beings have essentially the same art".

    Our curious love affair with art sets our species apart as much as our sapience or our language or our use of tools. And yet we understand so little about art. We don't know why art exists in the first place. We don't know why we crave beauty. We don't know how art produces its effects in our brains – why one arrangement of sound or colour pleases while another cloys. We don't know very much about the precursors of art in other species, and we don't know when humans became creatures of art. (According to one influential theory, art arrived 50,000 years ago with a kind of creative big bang. If that's true, how did that happen?) We don't even have a good definition, in truth, of what art is. In short, there is nothing so central to human life that is so incompletely understood.

    Recent years have seen more use of scientific tools and methods in humanities subjects. Neuroscientists can show us what's happening in the brain when we enjoy a song or study a painting. Psychologists are studying the ways novels and TV shows shape our politics and our morality. Evolutionary psychologists and literary scholars are teaming up to explore narrative's Darwinian origins. And other literary scholars are developing a "digital humanities" using algorithms to extract big data from digitised literature. But scientific work in the humanities has mainly been scattered, preliminary, and desultory. It does not constitute a research programme.

    If we want better answers to fundamental questions about art, science must jump in the game with both feet. Going it alone, humanities scholars can tell intriguing stories about the origins and significance of art, but they don't have the tools to patiently winnow the field of competing ideas. That's what the scientific method is for: separating the stories that are more accurate, from the stories that are less accurate. But make no mistake, a strong science of art will require both the thick, granular expertise of humanities scholars and the clever hypothesis testing of scientists. I'm not calling for a scientific takeover of the arts. I'm calling for a partnership.

    This partnership faces great obstacles. There's the unexamined assumption that something in art makes it science-proof. There's a widespread, if usually unspoken, belief that art is just a frill in human life – relatively unimportant compared with the weighty stuff of science. And there's the weird idea that science necessarily destroys the beauty it seeks to explain (as though a learned astronomer really could dull the star shine). But the Delphic admonition "know thyself" still rings out as the great prime directive of intellectual inquiry, and there will always be a gaping hole in human self-knowledge until we develop a science of art.

    ADDICTION

    Helen Fisher
    Biological anthropologist at Rutgers University, New Jersey and author of Why Him? Why Her? How to Find and Keep Lasting Love

    "If an idea is not absurd, there is no hope for it," Einstein reportedly said. I would like to broaden the definition of addiction and retire the scientific idea that all addictions are pathological and harmful. Since the beginning of formal diagnostics more than 50 years ago, the compulsive pursuit of gambling, food, and sex (known as non-substance rewards) have not been regarded as addictions; only abuse of alcohol, opioids, cocaine, amphetamines, cannabis, heroin and nicotine have been formally regarded as addictions. This categorisation rests largely on the fact that substances activate basic "reward pathways" in the brain associated with craving and obsession, and produce pathological behaviours. Psychiatrists work within this world of psychopathology – that which is abnormal and makes you ill.

    As an anthropologist, I find this view too limited. Scientists have now shown that food, sex and gambling compulsions employ many of the same brain pathways activated by substance abuse. Indeed, the 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders (the DSM) has finally acknowledged that at least one form of non-substance abuse can be regarded as an addiction: gambling. The abuse of sex and food was not included. Neither was romantic love. I shall propose that love addiction is just as real as any other addiction, in terms of its behaviour patterns and brain mechanisms. Moreover, it's often a positive addiction.

    Scientists and laymen have long regarded romantic love as part of the supernatural, or as a social invention of the troubadours in 12th-century France. Evidence does not support these notions. Love songs, poems, stories, operas, ballets, novels, myths and legends, love magic, love charms, love suicides and homicides: evidence of romantic love has now been found in more than 200 societies ranging over thousands of years. Around the world men and women pine for love, live for love, kill for love and die for love. Human romantic love, also known as passionate love or "being in love" is regularly regarded as a human universal.

    Moreover, love-besotted men and women show all of the basic symptoms of addiction. Foremost, the lover is stiletto-focused on his/her drug of choice: the love object. They think obsessively about "him" or "her" (intrusive thinking), and often compulsively call, write, or appear, to stay in touch. Paramount to this experience is intense motivation to win their sweetheart, not unlike the substance abuser fixated on his/her drug. Impassioned lovers also distort reality, change their priorities and daily habits to accommodate the beloved, experience personality changes (affect disturbance), and sometimes do inappropriate or risky things to impress this special other. Many are willing to sacrifice, even die for "him" or "her". The lover craves emotional and physical union with their beloved too (dependence). And like the addict who suffers when they can't get their drug, the lover suffers when apart from the beloved (separation anxiety). Adversity and social barriers even heighten this longing (frustration attraction).

    In fact, besotted lovers express all four of the basic traits of addiction: craving; tolerance; withdrawal; and relapse. They feel a "rush" of exhilaration when with their beloved (intoxication). As their tolerance builds, the lover seeks to interact with the beloved more and more (intensification). If the love object breaks off the relationship, the lover experiences signs of drug withdrawal, including protest, crying spells, lethargy, anxiety, insomnia or hypersomnia, loss of appetite or binge eating, irritability and loneliness. Lovers, like addicts, also often go to extremes, sometimes doing degrading or physically dangerous things to win back the beloved. And lovers relapse the way drug addicts do: long after the relationship is over, events, people, places, songs or other external cues associated with their abandoning sweetheart can trigger memories and renewed craving.

    Of the many indications that romantic love is an addiction, however, perhaps none is more convincing than the growing data from neuroscience. Using brain scanning (functional magnetic resonance imaging, or fMRI), several scientists have now shown that feelings of intense romantic love engage regions of the brain's "reward system," specifically dopamine pathways associated with energy, focus, motivation, ecstasy, despair and craving – including primary regions associated with substance (and non-substance) addictions. In fact, our group has found activity in the nucleus accumbens – the core brain factory associated with all addictions – in our rejected lovers. Moreover, some of our newest (unpublished) results suggest correlations between activities of the nucleus accumbens and feelings of romantic passion among lovers who were wildly, happily in love.

    Nobel laureate Eric Kandel recently said: "Brain studies will ultimately tell us what it is like to be human." Knowing what we now know about the brain, my brain-scanning partner, Lucy Brown, has suggested that romantic love is a natural addiction; and I have maintained that this natural addiction evolved from mammalian antecedents some 4.4m years ago among our first hominid ancestors, in conjunction with the evolution of (serial, social) monogamy – a hallmark of humankind. Its purpose: to motivate our forebears to focus their mating time and metabolic energy on a single partner at a time, thus initiating the formation of a pair-bond to rear their young (at least through infancy) together as a team. The sooner we embrace what brain science is telling us – and use this information to upgrade the concept of addiction – the better we will understand ourselves and all the billions of others on this planet who revel in the ecstasy and struggle with the sorrow of this profoundly powerful, natural, often positive addiction: romantic love.

    THE LINEAR NO-THRESHOLD RADIATION DOSE HYPOTHESIS

    Stewart Brand
    Author and founder of The Whole Earth Catalog; co-founder of The Well and The Long Now Foundation

    In his 1976 book, A Scientist at the White House, George Kistiakowsky, President Eisenhower's science adviser, told us what he wrote in his diary in 1960 on being exposed to the idea by the Federal Radiation Council:

    It is a rather appalling document that takes 140 pages to state the simple fact that, since we know virtually nothing about the dangers of low-intensity radiation, we might as well agree that the average population dose from manmade radiation should be no greater than that which the population already receives from natural causes; and that any individual in that population shouldn't be exposed to more than three times that amount, the latter figure being, of course, totally arbitrary.

    Later in the book, Kistiakowsky, who was a nuclear expert and veteran of the Manhattan Project, wrote: "… a linear relation between dose and effect… I still believe is entirely unnecessary for the definition of the current radiation guidelines, since they are pulled out of thin air without any knowledge on which to base them."

    Sixty-three years of research on radiation effects have gone by, and Kistiakowsky's critique still holds. The linear no-threshold (LNT) radiation dose hypothesis, which surreally influences every regulation and public fear about nuclear power, is based on no knowledge whatever.

    Fukushima Daiichi Nuclear Power Plant: panic-mongers said Fukushima would kill thousands, but no one has died. Photograph: KeystoneUSA-ZUMA/Rex Features
    At stake are the hundreds of billions spent on meaningless levels of "safety" around nuclear power plants and waste storage, the projected costs of next-generation nuclear plant designs to reduce greenhouse gases worldwide, and the extremely harmful episodes of public panic that accompany rare radiation-release events such as Fukushima and Chernobyl. (No birth defects whatever were caused by Chernobyl, but fear of them led to 100,000 panic abortions in the Soviet Union and Europe. What people remember about Fukushima is that nuclear opponents predicted that hundreds or thousands would die or become ill from the radiation. In fact nobody died, nobody became ill, and nobody is expected to.)

    The "linear" part of the LNT is true and well documented. Based on long-term studies of survivors of the atomic bombs in Japan and of nuclear industry workers, the incidence of eventual cancer increases with increasing exposure to radiation at levels above 100 millisieverts per year. The effect is linear. Below 100 millisieverts per year, however, no increased cancer incidence has been detected, either because it doesn't exist or because the numbers are so low that any signal gets lost in the epidemiological noise.

    We all die. Nearly half of us die of cancer (38% of females, 45% of males). If the "no-threshold" part of the LNT is taken seriously, and an exposed population experiences as much as a 0.5% increase in cancer risk, it simply cannot be detected. The LNT operates on the unprovable assumption that the cancer deaths exist, even if the increase is too small to detect, and that therefore "no level of radiation is safe" and every extra millisievert is a public health hazard.
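
    To get a feel for why an excess that small disappears into the noise, here is a rough back-of-envelope power calculation (my own illustration, not from the article; the ~40% baseline rate and the standard normal-approximation formula for comparing two proportions are assumptions):

    # Back-of-envelope: how many people would an epidemiological study need in order
    # to detect a 0.5 percentage-point excess in cancer incidence?
    # Assumptions (illustrative only): ~40% baseline lifetime incidence, two-sided
    # 5% significance level, 80% power, normal approximation for two proportions.

    z_alpha = 1.96   # two-sided 95% significance level
    z_beta = 0.84    # 80% power

    p_control = 0.400   # assumed baseline lifetime cancer incidence
    p_exposed = 0.405   # assumed incidence with a 0.5 percentage-point excess
    delta = p_exposed - p_control

    variance_sum = p_control * (1 - p_control) + p_exposed * (1 - p_exposed)
    n_per_group = (z_alpha + z_beta) ** 2 * variance_sum / delta ** 2

    print(f"participants needed per group: {n_per_group:,.0f}")
    # -> roughly 150,000 people per group, before even accounting for confounders
    #    such as smoking, diet and regional differences, which is why so small an
    #    excess gets lost in the epidemiological noise.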

    Some evidence against the "no-threshold" hypothesis draws on studies of background radiation. In the US we are all exposed to 6.2 millisieverts a year on average, but it varies regionally. New England has lower background radiation, Colorado is much higher, yet cancer rates in New England are higher than in Colorado – an inverse effect. Some places in the world, such as Ramsar in Iran, have a tenfold higher background radiation, but no higher cancer rates have been discovered there. These results suggest that there is indeed a threshold below which radiation is not harmful.

    Furthermore, recent research at the cell level shows a number of mechanisms for repair of damaged DNA and for ejection of damaged cells up to significant radiation levels. This is not surprising given that life evolved amid high radiation and other threats to DNA. The DNA repair mechanisms that have existed in yeast for 800m years are also present in humans.

    The actual threat of low-dose radiation to humans is so low that the LNT hypothesis can neither be proven true nor proven false, yet it continues to dominate and misguide policies concerning radiation exposure, making them grotesquely conservative and expensive. Once the LNT is explicitly discarded, we can move on to regulations that reflect only discernible, measurable medical effects, and that respond mainly to the much larger considerations of whole-system benefits and harms.

    The most crucial decisions about nuclear power are at the category level of world urban prosperity and climate change, not imaginary cancers per millisievert.

    HUMANIQUENESS

    Irene Pepperberg
    Research associate and lecturer at Harvard specialising in animal thought processes, and author of Alex & Me

    Yes, humans do some things that other species do not – we are indeed the only species to send probes to outer space to find other forms of life – but the converse is certainly equally true. Other species do things humans find impossible, and many non-human species are indeed unique in their abilities. No human can detect temperature changes of a few hundredths of a degree as can some pit vipers, nor can humans best a dog at following faint scents. Dolphins hear at ranges impossible for humans and, along with bats, can use natural sonar. Bees and many birds see in the ultraviolet, and many birds migrate thousands of miles yearly under their own power, with what seems to be some kind of internal GPS. Humans, of course, can and will invent machines to accomplish such feats of nature, unlike our non-human brethren – but non-humans had these abilities first. Clearly I don't contest data that show that humans are unique in many ways, and I certainly favour studying the similarities and differences across species, but think it is time to retire the notion that human uniqueness is a pinnacle of some sort, denied in any shape, way, or form to other creatures.

    Another reason for retiring the idea of humaniqueness as the ideal endpoint of some evolutionary process is, of course, that our criteria for uniqueness inevitably need redefinition. Remember when "man, the tool-user" was our definition? At least until along came species like cactus-spike-using Galapagos finches, sponge-wielding dolphins, and now even crocodiles that use sticks to lure birds to their demise. Then it was "man, the tool-maker"… but that fell out of favour when such behaviour was seen in a number of other creatures, including species so evolutionarily distant from humans as New Caledonian crows. Learning through imitation? Almost all songbirds do it to some extent vocally, and minor evidence exists for physical aspects in parrots and apes. I realise that current research does demonstrate that apes, for example, are lacking in certain aspects of collaborative abilities seen in humans, but have to wonder if different experimental protocols might provide different data in the future.

    The comparative study of behaviour needs to be expanded and supported, but not merely to find more data enshrining humans as "special". Finding out what makes us different from other species is a worthy enterprise, but it can also lead us to find out what is "special" about other beings, what incredible things we may need to learn from them. So, for example, we need more studies to determine the extent to which non-humans show empathy or exhibit various aspects of "theory of mind", to learn what is needed for survival in both their natural environment and what they can acquire when enculturated into ours. Maybe they have other means of accomplishing the social networking we take as at least a partial requisite for humanness. We need to find out what aspects of human communication skills they can acquire – but we also can't lose sight of the need to uncover the complexities that exist in their own communication systems.

    Nota Bene lest my point be misunderstood: my argument is a different one from that of bestowing personhood on various non-human species, and is separate from other arguments for animal rights and even animal welfare – although I can see the possible implications of what I am proposing. All told, it seems to me that it is time to continue to study all the complexities of behaviour in all species, human and non-human, to concentrate on similarities as well as differences, and – in many cases – to appreciate the inspiration that our non-human compatriots provide in order to develop tools and skills that enhance our own abilities, rather than simply to consign non-humans to a second-class status.

    THINGS ARE EITHER TRUE OR FALSE

    Alan Alda
    American actor, writer, director and author of Things I Overheard While Talking to Myself

    The idea that things are either true or false should possibly take a rest. I'm not a scientist, just a lover of science, so I might be speaking out of turn – but like all lovers I think about my beloved a lot. I want her to be free and productive, and not misunderstood.

    For me, the trouble with truth is that not only is the notion of eternal, universal truth highly questionable, but simple, local truths are subject to refinement as well. Up is up and down is down, of course. Except under special circumstances. Is the north pole up and the south pole down? Is someone standing at one of the poles right-side up or upside-down? Kind of depends on your perspective.

    When I studied how to think in school I was taught that the first rule of logic was that a thing cannot both be and not be at the same time and in the same respect. That last note, "in the same respect," says a lot. As soon as you change the frame of reference, you've changed the truthiness of a once immutable fact.

    Death seems pretty definite. The body is just a lump. Life is gone. But if you step back a bit, the body is actually in a transitional phase while it slowly turns into compost – capable of living in another way.

    This is not to say that nothing is true or that everything is possible – just that it might not be so helpful for things to be known as true for all time, without a disclaimer. At the moment, the way it's presented to us, astrology is highly unlikely to be true. But if it turns out that organic stuff once bounced off Mars and hit Earth with a dose of life, we might have to revise some statements that planets do not influence our lives here on Earth.

    I wonder, and this is just a modest proposal, if scientific truth should be identified in a way that acknowledges that it's something we know and understand for now – and in a certain way. One of the major ways the public comes to mistrust science is when they feel that scientists can't make up their minds. One says red wine is good for you, and another says even in small amounts it can be harmful. In turn, some people think science is just another belief system.

    Scientists and science writers make a real effort to deal with this all the time. The phrase "current research suggests" warns us that it's not a fact yet. But, from time to time the full-blown factualness of something is declared, even though further work could place it within a new frame of reference. And then the public might wonder if the scientists are just arguing for their pet ideas.

    Facts, it seems to me, are workable units, useful in a given frame or context. They should be as exact and irrefutable as possible, tested by experiment to the fullest extent. When the frame changes, they don't need to be discarded as untrue, but respected as still useful within their domain. I think most people who work with facts accept this, but I don't think the public fully gets it.

    That's why I hope for more wariness about implying we know something to be true or false for all time and for everywhere in the cosmos.

    Especially if we happen to be upside down when we say it.

    BEWARE OF ARROGANCE: RETIRE NOTHING!

    Ian McEwan
    Novelist; author of many books including Sweet Tooth; Solar; On Chesil Beach and Amsterdam (winner of the Man Booker prize for fiction)

    A great and rich scientific tradition should hang on to everything it has. Truth is not the only measure. There are ways of being wrong that help others to be right. Some are wrong, but brilliantly so. Some are wrong but contribute to method. Some are wrong but help found a discipline. Aristotle ranged over the whole of human knowledge and was wrong about much. But his invention of zoology alone was priceless. Would you cast him aside? You never know when you might need an old idea. It could rise again one day to enhance a perspective the present cannot imagine. It would not be available to us if it were fully retired.

    Aristotle: even his mistakes are worth preserving. Photograph: Popperfoto/Getty Images
    Even Darwin in the early 20th century experienced some neglect, until the modern [evolutionary] synthesis. The Expression of the Emotions in Man and Animals took longer to be current. William James also languished, as did psychology, once consciousness as a subject was retired from it. Look at the revived fortunes of Thomas Bayes and Adam Smith (especially The Theory of Moral Sentiments). We may need to take another look at the long-maligned Descartes. Epigenetics might even restore the reputation of Lamarck. Freud may yet have something to tell us about the unconscious.

    Every last serious and systematic speculation about the world deserves to be preserved. We need to remember how we got to where we are, and we'd like the future not to retire us. Science should look to literature and maintain a vibrant living history as a monument to ingenuity and persistence. We won't retire Shakespeare. Nor should we Bacon.

    ESSENTIALISM

    Richard Dawkins
    Evolutionary biologist; professor of the public understanding of science, Oxford; author of The Magic of Reality

    Essentialism – what I've called "the tyranny of the discontinuous mind" – stems from Plato, with his characteristically Greek geometer's view of things. For Plato, a circle or a right-angled triangle were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things and Ernst Mayr blamed this for humanity's late discovery of evolution – as late as the 19th century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won't occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is "prior to" the existence of rabbits (whatever "prior to" might mean, and that's a nonsense in itself) evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.

    Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It's essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and "ring species" tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing "gaps" as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is "really" Australopithecus or Homo is like quarrelling over whether George should be called "tall". He's 5ft 10, doesn't that tell you what you need to know?

    Essentialism rears its ugly head in racial terminology. The majority of "African Americans" are of mixed race. Yet so entrenched is our essentialist mindset that American official forms require everyone to tick one race/ethnicity box or another: no room for intermediates. A different but also pernicious point is that a person will be called "African American" even if only, say, one of his eight great grandparents was of African descent. As Lionel Tiger put it to me, we have here a reprehensible "contamination metaphor". But I mainly want to call attention to our society's essentialist determination to dragoon a person into one discrete category or another. We seem ill-equipped to deal mentally with a continuous spectrum of intermediates. We are still infected with the plague of Plato's essentialism.

    Moral controversies such as those over abortion and euthanasia are riddled with the same infection. At what point is a brain-dead accident-victim defined as "dead"? At what moment during development does an embryo become a "person"? Only a mind infected with essentialism would ask such questions. An embryo develops gradually from single-celled zygote to newborn baby, and there's no one instant when "personhood" should be deemed to have arrived. The world is divided into those who get this truth and those who wail: "But there has to be some moment when the foetus becomes human." No, there really doesn't, any more than there has to be a day when a middle-aged person becomes old. It would be better – though still not ideal – to say the embryo goes through stages of being a quarter human, half human, three quarters human… The essentialist mind will recoil from such language and accuse me of all manner of horrors for denying the essence of humanness.

    Evolution too, like embryonic development, is gradual. Every one of our ancestors, back to the common root we share with chimpanzees and beyond, belonged to the same species as its own parents and its own children. And likewise for the ancestors of a chimpanzee, back to the same shared progenitor. We are linked to modern chimpanzees by a V-shaped chain of individuals who once lived and breathed and reproduced, each link in the chain being a member of the same species as its neighbours in the chain, no matter that taxonomists insist on dividing them at convenient points and thrusting discontinuous labels upon them. If all the intermediates, down both forks of the V from the shared ancestor, had happened to survive, moralists would have to abandon their essentialist, "speciesist" habit of placing Homo sapiens on a sacred plinth, infinitely separate from all other species. Abortion would no more be "murder" than killing a chimpanzee – or, by extension, any animal. Indeed an early-stage human embryo, with no nervous system and presumably lacking pain and fear, might defensibly be afforded less moral protection than an adult pig, which is clearly well equipped to suffer. Our essentialist urge toward rigid definitions of "human" (in debates over abortion and animal rights) and "alive" (in debates over euthanasia and end-of-life decisions) makes no sense in the light of evolution and other gradualistic phenomena.

    We define a poverty "line": you are either "above" or "below" it. But poverty is a continuum. Why not say, in dollar equivalents, how poor you actually are? The preposterous electoral college system in US presidential elections is another, and especially grievous, manifestation of essentialist thinking. Florida must go either wholly Republican or wholly Democrat – all 25 electoral college votes – even though the popular vote is a dead heat. But states should not be seen as essentially red or blue: they are mixtures in various proportions.

    You can surely think of many other examples of "the dead hand of Plato" – essentialism. It is scientifically confused and morally pernicious. It needs to be retired.

    INFINITY

    Max Tegmark
    Physicist, researcher, precision cosmology; scientific director of the Foundational Questions Institute; author of Our Mathematical Universe

    I was seduced by infinity at an early age. Cantor's diagonality proof that some infinities are bigger than others mesmerised me, and his infinite hierarchy of infinities blew my mind. The assumption that something truly infinite exists in nature underlies every physics course I've ever taught at MIT and indeed all of modern physics. But it's an untested assumption, which raises the question: is it actually true?

    There are in fact two separate assumptions: "infinitely big" and "infinitely small". By infinitely big, I mean the idea that space can have infinite volume, that time can continue for ever, and that there can be infinitely many physical objects. By infinitely small, I mean the continuum: the idea that even a litre of space contains an infinite number of points, that space can be stretched out indefinitely without anything bad happening, and that there are quantities in nature that can vary continuously. The two are closely related because inflation, the most popular explanation of our big bang, can create an infinite volume by stretching continuous space indefinitely.

    A galaxy photographed by the Hubble Space Telescope: 'We don’t actually need the infinite to accurately describe the formation of galaxies.' Photograph: Scott Camazine/Alamy
    The theory of inflation has been spectacularly successful, and is a leading contender for a Nobel prize. It explained how a subatomic speck of matter transformed into a massive big bang, creating a huge, flat and uniform universe with tiny density fluctuations that eventually grew into today's galaxies and cosmic large-scale structure, all in beautiful agreement with precision measurements from experiments such as the Planck satellite. But by generically predicting that space isn't just big, but truly infinite, inflation has also brought about the so-called measure problem, which I view as the greatest crisis facing modern physics. Physics is all about predicting the future from the past, but inflation seems to sabotage this: when we try to predict the probability that something particular will happen, inflation always gives the same useless answer: infinity divided by infinity. The problem is that whatever experiment you make, inflation predicts that there will be infinitely many copies of you far away in our infinite space, obtaining each physically possible outcome, and despite years of tooth-grinding in the cosmology community, no consensus has emerged on how to extract sensible answers from these infinities. So strictly speaking, we physicists are no longer able to predict anything at all!
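
    A much humbler toy example (my own analogy, not the cosmological calculation) shows why "infinity divided by infinity" is meaningless until you fix a measure: the "fraction" of an infinite collection depends entirely on the order in which you count it.

    # Toy illustration of the measure problem: what fraction of the natural numbers
    # are even? Both generators below eventually enumerate every natural number,
    # yet the limiting fraction of evens depends on the counting order.

    def running_fraction(sequence, predicate, n_terms):
        """Fraction of the first n_terms of `sequence` satisfying `predicate`."""
        hits = 0
        for _, x in zip(range(n_terms), sequence):
            hits += predicate(x)
        return hits / n_terms

    def natural_order():
        n = 1
        while True:
            yield n
            n += 1

    def two_odds_then_one_even():
        odd, even = 1, 2
        while True:
            yield odd
            odd += 2
            yield odd
            odd += 2
            yield even
            even += 2

    def is_even(x):
        return x % 2 == 0

    print(running_fraction(natural_order(), is_even, 1_000_000))           # ~0.5
    print(running_fraction(two_odds_then_one_even(), is_even, 1_000_000))  # ~0.333

    Eternal inflation poses the same difficulty on an infinite spatial volume: without an agreed prescription for how to count the infinitely many copies, the ratio of favourable to total outcomes is simply not defined.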

    This means that today's best theories need a major shakeup, by retiring an incorrect assumption. Which one? Here's my prime suspect: infinity.

    A rubber band can't be stretched indefinitely, because although it seems smooth and continuous, that's merely a convenient approximation: it's really made of atoms, and if you stretch it too much, it snaps. If we similarly retire the idea that space itself is an infinitely stretchy continuum, then a big snap of sorts stops inflation from producing an infinitely big space, and the measure problem goes away. Without the infinitely small, inflation can't make the infinitely big, so you get rid of both infinities in one fell swoop – together with many other problems plaguing modern physics, such as infinitely dense black hole singularities and infinities popping up when we try to quantize gravity.

    In the past, many venerable mathematicians expressed scepticism towards infinity and the continuum. The legendary Carl Friedrich Gauss denied that anything infinite really existed, saying "infinity is merely a way of speaking" and "I protest against the use of infinite magnitude as something completed, which is never permissible in mathematics". In the past century, however, infinity has become mathematically mainstream, and most physicists and mathematicians have become so enamoured of infinity that they rarely question it. Why? Basically, because infinity is an extremely convenient approximation for which we haven't discovered convenient alternatives. Consider, for example, the air in front of you. Keeping track of the positions and speeds of octillions of atoms would be hopelessly complicated. But if you ignore the fact that air is made of atoms and instead approximate it as a continuum, a smooth substance that has a density, pressure and velocity at each point, you find that this idealised air obeys a beautifully simple equation that explains almost everything we care about: how to build airplanes, how we hear them with soundwaves, how to make weather forecasts, etc. Yet despite all that convenience, air of course isn't truly continuous. I think it's the same way for space, time and all the other building blocks of our physical world.

    Let's face it: despite their seductive allure, we have no direct observational evidence for either the infinitely big or the infinitely small. We speak of infinite volumes with infinitely many planets, but our observable universe contains only about 10 to the power of 89 objects (mostly photons). If space is a true continuum, then to describe even something as simple as the distance between two points requires an infinite amount of information, specified by a number with infinitely many decimal places. In practice, we physicists have never managed to measure anything to more than about 17 decimal places. Yet real numbers with their infinitely many decimals have infested almost every nook and cranny of physics, from the strengths of electromagnetic fields to the wave functions of quantum mechanics: we describe even a single bit of quantum information (qubit) using two real numbers involving infinitely many decimals.
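
    A small concrete illustration of that last point (mine, not Tegmark's): any number we actually store or measure carries only a finite amount of information, however many decimal places the idealised real number is supposed to have.

    import struct
    import sys

    # A double-precision float, the workhorse of physics codes, has a 53-bit
    # mantissa: about 15-17 significant decimal digits, and exactly 64 bits of
    # information in total. Nothing like a real number's infinitely many decimals.

    print(sys.float_info.mant_dig)   # 53 (bits of mantissa)
    print(sys.float_info.dig)        # 15 (decimal digits guaranteed representable)

    one_third = 1 / 3
    print(f"{one_third:.40f}")       # digits beyond roughly the 17th are rounding artefacts

    # Total information content of the stored value, whatever it "represents":
    print(len(struct.pack("d", one_third)) * 8)   # -> 64 bits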

    Not only do we lack evidence for the infinite, but we don't actually need the infinite to do physics: our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow's weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can too – in a way that's more deep and elegant than the hacks we use for our computer simulations. Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it – the true laws of physics. To start this search in earnest, we need to question infinity. I'm betting that we also need to let go of it.
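
    On the "finite computer resources" point, here is a minimal sketch (my own toy example, not from the essay) of how continuum physics is simulated in practice: the one-dimensional heat equation, with continuous space and time replaced by a finite grid of numbers, still tracks the exact continuum solution closely.

    import math

    # Toy explicit finite-difference solver for the 1D heat equation u_t = alpha * u_xx
    # on [0, 1] with u = 0 at both ends. The continuum is replaced by nx grid points
    # and nt finite time steps.

    alpha = 0.01                 # diffusivity
    nx, nt = 101, 2000           # finite resolution in space and time
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx * dx / alpha   # step size chosen for stability (coefficient <= 0.5)

    # Initial condition: a sine bump, whose exact continuum solution decays
    # as exp(-alpha * pi**2 * t).
    u = [math.sin(math.pi * i * dx) for i in range(nx)]

    for _ in range(nt):
        u_new = u[:]
        for i in range(1, nx - 1):
            u_new[i] = u[i] + alpha * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = u_new

    t_final = nt * dt
    exact_mid = math.exp(-alpha * math.pi ** 2 * t_final)   # exact value at x = 0.5
    print(f"numerical midpoint value: {u[nx // 2]:.6f}")
    print(f"exact continuum solution: {exact_mid:.6f}")
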
    I found the mouse and the LNT arguments especially relevant. What do you think?

    And the part about infinity - well, it certainly rings true; mathematical models are, after all, only an imperfect attempt at describing the world. But I'm not quite seeing yet how exactly to replace such algebra.
    When the stars threw down their spears
    And watered heaven with their tears:
    Did he smile his work to see?
    Did he who made the lamb make thee?

  2. #2
    Huh. I found the mouse one particularly unpersuasive. As it pointed out "we don't have a better alternative". For it to be worth retiring, surely something better should at least be on the horizon if not already here. I liked the addiction one, though I think it doesn't go far enough. IMO the way the mental health field approaches pathology in general might need to be reexamined.
    Last night as I lay in bed, looking up at the stars, I thought, “Where the hell is my ceiling?"

  3. #3
    Let sleeping tigers lie Khendraja'aro's Avatar
    Join Date
    Jan 2010
    Location
    In the forests of the night
    Posts
    6,239
    I'm not sure that the word "alternative" is such a good term. I mean, she points out that we cured several types of cancer. In mice.

    Which means that using mice as an "alternative" is kind of an all-or-nothing gamble - we'll get quite a lot of false positives. And we'll also get a lot of false negatives. A research group in Göttingen, by the way, found something that might prevent Alzheimer's. In mice.

    In essence, mice seem to be quite a crapshoot. Beyond determining rough toxicology, we might sometimes be better off just rolling dice.
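
    A quick back-of-envelope sketch of that false-positive problem (the numbers are invented, purely to show the shape of the argument): if most candidate drugs don't actually work in humans, even a moderately leaky animal model floods you with false leads.

    # Invented illustrative numbers, not real figures for mouse models.
    # Suppose only 5% of candidate drugs would genuinely work in humans, the mouse
    # model "passes" 70% of the good drugs (sensitivity), and it also passes 30%
    # of the useless ones (false-positive rate).

    n_candidates = 10_000
    p_truly_works = 0.05
    sensitivity = 0.70          # chance a genuinely useful drug looks good in mice
    false_positive_rate = 0.30  # chance a useless drug also looks good in mice

    truly_works = n_candidates * p_truly_works
    useless = n_candidates - truly_works

    true_positives = truly_works * sensitivity
    false_positives = useless * false_positive_rate
    passed = true_positives + false_positives

    print(f"drugs that pass the mouse screen: {passed:.0f}")
    print(f"fraction of those that actually work in humans: {true_positives / passed:.1%}")
    # -> about 11% under these made-up assumptions, i.e. most "successes" in mice
    #    would still be false leads.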

    Not to mention that if mice were to take over the world, they'd be quite a healthy bunch.
    When the stars threw down their spears
    And watered heaven with their tears:
    Did he smile his work to see?
    Did he who made the lamb make thee?

  4. #4
    Well, at first blush the title itself did a good job of challenging my pre-conceived notions. I was prepared to discuss ideas for retirement.

    I'd "retire" the idea that religious beliefs should influence laws or public policy in any way. Would that fit under True or False, Humaniqueness, Arrogance, or all of the above? I have an affinity to Dawkins' Essentialism, even though I don't agree with its conclusions entirely.

  5. #5
    Dawkins seems to entirely miss the point of having thresholds or criteria for anything. It's not that something magical happens once a threshold is passed; it's that using that threshold is useful for one reason or another.
    Hope is the denial of reality

  6. #6
    Let sleeping tigers lie Khendraja'aro's Avatar
    Join Date
    Jan 2010
    Location
    In the forests of the night
    Posts
    6,239
    Doesn't mean that the properties such thresholds are used for are always sensible.
    When the stars threw down their spears
    And watered heaven with their tears:
    Did he smile his work to see?
    Did he who made the lamb make thee?

  7. #7
    No one claims they are, but he's going to the opposite extreme, including with his examples.
    Hope is the denial of reality

  8. #8
    Quote Originally Posted by Khendraja'aro View Post
    I'm not sure that the word "alternative" is such a good term. I mean, she points out that we cured several types of cancer. In mice.

    Which means that using mice as an "alternative" is kind of an all-or-nothing gamble - we'll get quite a lot of false positives. And we'll also get a lot of false negatives. A research group in Göttingen, by the way, found something that might prevent Alzheimer's. In mice.

    In essence, mice seem to be quite a crapshoot. Beyond determining rough toxicology, we might sometimes be better off just rolling dice.

    Not to mention that if mice were to take over the world, they'd be quite a healthy bunch.
    I know of not a single biomedical researcher who believes that mouse studies are an ideal approach to test hypotheses in vivo. Everyone is well aware that mouse studies are inferior to, say, human studies, and for obvious, well-documented reasons. But they are far superior to in vitro studies in almost every way, and are a useful stepping stone and 'sanity check' before proceeding to studies on more complex animals at significant effort and cost (let alone moving straight to humans with all of the attendant risks).

    Take my own research for example. I'm designing a new kind of material that can regenerate various orthopedic tissues. We do lots of in vitro characterization - first on its chemical/physical properties, eventually on its interactions with various cell cultures (both human and animal, as well as primary cells vs. cell lines). This gives us a wealth of information that we can use to modify the material and improve it for in vivo use. But all of that cell data is useless without seeing the more complex interplay inside a living organism, so we have to move it in vivo. Now, you can imagine that the orthopedic requirements for cartilage or bone in the joint or spine of a rodent are very different from something on the scale of a human (let alone walking upright vs. a quadruped). Furthermore, we're well aware that the 'critical size' of a defect in rodents or other small animals is very different than in humans, and the natural healing response in certain animals can be very different than in humans. Yet we continue to use animal studies - I'll be working extensively with mouse, rat, and rabbit studies before we even consider moving to larger animal models.

    Why? There are many obvious answers. First of all, cost and time: regeneration, breeding, etc. is all faster and cheaper in a small animal than in a larger one, allowing us to screen many more combinations and conditions before moving to a larger animal or human. Secondly, the availability of transgenic strains. Mice and rats have lots of genetically modified strains that are useful for studying various disease models that simply aren't available for larger animals (for a whole host of reasons). This gives mechanistic insight you wouldn't get otherwise. Third, ethics: testing untried therapies on hundreds of mice is a lot more ethical than doing it on humans, or even on relatively higher orders of animals (e.g. non-human primates). You obviously can't torture or waste the mice, but it is an important step to carry out before jumping wholesale into human/NHP studies, where the consequences of failure are deemed more significant.

    None of this means that the data from small animal studies should be accepted as representative of the reality in humans. There are well-known deficiencies in various disease models in rodents, and extrapolating results to humans is essentially never done; there are no scientists naive enough to think that mouse data can be translated into clinical successes without a lot more effort. But it does mean that small animal studies are frequently a useful and necessary step in between in vitro testing and large animal/human trials.

  9. #9
    I like the radiation one, mainly because I heart nuclear power.

  10. #10
    Quote Originally Posted by Khendraja'aro View Post
    And the part about infinity - well, it certainly rings true; mathematical models are, after all, only an imperfect attempt at describing the world. But I'm not quite seeing yet how exactly to replace such algebra.
    Make more people familiar with Cantor's ideas on infinity? Frankly I'm tired of the idea being peddled that because something is infinite, it must contain all possibilities, especially in regard to the universe at large. For instance, the set of all numbers between 1 and 2 is infinitely large, but good luck finding 5 in it (and I don't mean the 5 in 1.5 or 1.25, etc).
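
    Cantor's diagonal construction is easy to demo, for that matter. Here's a tiny sketch (a finite truncation of the argument, which of course really applies to the full infinite list): whatever listing of infinite binary sequences you pick, flipping the n-th digit of the n-th entry builds a sequence that differs from every listed one, so no listing can be complete.

    # Finite check of Cantor's diagonal argument. The example listing is arbitrary;
    # the construction works for any listing whatsoever.

    def example_listing(i):
        """The i-th listed sequence, given as a function of its digit position j."""
        return lambda j: ((i + 1) * (j + 1)) % 2

    def diagonal_digit(listing, i):
        """Digit i of the 'escapee': the flipped i-th digit of the i-th sequence."""
        return 1 - listing(i)(i)

    print([diagonal_digit(example_listing, i) for i in range(10)])   # [0, 1, 0, 1, ...]
    for i in range(10):
        # the escapee differs from the i-th listed sequence at digit i,
        # so it cannot equal any sequence in the listing
        assert diagonal_digit(example_listing, i) != example_listing(i)(i)

    Same moral as the interval example: an infinite collection can still fail to contain things. Here an infinite list of sequences still misses at least one sequence.
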
    . . .
