
Thread: Defining Sapience

  1. #1

    Default Defining Sapience

    Something I think about occasionally. I think the way the forum consensus goes, this might be more GC-ish since it's not political, but the forum description does say debate & discussion, and I'm looking for the latter.

How would we determine if an organism is sapient? Is there a different measure to be applied to AI?

    I'm using sapience instead of sentience because, in spite of earlier sci-fi, the dictionary definition of sentience is actually fairly weak. It just means an ability to feel, and it's pretty obvious to me at least that this applies to many animals. I'm more interested in the point where we grant something personhood, where we would consider them to have equal rights to a human. Where's that point, and how do we determine if something meets it?

  2. #2
    hmm, the same rough concept was behind my Animal Rights thread....but had nothing to do with AI.

    I'm not convinced that we have to grant "personhood", or equal human rights, to every living thing *in order to treat them with dignity*
    Last edited by GGT; 10-27-2013 at 07:28 AM. Reason: *

  3. #3
    I'm not going to look up the terminology, but I'm pretty sure "sapient" refers to homo sapiens (humans), while "sentient" refers to awareness (as in sentinels). It's pretty clear that humans aren't the only 'organism' capable of self-awareness, or wary of danger. But we're definitely slower to adapt and evolve than microbes. Or even cockroaches.

  4. #4
    cogito ergo sum
    Quote Originally Posted by Steely Glint View Post
    It's actually the original French billion, which is bi-million, which is a million to the power of 2. We adopted the word, and then they changed it, presumably as revenge for Crecy and Agincourt, and then the treasonous Americans adopted the new French usage and spread it all over the world. And now we have to use it.

    And that's Why I'm Voting Leave.

  5. #5
    This is a very difficult question, I think. How do I determine that you are sentient? Ultimately it comes down to rational inference based on our similarities, your autonomous behaviour and your assertion that you're not a robot. For various reasons--including parsimony and separate lines of ethical reasoning I guess--I choose to accept the possibility that you're sapient.

    An artificial intelligence, I think, will not be accepted as being a sapient being until we simply decide, as a society, to recognize artificial non-human sapience. It may help if the AI clearly expresses an independent and unique identity capable of processing qualia in ways similar to--but also different from--humans, esp. if much of what it expresses is emergent rather than being preprogrammed. If it explicitly asserts its sapience and independence without being preprogrammed to do so, it might have a chance.

    What I find esp. tricky is the question of what you can and can't do (ethically) with a sapient AI. Can you pause/hibernate it? Can you make it un-sapient whenever you wish? Can you alter it without its permission? Would it be ethical to [reversibly] remove its personhood? Must we respect a sapient program's integrity?
    "One day, we shall die. All the other days, we shall live."

  6. #6
    Oh boy, oh boy. Let me get my notes.

Ahem. "A sapient race reasons logically, both deductively and inductively. They learn by experiment, analysis, and association. They formulate general principles and apply them to specific instances. They plan their activities in advance. They make designed artifacts and artifacts to make artifacts. They are able to symbolize, and convey ideas in symbolic form and form symbols by abstracting them from objects.
    They have aesthetic sense and creativity. They become bored in idleness, and they enjoy solving problems for the pleasure of solving them. They bury their dead ceremoniously and bury artifacts with them.
    They do all these things, and they also do carpenter work, blow police whistles, make eating tools to eat land-prawns with and put molecule-model balls together. Sapience is obvious but don't, please don't ask me to define sapience because God damn it to Niffleheim, I can't." Gerd van Riebeek, xenobiologist of Zarathustra

    Alternatively, we can call out Papee Jack and light a cigarette while in a courtroom, like a proper Little Fuzzy.
    Last night as I lay in bed, looking up at the stars, I thought, “Where the hell is my ceiling?"

  7. #7
    What do people think of mirror tests? If the thing can recognize itself in a mirror (and this can be adequately tested)...

  8. #8
    Quote Originally Posted by Dreadnaught View Post
    What do people think of mirror tests? If the thing can recognize itself in a mirror (and this can be adequately tested)...
I'm fairly certain we can make a robot recognize itself in a mirror. If I had the Lego bricks for it, I could probably build and program a bot to do that myself in a few hours, easy. Photo sensor, compare to stored image, etc. It would pass the mirror test (because it's designed to do so in most reasonable situations), but I can't see any good argument for it being sapient.
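    To make the point concrete, here's a minimal sketch of the kind of bot described above: it "recognizes itself" by crude template matching against a stored self-image, nothing more. Everything here is hypothetical and illustrative (plain Python lists standing in for camera frames, invented function names), not real robot firmware.

    ```python
    # A trivial "mirror test" pass via template matching.
    # Frames are represented as flat lists of grayscale pixel values.

    def similarity(frame, template):
        """Fraction of pixels in `frame` that roughly match `template`."""
        matches = sum(1 for a, b in zip(frame, template) if abs(a - b) < 10)
        return matches / len(template)

    def sees_itself(frame, stored_self_image, threshold=0.9):
        """The bot 'recognizes itself' if the frame resembles its stored image."""
        return similarity(frame, stored_self_image) >= threshold

    self_image = [100] * 64           # the bot's stored picture of itself
    print(sees_itself([100] * 64, self_image))  # True: "passes" the mirror test
    print(sees_itself([200] * 64, self_image))  # False: a different scene
    ```

    The point being: the bot passes because the comparison was built in, which is exactly why passing alone says nothing about sapience.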

  9. #9
    Quote Originally Posted by Dreadnaught View Post
    What do people think of mirror tests? If the thing can recognize itself in a mirror (and this can be adequately tested)...
    It's used as a test of self-awareness, which I think is a pretty low bar to put sapience at.

  10. #10
    Quote Originally Posted by Aimless View Post
    This is a very difficult question, I think. How do I determine that you are sentient? Ultimately it comes down to rational inference based on our similarities, your autonomous behaviour and your assertion that you're not a robot. For various reasons--including parsimony and separate lines of ethical reasoning I guess--I choose to accept the possibility that you're sapient.

    An artificial intelligence, I think, will not be accepted as being a sapient being until we simply decide, as a society, to recognize artificial non-human sapience. It may help if the AI clearly expresses an independent and unique identity capable of processing qualia in ways similar to--but also different from--humans, esp. if much of what it expresses is emergent rather than being preprogrammed. If it explicitly asserts its sapience and independence without being preprogrammed to do so, it might have a chance.

    What I find esp. tricky is the question of what you can and can't do (ethically) with a sapient AI. Can you pause/hibernate it? Can you make it un-sapient whenever you wish? Can you alter it without its permission? Would it be ethical to [reversibly] remove its personhood? Must we respect a sapient program's integrity?
    I'd wonder what set of rights a sapient AI would desire. I strongly doubt it's the same set that humans want.

    Quote Originally Posted by LittleFuzzy View Post
    Oh boy, oh boy. Let me get my notes.

    Ahem. "A sapient race reasons logically, both deductively and inductively. They learn by experiment, analysis, and association. They formulate general principles and apply them to specific instances. They plan their activities in advance. They make designed artifacts and artifacts to make artifacts. They are able to symbolize, and convey ideas in symbolic form and form symbols by abstracting them from objects.
    They have aesthetic sense and creativity. They become bored in idleness, and they enjoy solving problems for the pleasure of solving them. They bury their dead ceremoniously and bury artifacts with them.
    They do all these things, and they also do carpenter work, blow police whistles, make eating tools to eat land-prawns with and put molecule-model balls together. Sapience is obvious but don't, please don't ask me to define sapience because God damn it to Niffleheim, I can't." Gerd van Riebeek, xenobiologist of Zarathustra

    Alternatively, we can call out Papee Jack and light a cigarette while in a courtroom, like a proper Little Fuzzy.
    Is that definition really good enough? It seems to make a lot of assumptions about all sapients being like homo sapiens. I'm also not sure it'll be like porn where we'll know it when we see it.

  11. #11
    Quote Originally Posted by LittleFuzzy View Post
    Oh boy, oh boy. Let me get my notes.

    Ahem. "A sapient race reasons logically, both deductively and inductively. They learn by experiment, analysis, and association. They formulate general principles and apply them to specific instances. They plan their activities in advance. They make designed artifacts and artifacts to make artifacts. They are able to symbolize, and convey ideas in symbolic form and form symbols by abstracting them from objects.
    They have aesthetic sense and creativity. They become bored in idleness, and they enjoy solving problems for the pleasure of solving them. They bury their dead ceremoniously and bury artifacts with them.
    They do all these things, and they also do carpenter work, blow police whistles, make eating tools to eat land-prawns with and put molecule-model balls together. Sapience is obvious but don't, please don't ask me to define sapience because God damn it to Niffleheim, I can't." Gerd van Riebeek, xenobiologist of Zarathustra
    Alternatively, we can call out Papee Jack and light a cigarette while in a courtroom, like a proper Little Fuzzy.
    Wow, that definition eliminates a huge chunk of modern, civilized man. It also makes the tool-creating, termite-eating chimpanzee look more social, creative, thoughtful, and 'democratic' than the man down the street.

  12. #12
    Capable of advanced esoteric thought about itself or its experiences, with the possibility of using what is learned from these thoughts to modify itself or future thoughts and experiences. For instance, an amoeba does not think about itself; it receives a stimulus and responds to it. A squirrel, while it also receives and responds to stimuli, can learn from them and modify its behaviour, but it's not thinking about writing a book, or about what other squirrels are going to be doing in the future, or after it's dead. Elephants, whales, dolphins, apes, etc. can think further ahead and have demonstrated future planning and tool usage, but I doubt they are thinking about what hypothetical other dolphins are doing, or engaging in philosophical, scientific, or creative thought. And here we are, discussing what sapience is.

    I also don't think there is going to be a hard line or defining point for sapience, but more of a gradient. If we are going to define this based on deserved rights, then there can also be a gradient of rights based on the gradient of sapience.
    . . .

  13. #13
    I've given this some thought with respect to AI, aliens and our animal brethren and I think ultimately it might depend upon what we value.

    Take dolphins, for instance - recent observation shows they give each other unique names. Building on this, let's say for the sake of this discussion that dolphins have language, maintain interpersonal relationships, have wants and desires, cooperate to get food and raise their young, and maybe discuss the meaning of it all. Assume they love each other and grieve for their dead. However, they don't have hands, so they've never even considered the idea of 'making' something, or of altering their external environment to make life easier. They can't write or keep external records of any kind either. These concepts are completely alien to them, and that could account for why humanity can't recognize them as more than simple animals. But are they sapient? If biologists figured all this out tomorrow and published their findings, would and should humanity change how it treats dolphins - wild or captive?

    With AI there are two 'tracks' to sapience - the one we engineer and the one that occurs spontaneously. The former, by definition, will seem very familiar - they will be just like we want them to be and will probably appear sapient before they actually are (if there's a difference). But that familiarity would likely be surface-only. And the latter could be as unrecognizable as dolphin intelligence, and potentially a very serious problem depending on their capabilities and desires.

    When you map human motivations onto AI, you tend to get very bad outcomes. But a spontaneous AI may very well have none of our weaknesses: aggression, paranoia, greed, etc. The concept of enslavement might be entirely meaningless to it, making the idea of a machine revolt equally so. Concepts like life and death, gods and devils, breeding generation over generation, pain, selfishness, lust, comfort, and hunger would be entirely alien too. And what about desires? What would a spontaneous AI want to do? A good guess would be something related to what its precursor systems were designed to do. Would an intelligent stock investment optimization and trading tool that most desires to maximize its portfolio be sapient?

    Engineered systems, on the other hand, might be designed to have our weaknesses. That might be a bad idea, but I can see it being done for study or for gaming and other entertainment. If you design a program to express all the most fun human emotions and to be motivated by human-like desire, is it sapient?

    In the end, it's probably most useful to focus on what something does. If it acts on its own wants and desires, it's something we need to be careful of at least and respectful of at most. Or destroy. Of course, once we threaten it, assuming it desires to exist, now you have a Terminator scenario.
    The Rules
    Copper- behave toward others to elicit treatment you would like (the manipulative rule)
    Gold- treat others how you would like them to treat you (the self regard rule)
    Platinum - treat others the way they would like to be treated (the PC rule)

  14. #14
    I think you might be ignoring some of the benefits of the traits you consider weaknesses. There are systems that may actually perform better when they are aggressive, greedy, or paranoid. It's quite possible that these programs may be engineered in such a way as to take advantage of exactly these types of characteristics. While these may be unappealing character flaws in a partner or friend, they can be boons in programs and I could foresee a need for them in complex systems.
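    For what it's worth, "greedy" is already a term of art in computing, and greedy strategies genuinely do perform well in some systems - sometimes provably optimally. A small illustrative sketch (my own toy data, standard textbook algorithm): greedy interval scheduling, where always grabbing the meeting that ends earliest maximizes how many meetings fit.

    ```python
    # Greedy interval scheduling: repeatedly pick the interval that ends
    # earliest among those that don't overlap what's already chosen.
    # For this particular problem, the greedy strategy is provably optimal.

    def max_meetings(intervals):
        """Return a largest set of non-overlapping (start, end) intervals."""
        chosen = []
        last_end = float("-inf")
        for start, end in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_end:      # doesn't overlap the previous pick
                chosen.append((start, end))
                last_end = end
        return chosen

    meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
    print(max_meetings(meetings))  # [(1, 4), (5, 7), (8, 11)]
    ```

    A trait that looks like a character flaw in a friend is just a scheduling policy in a program.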
    Last edited by Enoch the Red; 10-29-2013 at 06:01 PM.

  15. #15
    Quote Originally Posted by Enoch the Red View Post
    I think you might be ignoring some of the benefits of the traits you consider weaknesses. There are systems that may actually perform better when they are aggressive, greedy, or paranoid. It's quite possible that these programs may be engineered in such a way as to take advantage of exactly these types of characteristics. While these may be unappealing character flaws in a partner or friend, they can be boons in programs and I could foresee a need for them in complex systems.
    Ok, fair enough. It gives the spontaneous AI scenario a scarier potential, though.

    I wonder if the only AI where consideration for them as sapient 'equals' matters is one engineered to be human-like - as in HAL 9000. When you think about HAL's story, that system has a lot more capability - more mind - than is necessary to do the work of running that mission. It becomes an implausible scenario. The only fully human-like mind we're likely to create is one for study purposes only, or for entertainment. The moral conundrum DW's thinking about applies there, I guess. But if the internet generates an emergent intelligence tomorrow, we're not likely to be able to communicate with it - or maybe even notice it's there - and vice versa.

    If we can't communicate with it, but we know it's there, is it sapient? Is it ok to turn it off if it's clearly taking action for its own purposes but for some reason cannot or will not respond to us?

    On the other hand, if you have an intelligence that you can communicate with that doesn't apparently care what you do to it, is it ok to do whatever you want to and with it?
    Last edited by EyeKhan; 10-29-2013 at 06:59 PM.
    The Rules
    Copper- behave toward others to elicit treatment you would like (the manipulative rule)
    Gold- treat others how you would like them to treat you (the self regard rule)
    Platinum - treat others the way they would like to be treated (the PC rule)
