Computer pioneer Alan Turing’s remarks in 1950 on the question, “Can machines think?” were misquoted, misinterpreted and morphed into the so-called “Turing Test”. The modern version says if you can’t tell the difference between communicating with a machine and a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like “thinking” and “intelligent” to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say alrighty, let’s put this new software to the Turing Test - by Grabthar’s Hammer, it passed! We’ve achieved Artificial Intelligence!

  • deranger@sh.itjust.works

    I think the Chinese room argument published in 1980 gives a pretty convincing reason why the Turing test doesn’t demonstrate intelligence.

    The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or “thinking”.

    Searle asserts that there is no essential difference between the roles of the computer and the human in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, the human would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
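    To make the “following instructions without understanding” point concrete, here is a deliberately toy sketch in Python (entirely made up for illustration; nothing like a program that could actually hold a conversation): replies are produced by pure symbol lookup, and meaning is never represented anywhere.

    ```python
    # Toy "Chinese room": replies come from a lookup table of symbols.
    # The rule book is invented for illustration; a real room would need
    # vastly more rules, but the point stands: the process manipulates
    # symbols it never interprets.
    RULE_BOOK = {
        "你好": "你好！",
        "你会说中文吗？": "会，说得很好。",
    }

    def room_reply(note_under_door: str) -> str:
        """Return the scripted reply for a note, or a stock fallback."""
        return RULE_BOOK.get(note_under_door, "请再说一遍。")

    if __name__ == "__main__":
        print(room_reply("你好"))             # the room "converses"...
        print(room_reply("你会说中文吗？"))    # ...without understanding a word
    ```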

    • 8baanknexer@lemmy.world

      I am sceptical of this thought experiment as it seems to imply that what goes on within the human brain is not computable. For reference: every single physical effect that we have thus far discovered can be computed/simulated on a Turing machine.

      The argument itself is also riddled with vagueness and handwaving: it gives no definition of understanding, yet presumes it is something with a definite location, and it may well be that working through the program inevitably produces an understanding of Chinese by the time even the first word is returned. Remember: executing these instructions could take billions of years for the presumably immortal human in the room, and we expect the human to be so thorough that they execute each of the trillions of instructions without error.

      Indeed, the Turing test is insufficient to test for intelligence, but the statement that the Chinese room argument tries to support is much, much stronger than that. It essentially argues that computers can’t be intelligent at all.

    • Blue_Morpho@lemmy.world

      Searle argued from his personal truth that a mystic soul is responsible for sapience.

      His argument against a computer system having consciousness is this:

      “In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing ‘system’, and does not require anything resembling the actual biology of the brain.”

      - Searle

      https://en.m.wikipedia.org/wiki/Chinese_room

    • eggymachus@sh.itjust.works

      That just shows a fundamental misunderstanding of levels. Neither the computer nor the human understands Chinese. The program each of them is running, however, does.

  • Hemingways_Shotgun@lemmy.ca

    My impression as an atheist is that there is no special sauce that makes human intelligence impossible to achieve. It will happen eventually.

    Our brains are computers made of meat. Nothing more. Our thoughts, our dreams, our consciousness itself are quite literally just chemicals, hormones and synapses instead of circuits, binary code and wiring. There is no soul that would prevent true life from arising once the computing becomes powerful enough for it.

    • surph_ninja@lemmy.world

      Exactly this. We’re always moving the goalposts to maintain the belief of human exceptionalism. We also used to say that tool use and construction were what separated humans from the animals, until examples were found of animals using or making tools, and then we moved the goalposts further to exclude them.

      This pervasive belief that humans are beyond nature or singularly extraordinary in their intelligence and consciousness is rooted in arrogance and bad science, and it hinders our understanding of science and consciousness and our place in the universe.

      If an intelligence is able to feign consciousness so well that we can’t distinguish it from “real” sentience, then it’s close enough that we should treat it as such. Those who insist on defending human exceptionalism are simply invested in maintaining human superiority, and in exploiting animals and machines beyond what we would otherwise accept as moral if we respected other intelligences as equals deserving of their own rights.

      • Phunter@lemm.ee

        I’d also like to popularize the opinion that critical thinking, sentience, and intelligence don’t necessarily make a species “better”. High intelligence is demonstrably helpful for world domination, but this is not necessarily an entirely objective improvement.

        You think humans are the greatest? Have you met orangutans?

        • surph_ninja@lemmy.world

          Personally, I think higher intelligence is better. It only seems like it’s gone badly because we haven’t finished our progression yet, and we’re still a little too much primate. If we can keep from destroying the planet, we may get there.

          I also don’t think we’ll be the last species to get to this point. We were just the first.

          Really seems silly to me to focus so much on the distinctions between species. We all came from the same primordial soup of RNA on this planet (probably), and we’re all essentially just accumulated deviations & variations on those original building blocks. I believe in my bones that we’re still in the early stages of development, and this is no closer to the end result than an egg or a cocoon.

  • br3d@lemmy.world

    I can’t remember who said it, but somebody pointed out that the version of the Turing Test as we all remember it is ridiculous: it’s basically saying that the test of intelligence is “Can a chatbot fool one idiot?”

    • kromem@lemmy.world

      More “can fool the average idiot.”

      ‘Passing’ isn’t fooling a single participant, but the majority of them beyond statistical chance.
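      One way to read “beyond statistical chance” (toy numbers of my own, not anything Turing specified) is a one-sided binomial check: did the judges label the machine “human” more often than coin-flip guessing would explain?

      ```python
      # Rough sketch: were more judges fooled than chance-level guessing predicts?
      # The counts below are hypothetical, purely for illustration.
      from math import comb

      def p_value_at_least(k: int, n: int, p: float = 0.5) -> float:
          """One-sided binomial tail P(X >= k) under chance-level guessing."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      judges, fooled = 30, 21   # hypothetical interrogation results
      print(f"{fooled}/{judges} judges fooled, p = {p_value_at_least(fooled, judges):.3f}")
      # a small p-value suggests the machine passed "beyond statistical chance"
      ```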

  • shalafi@lemmy.world

    Y’all might enjoy reading Blindsight. Really digs into questions of sapience, intelligence, etc. Is its evolutionary cost worth it? I’ve read it 15+ times. Because I’m a psycho.

    “You think we’re nothing but a Chinese Room,” Rorschach sneered. “Your mistake, Theseus.”

    And suddenly Rorschach snapped into view—no refractory composites, no profiles or simulations in false color. There it was at last, naked even to Human eyes.

    Imagine a crown of thorns, twisted, dark and unreflective, grown too thickly tangled to ever rest on any human head. Put it in orbit around a failed star whose own reflected half-light does little more than throw its satellites into silhouette. Occasional bloody highlights glinted like dim embers from its twists and crannies; they only emphasized the darkness everywhere else.

    Imagine an artefact that embodies the very notion of torture, something so wrenched and disfigured that even across uncounted lightyears and unimaginable differences in biology and outlook, you can’t help but feel that somehow, the structure itself is in pain.

    Now make it the size of a city.

  • gandalf_der_12te@discuss.tchncs.de

    oh come on

    people are in denial that their way of life - getting paid for intellectual output - is coming to an end. it’s not the case that AI just produces slop. surely it does but so do a lot of humans. you know all the memes about human workers having imposter syndrome - feeling as if they don’t even really know what they’re doing? AI only has to produce higher quality output than them. and it definitely can.

    the reason why people shit on AI so hard is because they’re afraid - afraid that AI will “out-compete” them. in that sense, you could also call it “jealous”, the way a woman fears she’s being replaced by another woman.

    people need to respect themselves and others enough to agree to survive - and thrive, even - in the absence of a productive output. in other words, only if you allow your fellow humans a living income without work will you truly be in a position to live comfortably in the future.

    • AngryRobot@lemmy.world

      My dude, our billionaire overlords are pushing AI to save them money. They won’t be willing to pay for something like UBI. They spent a fuckton of money in this last election to hand the presidency to someone who only cares about billionaires and their profits.

      • daddy32@lemmy.world

        You are both right. But the parent is exhibiting too much techno-optimism when it should be focusing on capitalism-pessimism instead.

    • tb_@lemmy.world

      I don’t entirely disagree with the comic at the end, but given the current systems in place I doubt the robots will be used to support the masses; more likely they’ll be used to enrich the few.

    • UnderpantsWeevil@lemmy.world

      No idea why this is getting downvoted. You can argue over the exact practicality of the current iteration of AI, but this is a proven good take on automation, generally speaking.

      • kungen@feddit.nu

        Because they’re saying that people are afraid of AI taking their job, as if the majority of people enjoy their jobs? People don’t want to be without an income. As if our benevolent oligarchs will suddenly give us even the smallest chance of getting some kind of basic income?

        • UnderpantsWeevil@lemmy.world

          “as if the majority of people enjoy their jobs?”

          The enshittification of employment isn’t necessary. And having a role in how your society functions is necessary for any kind of democratic control of the economy. You can’t just be a consumer, on the outside looking in.

          Automating away drudgery is generally good for an economy. Automating away control is what sucks.

          “As if our benevolent oligarchs will suddenly give us even the smallest chance of getting some kind of basic income?”

          The structures of basic income are already in place. We have social security. We have pensions. We have annuities. The struggle is in if and how we continue to fund them.

          Since Reagan, the answer to funding basic income schemes has been to displace the cost from higher income earners to younger workers. Now that we’ve drained that well, there’s definitely a push to simply dissolve these systems entirely.

          But it’s hardly a given, any more than the Reagan Era was some historical inevitability. Americans can change course if enough of them can unify around an opposition.