Experimenting with unproven technology to determine whether a child should be granted protections they desperately need and are legally entitled to is cruel and unconscionable.

  • Basic Glitch@sh.itjust.works · 3 points · edited · 8 hours ago

    Companies that tested their technology in a handful of supermarkets, pubs, and websites set it to predict whether a person looks under 25, not 18, allowing a wide error margin for algorithms that struggle to distinguish a 17-year-old from a 19-year-old.

    AI face scans were never designed for children seeking asylum, and risk producing disastrous, life-changing errors. Algorithms identify patterns in the distance between nostrils and the texture of skin; they cannot account for children who have aged prematurely from trauma and violence. They cannot grasp how malnutrition, dehydration, sleep deprivation, and exposure to salt water during a dangerous sea crossing might profoundly alter a child’s face.

    Goddamn, this is horrible. Imagine leaving shitty AI to determine the fate of this girl :

    ‘Psychologically broken,’ 8-year-old Sama loses her hair

  • prole@lemmy.blahaj.zone · 44 points · 3 days ago

    Fucking why?? Why is everyone so intent on shoehorning this half-baked garbage tech into literally everything?

    • Basic Glitch@sh.itjust.works · 4 points · edited · 9 hours ago

      Bc they’ve already sunk too much money into it, thinking that if they fed it enough data it would suddenly develop superintelligence, and nobody wants to admit it is likely decades away from being what they advertised (if it ever reaches that point at all).

      Their solution is to just keep throwing more money and data at it until they eventually make it work, or they kill us all trying. Which do you think will happen first?

      • rottingleaf@lemmy.world (banned) · 4 points · 2 days ago

        Exactly, it’s a tool to whitewash decisions. A machine that only appears to make the decision, while doing nothing of the sort. A way to shake off responsibility.

        And the fact that it won’t ever work right is its best trait for this purpose. They’ll be able to blame every transgression they’re caught in on an error in the system, and get away with the rest.

        At least unless it’s legally equated to using Tarot cards to make decisions affecting people’s lives. That should further disqualify the offender as a completely inadequate human being, not absolve them of responsibility.

    • floofloof@lemmy.ca · 19 points · edited · 3 days ago

      Keir Starmer is trying to out-fascist the fascists, and fascists love to experiment on children.

      Also, apparently capitalism doesn’t work unless everyone hypes whatever the techbros are selling this week.

    • db2@lemmy.world · 4 points · 2 days ago

      Not everyone. Tech bros, stock bros, and the incredibly stupid. Yes, there’s some overlap.

  • pyre@lemmy.world · 13 points · 3 days ago

    don’t buy this bullshit. i guarantee there’s no experiment, and probably no “AI” in the common sense of the word being used today. this is 100% going to be a deny-o-matic, because they’d rather say “the almighty AI determined it” than “we hate children”. this is the same thing United Healthcare did, which led to the famous (and very popular) deposition of its CEO, and also what Israel claims is a targeting system while they’re committing war crimes on top of a genocide.