• 0 Posts
  • 1.35K Comments
Joined 1 year ago
Cake day: July 15th, 2024


  • Well, in the Soviet example everything was government.

    And governments seem to be so excited by the prospects of this “AI” so it’s pretty clear that it’s still their desire most of all.

    EDIT: On the telegraph and Panama you're right (btw, it's bloody weird that where it sounds like "canal" in my language it's usually "channel" in English, but in the particular case of Panama it isn't), but they might perceive this as a similarly important direction. Remember how in the 1920s and 1930s the "colonization of space" was dreamed about: new settlements supporting new power bases, mining for resources and growing on Mars and Venus, FTL travel to Sirius, all that. There are some very cool things about the Soviet stagnation era - those pictures of the future lived on longer there than in the West, against scientific knowledge.

    So, back to the subject: the "AI" they want to reach is a thing that will let them generate knowledge and designs the way a production line makes chocolate bars. If that is built, the value of intelligent individuals will be tremendously reduced, or so they think. At least of the individuals on the "autistic" side, not the "psychopathic" side, because the latter will run things. It's literally a "quantity vs quality" evolutionary battle within human diversity; all the distractions around us, and the legal mechanisms being fuzzied and undone, also fit here. So, for the record: I think quality is on our side even if I'm distracted right now, and sheer quantity thrown at the task doesn't solve complexity of this magnitude - that's a fundamental problem.

  • Exactly, it’s a tool to whitewash decisions. A machine that only appears to do what it’s supposed to do. A way to shake off responsibility.

    And that it won’t ever work right is its best trait for this purpose. They’ll be able to blame every transgression they’re caught in on an error in the system, and get away with the rest.

    At least unless it’s legally equated to using Tarot cards to make decisions affecting lives. That should further disqualify the fiend as a completely inadequate human being, not absolve them of responsibility.

  • As with most of the things people complain about with AI, the problem isn’t the technology, it’s capitalism. This is done intentionally in search of profits.

    So in our hypothetical people’s republic of united Earth, your personal LLM assistant is not going to assist you in suicide, and isn’t even going to send a notification somewhere that you have such thoughts? One that certainly won’t affect your reliability rating, your chances of finding a decent job, your accommodations (less value means less need to keep you in order), and so on? Or, in the case of meth, just about that, which means you’re fired and at best put into rehab - and how effective will that be? Well, how effective does it have to be, when you have no leverage and a bureaucratic machine does?

    There are options other than “capitalism” and “happy”.

  • I’ve used it as an argument for libertarianism initially, because direct democracy is kinda impractical (or so I thought back then, LOL), but having grown up a bit I see the need for big militaries and, in general, for synchronous pooling of resources, which libertarian models are notoriously not very good at.

    But right now things just as impractical as direct democracy are being implemented everywhere, so times have changed.