• johnyreeferseed@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    66
    ·
    6 days ago

    Meanwhile I’m downtown in my city cleaning windows in office buildings that are 75% empty, but the heat or AC is blasting on completely empty floors and most of the lights are on.

    • stoy@lemmy.zip
      link
      fedilink
      English
      arrow-up
      1
      ·
      5 days ago

      The HVAC does serve a purpose: it reduces moisture that would otherwise ruin the building.

  • merc@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    81
    ·
    7 days ago

    Worse is Google, which insists on shoving a terrible AI-based result in your face every time you do a search, with no way to turn it off.

    I’m not telling these systems to generate images of cow-like girls, but I’m getting AI shoved in my face all the time whether I want it or not. (I don’t).

    • jjmoldy@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      ·
      7 days ago

      I am trying to understand what Google’s motivation for this even is. Surely it is not profitable to be replacing their existing, highly lucrative product with an inferior alternative that eats up way more power?

      • AA5B@lemmy.world
        link
        fedilink
        English
        arrow-up
        16
        ·
        7 days ago

        Their motivation is always ads. The AI response is longer and takes time to read, so you spend more time looking at their ads. If the answer is sufficient, you might not even click away to the search results.

        AI is a potential huge bonanza for search sites, letting them suck up the ad revenue that used to go to the search results.

      • Don_alForno@feddit.org
        link
        fedilink
        English
        arrow-up
        7
        ·
        6 days ago

        They don’t want to direct you to the thing you’re searching for anymore because that means you’re off their site quickly. Instead they want to provide themselves whatever it is you were searching for, so you will stay on their site and generate ad money. They don’t care if their results are bad, because that just means you’ll stick around longer, looking for an answer.

      • WanderingThoughts@europe.pub
        link
        fedilink
        English
        arrow-up
        4
        ·
        6 days ago

        To make search more lucrative, they enshittified it and went too far, but for a short time there were great quarterly results. Now they’re slowly losing users, so they’re trying AI to fix it up.

        It’s also a signal to the shareholders that they’re implementing the latest buzzword, plus they’re all worried AI will take off and they’ve missed that train.

      • Eyedust@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        1
        ·
        6 days ago

        You can also use alternatives like Startpage and Ecosia, which use Google results, I believe.

        • altphoto@lemmy.today
          link
          fedilink
          English
          arrow-up
          3
          ·
          6 days ago

          Both of which are probably training their own AI as middlemen, or stealing your search terms to tell Walmart what type of peanut butter you’re most likely to buy if they could lock it up on a plastic-covered shelf.

          • Eyedust@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            6 days ago

            Probably, but neither automatically opts you into AI replies. Ecosia has an AI chat, but it doesn’t run until you go to it. Startpage has no AI option that I can see.

            Ecosia has the upside of planting trees depending on user search rate. Not sure how true that is, though. I prefer startpage either way. Startpage claims to be privacy first, and I’ve never received tailored results or ads.

            That doesn’t mean they don’t sell info. We can’t know that for sure, but it sure as hell beats using Google and its automatic AI searching.

    • medgremlin@midwest.social
      link
      fedilink
      English
      arrow-up
      5
      ·
      6 days ago

      Firefox has a plugin that blocks the AI results. It works pretty well most of the time, but it occasionally has hiccups when Google updates stuff or something.

    • Getting6409@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 days ago

      Piling on to the google alternatives heap: https://searx.space/

      You can pick a public instance of SearXNG and choose which engines it queries by going to the settings cog, then Engines. A few of the public instances I’ve checked out have only Google enabled, though, so you really do need to check the settings.

      If you want to add a searxng instance as your default engine and your browser doesn’t automatically do it, the URL for that is: https://<searxng_url>/search?q=%s

      I have to add this manually for things like ironfox/firefox mobile.
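
      The %s in that template is just where the browser drops the URL-encoded query. A minimal sketch of what the substitution amounts to (searx.example.org is a placeholder, not a real instance; pick one from searx.space):

```python
from urllib.parse import quote_plus

# Placeholder instance URL for illustration only; substitute a real
# SearXNG instance from https://searx.space/
SEARXNG_URL = "https://searx.example.org"

def search_url(query: str) -> str:
    # The %s in the browser's engine template is replaced with the
    # URL-encoded query string
    return f"{SEARXNG_URL}/search?q={quote_plus(query)}"

print(search_url("power grid news"))  # https://searx.example.org/search?q=power+grid+news
```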

    • rdri@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 days ago

      There is a way to “turn it off” with some search parameters. However, there is no guarantee that the AI is not consuming resources on the backend.
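
      One commonly cited example is udm=14, which requests the plain “Web” results tab without the AI Overview. Caveat: this is an undocumented Google parameter that may change or disappear at any time. A sketch of building such a URL:

```python
from urllib.parse import urlencode

def google_web_only(query: str) -> str:
    # udm=14 is an undocumented parameter that (as of writing) selects the
    # plain "Web" results tab, which skips the AI Overview; no guarantees
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(google_web_only("example query"))  # https://www.google.com/search?q=example+query&udm=14
```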

      • merc@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        5
        ·
        7 days ago

        Also the search parameters are undocumented internal things that can change or be disabled at any time.

  • some_guy@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    94
    arrow-down
    2
    ·
    7 days ago

    Yeah, that thing that nobody wanted? Everybody has to have it. Fuck corporations and capitalism.

    • bridgeenjoyer@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      27
      arrow-down
      1
      ·
      7 days ago

      Just like screens in cars, and MASSIVE trucks. We don’t want this. Well, some dumbass Americans do, but intelligent people don’t need a 32 ton 6 wheel drive pickup to haul jr to soccer.

      • Madzielle@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        22
        ·
        7 days ago

        Massive trucks? They need those trucks for truck stuff, like this giant dilhole parking with his wife to go to Aldi today. Not even a flag on the end of that ladder, it filled a whole spot by itself.

        My couch wouldn’t fit in that bed, and every giant truck I see is sparkling shiny and looks like it hasn’t done a day of hard labor, much like the drivers.

      • Schadrach@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        3
        ·
        7 days ago

        You underestimate the number of people you wouldn’t class as intelligent. If no one wanted massive trucks, they would have disappeared off the market within a couple of years because they wouldn’t sell. They’re ridiculous, inefficient hulks that basically no one really needs but they sell, so they continue being made.

        • moakley@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          ·
          7 days ago

          It’s actually because small trucks were regulated out of the US market. Smaller vehicles have more stringent mileage standards that trucks aren’t able to meet. That forces companies to make all their trucks bigger, because bigger vehicles are held to a different standard.

          So the people who want or need a truck are pushed to buy a larger one.

          • 5too@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            6 days ago

            They can meet them. But the profit margin is slimmer than if they use the giant frame.

      • IsThisAnAI@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        11
        ·
        edit-2
        7 days ago

        Do you have any data to support that this is actually the case? I see this claim all the time but absolutely zero evidence beyond a 2015 Axios survey with no methodology or dataset. Nearly every article cites this one industry group with 3 questions that clearly aren’t mutually exclusive categories and could be picked apart by a high school student.

        I ask this question nearly every time I see this comment, and in 5 years I have not found a single person who can actually cite where this came from, or a complete explanation of how they even got to that conclusion.

        The truck owners I know, myself included, use them all the time for towing and like the added utility of having the bed as a secondary feature.

        • Schadrach@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          11
          ·
          7 days ago

          The truck owners I know, myself included, use them all the time for towing and like the added utility of having the bed as a secondary feature.

          Then you put it beside a truck from 30 years ago that’s a quarter the overall size but has the same bed capacity and towing power along with much better visibility instead of not being able to see the child you’re about to run over. And then you understand what people mean when they say massive trucks - giant ridiculously unnecessary things that are all about being a status symbol and dodging regulations rather than practicality.

          • IsThisAnAI@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            7
            ·
            7 days ago

            Absolutely 100% incorrect on towing. The top ’95 F-150 towed about 7,700 lbs, compared to 13,500 today; that was F-350 territory in ’95. It’ll also fit a family of 4 comparably to a full-size sedan, eliminating any need for a secondary vehicle. The old F-150s/1500s were miserable in the back.

            As for the safety argument, I find it disingenuous and not based on reality. Roughly 160 kids were killed in 2023 in the EU-27; it was 220 in the US. Much of that could be correlated to traffic density as well.

            Country / Region | Est. Fatalities/Year | Child Pop. (0–14) | Fatalities per Million
            --- | --- | --- | ---
            United States | ~225 | ~61 million | ~3.7
            United Kingdom | ~22 | ~11.5 million | ~1.9
            Canada | ~12 | ~6 million | ~2.0
            Australia | ~11 | ~4.8 million | ~2.3
            Germany | ~20 | ~11 million | ~1.8
            France | ~18 | ~11 million | ~1.6
            Japan | ~18 | ~15 million | ~1.2
            India | ~3,000 (est.) | ~360 million | ~8.3
            Brazil | ~450 | ~50 million | ~9.0
            European Union (EU-27) | ~140–160 | ~72 million | ~2.0–2.2

            I think we should offer incentives for manufacturers to start reducing size and weight, but the things you are saying here aren’t really based on any data, nor were they what I was asking.

            I just wish I could find one person to show me what they are referencing when they repeat that seemingly false fact.

        • Bytemeister@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          arrow-down
          1
          ·
          7 days ago

          Let me express it to you with some numbers… The US is ~3.81 million square miles in size.

          The F150 has sold 8.810 million units in the US in the last 10 years.

          There are ~ 2.3 F150s fewer than 10 years old for every square mile in this country.

          There is no way the majority of those trucks are going to job sites, hauling junk, or pulling trailers; just look around. And that’s not even all trucks. That’s just one model, from one brand, over a single 10-year period.

          These trucks are primarily sold as a vanity vehicle, and a minivan alternative, and that’s what I think when I see one.
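
          The arithmetic above is easy to check (the figures are the approximations given in the comment, not measurements):

```python
US_AREA_SQ_MI = 3.81e6     # approximate total US area, square miles
F150S_SOLD_10YR = 8.81e6   # approximate US F-150 sales over the last 10 years

# Density of sub-10-year-old F-150s per square mile
density = F150S_SOLD_10YR / US_AREA_SQ_MI
print(round(density, 1))  # 2.3
```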

            • Bytemeister@lemmy.world
              link
              fedilink
              English
              arrow-up
              6
              ·
              7 days ago

              No, trump style math would be saying that The number of Trucks Towing has gone DOWN 400% PERCENT after the EVIL AMERICA HATING COMMUNIST Dems elected a soon-to-be-illegal Migrant Gang member as Mayor of New York NYC.

      • Tlaloc_Temporal@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        7 days ago

        Do the new models even have non-“smart” fittings? I thought all the electronic chip plants closed during covid.

  • Pacattack57@lemmy.world
    link
    fedilink
    English
    arrow-up
    63
    arrow-down
    5
    ·
    7 days ago

    When I’m told there’s power issues and to conserve power I drop my AC to 60 and leave all my lights on. Only way for them to fix the grid is to break it.

  • Imgonnatrythis@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    1
    ·
    6 days ago

    Could someone please help me save some power and just post the image with the 5tits so I don’t need to have it regenerated de novo?

  • Allonzee@lemmy.world
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    9
    ·
    edit-2
    7 days ago

    We’re going away, folks, and nothing of any true value will be lost, except all the species that did live in homeostasis with the Earth, which we’re taking with us in our species’ avarice-induced murder-suicide.

    • millie@slrpnk.net
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      2
      ·
      7 days ago

      Carlin had some good material, but this is an absolutely stupid mindset. We can cause an extreme level of ecological damage. Will the planet eventually recover? Quite possibly. But that’s not a certainty, and in the meantime we’re triggering a mass extinction precisely because irresponsible humans figure there’s no way we can hurt the Earth, and that it’s self-important hubris to think we can.

      But the time we’re living through and the time we’re heading into are all the proof we should need that it’s actually hubris to assume our actions have no meaningful impact.

    • oni ᓚᘏᗢ@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      2
      ·
      7 days ago

      I’ve been trying to write this comment as concisely as possible; I’m trying my best. “We’re going away”: yes, that’s true. No matter what, we are leaving this place, but that doesn’t mean the last days of humanity have to be surrounded by pollution and trash. All I can get out of the quote in the image is that we should let big companies shit on us till we die.

      • Allonzee@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        arrow-down
        4
        ·
        edit-2
        7 days ago

        “let”

        The sociopath fascist capitalists won. They have multilayered protection from us, from propaganda dividing us and turning us on one another without end, to government capture and exclusive use of state violence to protect the capital markets, to literally having state-sanctioned murder for private profit. This isn’t a war; this is a well-oiled Orwellian occupation. The people surrendered half a century ago without terms and received the delusion that they’ll be the rich ones one day, herp derp.

        We can’t do anything about the misery they spread, and that sucks. We don’t have to add to our misery by pretending there’s hope and that we can turn any of it around. They’re going to do what they’re going to do, and short of being a lone “terrorist” who takes a pot shot at them like Luigi, there’s nothing to be done, because half of us are too cowardly and social-opiate addicted (fast food, social media, literal opiates, etc.), or too deluded and actually on the robber barons’ side out of pick-me mentality, to take it as a rallying cry.

        Only the planet itself, our shared habitat, can stop them. And it will, regardless of all the studies they kill, ecological treaties they betray, and all the propaganda they spread. The capitalists reign of terror will end when enough of their suckers are starving, not before.

  • HeyListenWatchOut@lemmy.world
    link
    fedilink
    English
    arrow-up
    27
    ·
    7 days ago

    Classic neo-liberalism - privatize the benefits, socialize the costs.

    Corporations : “We should get to gobble all power with our projects… and you should have the personal responsibility to reduce power usage even though it would - at best - only improve things at the very edges of the margins… and then we can get away with whatever we want.”

    Just like with paper straws. You get crappy straws and they hope you feel like you’re helping the environment (even though the plastic straws account for like 0.00002% of plastic waste generated) … meanwhile 80% of the actual pollution and waste being generated by like 12 corporations gets to continue.

    • gandalf_der_12te@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      9
      ·
      7 days ago

      I feel like i’ve read a very similar argument somewhere recently, but i have difficulty remembering it precisely. It went something like this:

      • If a company kills 5 people, it was either an accident, an unfortunate mishap, a necessity of war (in case of the weapons industry) or some other bullshit excuse.
      • If the people threaten to kill 5 billionaires, they’re charged with “terrorism” (see Luigi Mangione’s case).
      • TriflingToad@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        1
        ·
        edit-2
        7 days ago

        It’s from https://perchance.org/welcome and is super cool because it’s like half a soul-less AI and half a super cool tool that gets people into programming and they actually care about the Internet because they encourage people to learn how to code their own ais and have fun with it and I would absolutely have DEVOURED it when I was 13 on Tumblr (I forgot my ADHD meds today sorry if I’m rambling)

  • leftthegroup@lemmings.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    7 days ago

    Didn’t some legislation come out banning making laws against AI? (which I realize is a fucking crazy sentence in the first place- nothing besides rights should just get immunity to all potential new laws)

    So the cities aren’t even the bad guys here. The Senate is.

    • Corkyskog@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 days ago

      From what I can tell, it got stripped from the Senate version that was just approved. They barely have the votes to pass it, so they aren’t going to play volleyball to add it back.

    • MotoAsh@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      7 days ago

      It’s both. Also don’t let the house, supreme court, or the orange buffoon and his cabinet get out of culpability. Checks and balances can work … when they all aren’t bought and paid for by rich fucks.

  • jsomae@lemmy.ml
    link
    fedilink
    English
    arrow-up
    28
    arrow-down
    16
    ·
    edit-2
    7 days ago

    I know she’s exaggerating, but this post yet again underscores how nobody understands that it is training AI that is computationally expensive. Deploying an AI model draws power comparable to running a high-end video game. How can people hope to fight back against things they don’t understand?

      • MotoAsh@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        7 days ago

        Well you asked for six tits but you’re getting five. Why? Because the AI is intelligent and can count, obviously.

    • domdanial@reddthat.com
      link
      fedilink
      English
      arrow-up
      25
      ·
      7 days ago

      I mean, continued use of AI encourages the training of new models. If nobody used the image generators, they wouldn’t keep trying to make better ones.

    • FooBarrington@lemmy.world
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      1
      ·
      7 days ago

      It’s closer to running 8 high-end video games at once. Sure, from a scale perspective it’s further removed from training, but it’s still fairly expensive.

      • jsomae@lemmy.ml
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        3
        ·
        7 days ago

        It really depends. You can locally host an LLM on a typical gaming computer.

        • FooBarrington@lemmy.world
          link
          fedilink
          English
          arrow-up
          10
          arrow-down
          1
          ·
          7 days ago

          You can, but that’s not the kind of LLM the meme is talking about. It’s about the big LLMs hosted by large companies.

        • floquant@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          5
          ·
          edit-2
          7 days ago

          True, and that’s how everyone who is able should use AI, but OpenAI’s models are in the trillion parameter range. That’s 2-3 orders of magnitude more than what you can reasonably run yourself

          • jsomae@lemmy.ml
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            4
            ·
            edit-2
            7 days ago

            This is still orders of magnitude less than what it takes to run an EV, which is an eco-friendly form of carbrained transportation. Especially if you live in an area where the power source is renewable. On that note, it looks to me like AI is finally going to be the impetus to get the U.S. to invest in and switch to nuclear power – isn’t that altogether a good thing for the environment?

        • Thorry84@feddit.nl
          link
          fedilink
          English
          arrow-up
          5
          ·
          edit-2
          7 days ago

          Well, that’s sort of half right. Yes, you can run the smaller models locally, but usually it’s the bigger models that we want to use. It would also be very slow on a typical gaming computer, and even on a high-end one. To make it go faster, the hardware used in datacenters is both more optimised for the task and simply faster, per unit and in the number of units deployed, than what you would normally find in a gaming PC.

          Now these things aren’t magic, the basic technology is the same, so where does the speed come from? The answer is raw power, these things run insane amounts of power through them, with specialised cooling systems to keep them cool. This comes at the cost of efficiency.

          So whilst running a model is much cheaper compared to training a model, it is far from free. And whilst you can run a smaller model on your home PC, it isn’t directly comparable to how it’s used in the datacenter. So the use of AI is still very power hungry, even when not counting the training.

        • CheeseNoodle@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          7 days ago

          Yeh but those local models are usually pretty underpowered compared to the ones that run via online services, and are still more demanding than any game.

      • brucethemoose@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        edit-2
        7 days ago

        Not at all. Not even close.

        Image generation is usually batched and takes seconds, so 700W (a single H100 SXM) for a few seconds for a batch of a few images to multiple users. Maybe more for the absolute biggest (but SFW, no porn) models.

        LLM generation takes more VRAM, but is MUCH more compute-light. Typically one has banks of 8 GPUs in multiple servers serving many, many users at once. Even my lowly RTX 3090 can serve 8+ users in parallel with TabbyAPI (and modestly sized model) before becoming more compute bound.

        So in a nutshell, imagegen (on an 80GB H100) is probably more like 1/4-1/8 of a video game at once (not 8 at once), and only for a few seconds.

        Text generation is similarly efficient, if not more so. Responses take longer (many seconds, except on special hardware like Cerebras CS-2s), but it’s parallelized over dozens of users per GPU.


        This is excluding more specialized hardware like Google’s TPUs, Huawei NPUs, Cerebras CS-2s and so on. These are clocked far more efficiently than Nvidia/AMD GPUs.


        …The worst are probably video generation models. These are extremely compute intense and take a long time (at the moment), so you are burning like a few minutes of gaming time per output.

        ollama/sd-web-ui are terrible analogs for all this because they are single user, and relatively unoptimized.
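
        The effect of batching on per-user cost can be sketched with toy numbers (illustrative only, not measurements of any real deployment):

```python
def per_user_gpu_seconds(batch_latency_s: float, batch_size: int) -> float:
    # One batched forward pass serves every request in the batch at once,
    # so effective GPU time per user falls roughly linearly with batch size
    return batch_latency_s / batch_size

# A hypothetical 4-second generation shared by 8 concurrent users
print(per_user_gpu_seconds(4.0, 8))  # 0.5
```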

        • FooBarrington@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          7 days ago

          I compared the TDP of an average high-end graphics card with the GPUs required to run big LLMs. Do you disagree?

            • FooBarrington@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              7 days ago

              They are, it’d be uneconomical not to use them fully the whole time. Look up how batching works.

              • Jakeroxs@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                3
                arrow-down
                2
                ·
                edit-2
                7 days ago

                I mean I literally run a local LLM, while the model sits in memory it’s really not using up a crazy amount of resources, I should hook up something to actually measure exactly how much it’s pulling vs just looking at htop/atop and guesstimating based on load TBF.

                Vs when I play a game and the fans start blaring and it heats up and you can clearly see the usage increasing across various metrics

                • PeriodicallyPedantic@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  7 days ago

                  He isn’t talking about locally, he is talking about what it takes for the AI providers to provide the AI.

                  To say “it takes more energy during training” entirely depends on the load put on the inference servers, and the size of the inference server farm.

                • MotoAsh@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  arrow-down
                  1
                  ·
                  7 days ago

                  One user vs a public service is apples to oranges and it’s actually hilarious you’re so willing to compare them.

                • FooBarrington@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  7 days ago

                  My guy, we’re not talking about just leaving a model loaded, we’re talking about actual usage in a cloud setting with far more GPUs and users involved.

    • PeriodicallyPedantic@lemmy.ca
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      7 days ago

      Right, but that’s kind of like saying “I don’t kill babies” while you use a product made from murdered baby souls. Yes, you weren’t the one who did it, but your continued use of it causes the babies to be killed.

      There is no ethical consumption under capitalism and all that, but I feel like there’s a line we’re crossing here. This fruit is hanging so low it’s brushing the grass.

      • jsomae@lemmy.ml
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        7 days ago

        Are you interpreting my statement as being in favour of training AIs?

        • PeriodicallyPedantic@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          7 days ago

          I’m interpreting your statement as “the damage is done so we might as well use it”
          And I’m saying that using it causes them to train more AIs, which causes more damage.

          • jsomae@lemmy.ml
            link
            fedilink
            English
            arrow-up
            1
            ·
            7 days ago

            I agree with your second statement. You have misunderstood me. I am not saying the damage is done so we might as well use it. I am saying people don’t understand that it is the training of AIs which is directly power-draining.

            I don’t understand why you think that my observation people are ignorant about how AIs work is somehow an endorsement that we should use AIs.

            • PeriodicallyPedantic@lemmy.ca
              link
              fedilink
              English
              arrow-up
              2
              ·
              7 days ago

              I guess.

              It still smells like an apologist argument to be like “yeah but using it doesn’t actually use a lot of power”.

              I’m actually not really sure I believe that argument either, through. I’m pretty sure that inference is hella expensive. When people talk about training, they don’t talk about the cost to train on a single input, they talk about the cost for the entire training. So why are we talking about the cost to infer on a single input?
              What’s the cost of running training, per hour? What’s the cost of inference, per hour, on a similarly sized inference farm, running at maximum capacity?

              • jsomae@lemmy.ml
                link
                fedilink
                English
                arrow-up
                1
                ·
                6 days ago

                Maybe you should stop smelling text and try reading it instead. :P

                Running an LLM in deployment can be done locally on one’s machine, on a single GPU, and in this case is like playing a video game for under a minute. OpenAI’s models are larger by a factor of 10 or more, so it’s maybe like playing a video game for 15 minutes (obviously this varies based on the response to the query).

                It makes sense to measure deployment usage marginally based on its queries for the same reason it makes sense to measure the environmental impact of a car in terms of hours or miles driven. There’s no natural way to do this for training though. You could divide training by the number of queries, to amortize it across its actual usage, which would make it seem significantly cheaper, but it comes with the unintuitive property that this amortization weight goes down as more queries are made, so it’s unclear exactly how much of the cost of training should be assigned to a given query. It might make more sense to talk in terms of expected number of total queries during the lifetime deployment of a model.
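
                One way to amortize it, as a sketch (all numbers are made up purely for illustration, not estimates of any real model):

```python
def amortized_kwh_per_query(training_kwh: float,
                            inference_kwh_per_query: float,
                            expected_total_queries: int) -> float:
    # Spread the one-off training cost evenly over the model's expected
    # lifetime queries; the training share shrinks as usage grows
    return inference_kwh_per_query + training_kwh / expected_total_queries

# Hypothetical: 10 GWh of training, 3 Wh per query, a billion lifetime queries
print(amortized_kwh_per_query(1e7, 0.003, 1_000_000_000))
```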

                • PeriodicallyPedantic@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  6 days ago

                  You’re way overcomplicating how it could be done. The argument is that training takes more energy:

                  Typically if you have a single cost associated with a service, you amortize that cost over the life of the service: you take the total energy consumption of training and divide it by the total number of user-hours spent doing inference, and compare that to the cost of a single user running inference for an hour (which they can estimate as their global inference energy consumption for an hour divided by the number of user-hours in that hour).
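The comparison sketched above, in Python with made-up figures just to show its shape (none of these numbers come from any real provider):

```python
# All numbers are invented for illustration.
training_energy_wh = 5e9           # assumed total training energy
lifetime_user_hours = 1e9          # assumed user-hours over the model's life
global_hourly_inference_wh = 2e6   # assumed worldwide inference energy in 1 hour
user_hours_in_that_hour = 1e6      # assumed user-hours served in that hour

# Training cost amortized per user-hour of service:
training_per_user_hour = training_energy_wh / lifetime_user_hours
# Inference cost for one user running for one hour:
inference_per_user_hour = global_hourly_inference_wh / user_hours_in_that_hour

print(training_per_user_hour, "Wh amortized training per user-hour")
print(inference_per_user_hour, "Wh inference per user-hour")
```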

                  If these are “apples to orange” comparisons, then why do people defending AI usage (and you) keep making the comparison?

                  But even if it were true that training is significantly more expensive than inference, or that they’re inherently incomparable, that doesn’t change the underlying observation that inference is still quite energy intensive, or the implicit value judgment that the energy spent isn’t worth the effect on society.

      • jsomae@lemmy.ml
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        2
        ·
        7 days ago

        how about, fuck capitalism? Have you lost sight of the goal?

        • MotoAsh@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          7 days ago

          What tools do you think capitalism is going to use to fuck us harder and faster than ever before?

            • MotoAsh@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              6 days ago

              Running a concept to its extreme just to try and dismiss it to sound smart is an entire damn logical fallacy. Why are you insisting on using fallacies that brainless morons use?

              Have you never heard of a straw man fallacy? That’s you. That’s what you’re doing.

      • jsomae@lemmy.ml
        link
        fedilink
        English
        arrow-up
        3
        ·
        7 days ago

        there is so much rage today. why don’t we uh, destroy them with facts and logic

        • Jakeroxs@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          7 days ago

          Hahaha, at this point even facts and logic are a rage-inducing argument. “My facts” vs “your facts”.

  • Dr. Moose@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    11
    ·
    edit-2
    7 days ago

    1 prompt averages ~1 Wh of electricity → a typical AC draws ~1,500 W, so one prompt ≈ 2.4 seconds of AC runtime.
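    The arithmetic checks out, taking the ~1 Wh/prompt estimate at face value:

```python
# Convert energy per prompt (Wh) into seconds of AC runtime.
PROMPT_ENERGY_WH = 1.0   # estimate cited above
AC_POWER_W = 1500.0      # typical AC draw

# 1 Wh = 3600 J, so runtime (s) = energy (J) / power (W)
seconds_of_ac = PROMPT_ENERGY_WH * 3600 / AC_POWER_W
print(seconds_of_ac)  # 2.4
```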

    Energy capacity is really not a problem first-world countries should face. We have this solved, and you’re just taking the bait of blaming normal dudes using minuscule amounts of power while billionaires fly private jets for afternoon getaways.

    • f314@lemmy.world
      link
      fedilink
      English
      arrow-up
      24
      arrow-down
      4
      ·
      7 days ago

      They are blaming the billionaires (or their companies), for making the thing nobody wanted so they can make money off of it. The guy making a five-breasted woman is a side effect.

      And sure, that one image only uses a moderate amount of power. But there still exist giant data centers for only this purpose, gobbling up tons of power and evaporating tons of water for cooling. And all this before considering the training of the models (which you’d better believe they’re doing continuously to try to come up with better ones).

      • Dr. Moose@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        21
        ·
        7 days ago

        Nobody wanted it according to whom? It’s literally the most used product of this century; stop deluding yourself.

        All data centers in the world combined use something like 5% of our energy now, and the value we get from computing far outweighs any spending we have here. You’re better off not buying more trash from Temu than complaining about software using electricity. This is ridiculous.

        • bridgeenjoyer@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          1
          ·
          7 days ago

          Where are you getting your false information? It’s certainly not the most used. And the reason it’s used at all is advertising and ownership of the media by the billionaire class, shoving the gibbity in our faces at every waking moment so people use it. They’re losing money like never before on AI.

          • Dr. Moose@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            2
            ·
            edit-2
            7 days ago

            This level of collective delusion is crazy. I don’t think any amount of stats will change your mind, so you’re clearly arguing in bad faith, but sure:

            https://explodingtopics.com/blog/chatgpt-users says 5.2B monthly visits, compared to Facebook’s 12.7B and Instagram’s 7.5B. ChatGPT is literally bigger than X.com already. That’s just one tool, and LLMs have direct integrations in phones and other apps.

            I really don’t understand the point of purposefully lying here. We can all hate billionaires together without the need for this weird anti-intellectual bullshit.

            • bridgeenjoyer@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              7 days ago

              I don’t agree that it’s the most used invention of this century. Also, typing “hi there gibbity” hardly counts as actually using the tool. If that’s what you mean, though, sure.

        • weststadtgesicht@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          3
          ·
          7 days ago

          People hate AI so much (for many good reasons!) that they can’t see or accept the truth: many, many people want to use it, not just “billionaires”.

    • InternetCitizen2@lemmy.world
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      7 days ago

      A lot of things are solved, but capitalism means that we need a profit motive to act. World hunger is another good example. We know how to make fertilizer and how to genetically alter crops to ensure we never have a crop failure. We have trains and refrigeration to take food anywhere we want. Pretty much every box we need to check to solve this problem has been checked. The places that have food problems largely suffer from poverty, which at this point is a polite way to say “I won’t make money, so I am okay with them starving.”

      • Dr. Moose@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        7 days ago

        I’m not sure what the point is here. If we don’t like LLMs and data centers using power, then we use existing strategies that work, like taxing their power use and subsidizing household power use, which, by the way, we’re already doing almost everywhere around the world in some form or another.

        Data centers are actually easier to negotiate and work with than factories or households, where energy margins are much more brittle. A datacenter employs maybe 5 people, and you can squeeze it with policy to match social expectations; you can’t do that with factories or households. So the datacenter energy problem is not that difficult, relatively speaking.

        • InternetCitizen2@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          7 days ago

          I am agreeing with you that the solutions exist, but the will to implement them is going to be the hard part. A big dampener is simply the profit motive. There is more money in siding with the data center than with the households. Are households okay with an increase in price? A data center is likely to manage that better, or even just pay a bribe to someone. I used food as another example of a problem that is solved: we can grow food without fail and build the rail to get it where it needs to go. We just don’t, because need does not match profit expectations. There are talks of building nuclear power for some data centers, but such talk would never happen for normal households.

          • Dr. Moose@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            7 days ago

            People definitely underestimate how cooperative big tech is relative to every other business, mostly because big tech has a lot of money and very few expenses, so friction is relatively a bigger bottleneck than in almost any other industry. So I still think that pressuring OpenAI into green energy is easier than pressuring Volvo (or any manufacturer), which is already really brittle and has huge negotiation leverage in the form of the jobs it provides.

            Take a look at any other business niche and no one’s committing to green energy other than big tech. As you said yourself, no other niche wants to build its own nuclear reactors to satisfy its green energy needs.

            I think it’s OK to hate on big tech because they’re billionaires, but people really lose sight here and complain about the wrong things, which distracts from much bigger real issues and paints the entire movement as idiots.

    • salmoura@lemmy.eco.br
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      2
      ·
      7 days ago

      A 12,000 BTU inverter split system at peak capacity requires less than 1,500 W to run. After it reaches equilibrium, it drops its power draw significantly.

  • burgerpocalyse@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    7 days ago

    i feel it would actually kill some people to just say: yes, AI uses a lot of power, with no other qualifying statements tacked on