Meanwhile I’m downtown in my city cleaning windows in office buildings that are 75% empty, but the heat or AC is blasting on completely empty floors and most of the lights are on.
The HVAC does serve a purpose: it reduces the moisture in the building, which would otherwise ruin it.
Worse is Google that insists on shoving a terrible AI-based result in your face every time you do a search, with no way to turn it off.
I’m not telling these systems to generate images of cow-like girls, but I’m getting AI shoved in my face all the time whether I want it or not. (I don’t).
Then I guess it’s time to stop using Google!
And including the word “fuck” in your query no longer stops it.
And when it did it also altered the results, making them worse, because it was trying to satisfy “fuck” as part of your search.
If you can’t search for “fuck,” you can’t search for “fuck google.”
With apologies to Lenny Bruce.
Well fuck…
I am trying to understand what Google’s motivation for this even is. Surely it is not profitable to be replacing their existing, highly lucrative product with an inferior alternative that eats up way more power?
Their motivation is always ads. The AI response is longer and takes time to read, so you spend more time looking at their ads. If the answer is sufficient, you might not even click away to the search result.
AI is a potentially huge bonanza for search sites, letting them suck up the ad revenue that used to go to the search results.
They don’t want to direct you to the thing you’re searching for anymore because that means you’re off their site quickly. Instead they want to provide themselves whatever it is you were searching for, so you will stay on their site and generate ad money. They don’t care if their results are bad, because that just means you’ll stick around longer, looking for an answer.
To make search more lucrative, they enshittified it and went too far, but for a short time there were great quarterly results. Now they’re slowly losing users. So they try AI to fix it up.
It’s also a signal to the shareholders that they’re implementing the latest buzzword, plus they’re all worried AI will take off and they’ve missed that train.
Someone posted here a while ago - if you use the URL https://www.google.com/search?q=%25s&udm=14 it doesn’t include the AI search. I’ve updated my Google search links to use that instead of the base Google URL.
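For anyone setting that up as a custom search engine: the %25s is just the percent-encoded form of the %s placeholder browsers substitute with your query, and udm=14 is (as I understand it) what asks for the plain web results view. A rough, purely illustrative sketch of what that template expands to:

```python
from urllib.parse import unquote, quote_plus

# "%25s" is the percent-encoded form of "%s", the placeholder most browsers
# use in custom search engine URLs.
template = unquote("https://www.google.com/search?q=%25s&udm=14")
# -> "https://www.google.com/search?q=%s&udm=14"

def build_search_url(query: str) -> str:
    # udm=14 requests Google's plain "Web" results view, skipping the AI overview
    return template.replace("%s", quote_plus(query))

print(build_search_url("fuck google"))
# https://www.google.com/search?q=fuck+google&udm=14
```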
You can also use alternatives like startpage and ecosia which use google results, I believe.
Both of which are probably training their own AI as middlemen, or stealing your search terms to tell Walmart what type of peanut butter you’re most likely to buy if they could lock it up on a plastic-covered shelf.
Probably, but neither automatically opt into AI replies. Ecosia has an AI chat, but it doesn’t run until you go to it. Startpage has no AI option that I can see.
Ecosia has the upside of planting trees depending on user search rate. Not sure how true that is, though. I prefer startpage either way. Startpage claims to be privacy first, and I’ve never received tailored results or ads.
That doesn’t mean they don’t sell info. We can’t know that for sure, but it sure as hell beats using Google and its automatic AI searching.
Firefox has a plugin that blocks the AI results. It works pretty well most of the time, but it occasionally has hiccups when Google updates stuff or something.
I’ll have to look for this, thanks!
I have to look stuff up for medical school (usually trying to find studies and whatnot), so the Gemini results are just obnoxious garbage to me.
Piling on to the google alternatives heap: https://searx.space/
You can pick a public instance of searxng and choose which engines it queries by going to the setting cog, then Engines. A few of these public instances I’ve checked out have only google enabled, though, so you really do need to check the settings.
If you want to add a searxng instance as your default engine and your browser doesn’t automatically do it, the URL for that is: https://<searxng_url>/search?q=%s
I have to add this manually for things like ironfox/firefox mobile.
There is a way to “turn it off” with some search parameters. However there is no guarantee that the AI is not consuming resources at the backend.
Also the search parameters are undocumented internal things that can change or be disabled at any time.
Yeah, that thing that nobody wanted? Everybody has to have it. Fuck corporations and capitalism.
Just like screens in cars, and MASSIVE trucks. We don’t want this. Well, some dumbass Americans do, but intelligent people don’t need a 32 ton 6 wheel drive pickup to haul jr to soccer.
Massive trucks? They need those trucks for truck stuff, like this giant dilhole parking with his wife to go to Aldi today. Not even a flag on the end of that ladder, it filled a whole spot by itself.
My couch wouldn’t fit in that bed, and every giant truck I see is sparkling shiny and looks like it hasn’t done a day of hard labor, much like the drivers.
You underestimate the number of people you wouldn’t class as intelligent. If no one wanted massive trucks, they would have disappeared off the market within a couple of years because they wouldn’t sell. They’re ridiculous, inefficient hulks that basically no one really needs but they sell, so they continue being made.
It’s actually because small trucks were regulated out of the US market. Smaller vehicles have more stringent mileage standards that trucks aren’t able to meet. That forces companies to make all their trucks bigger, because bigger vehicles are held to a different standard.
So the people who want or need a truck are pushed to buy a larger one.
They can meet them. But the profit margin is slimmer than if they use the giant frame.
Do you have any data to support that this is actually the case? I see this claim all the time but with absolutely zero evidence beyond a 2015 Axios survey with no methodology or dataset. Nearly every article cites this one industry group’s three questions, which clearly aren’t mutually exclusive categories and could be picked apart by a high school student.
I ask this question nearly every time I see this comment, and in 5 years I have not found a single person who can actually cite where this came from or give a complete explanation of how they even got to that conclusion.
The truck owners I know, myself included, use them all the time for towing and like the added utility of having the bed as a secondary feature.
Then you put it beside a truck from 30 years ago that’s a quarter the overall size but has the same bed capacity and towing power along with much better visibility instead of not being able to see the child you’re about to run over. And then you understand what people mean when they say massive trucks - giant ridiculously unnecessary things that are all about being a status symbol and dodging regulations rather than practicality.
Absolutely 100% incorrect on towing. The top ’95 F-150 towed about 7,700 lbs compared to 13,500 lbs today; that was F-350 territory in ’95. It’ll also fit a family of four comparably to a full-size sedan, eliminating any need for a secondary vehicle. The old F-150s/1500s were miserable in the back.
As for safety, I find the argument disingenuous and not based on reality. Roughly 160 kids were killed in 2023 in the EU-27; it was about 220 in the US. Much of that could be correlated with traffic density as well.
| Country / Region | Est. Fatalities/Year | Child Pop. (0–14) | Fatalities per Million |
|---|---|---|---|
| United States | ~225 | ~61 million | ~3.7 |
| United Kingdom | ~22 | ~11.5 million | ~1.9 |
| Canada | ~12 | ~6 million | ~2.0 |
| Australia | ~11 | ~4.8 million | ~2.3 |
| Germany | ~20 | ~11 million | ~1.8 |
| France | ~18 | ~11 million | ~1.6 |
| Japan | ~18 | ~15 million | ~1.2 |
| India | ~3,000 (est.) | ~360 million | ~8.3 |
| Brazil | ~450 | ~50 million | ~9.0 |
| European Union (EU-27) | ~140–160 | ~72 million | ~2.0–2.2 |
I think we should offer incentives for manufacturers to start reducing size and weight, but the things you are saying here aren’t really based on any data, nor are they what I was asking.
I just wish I could find one person to show me what they are referencing when they repeat that seemingly false fact.
Let me express it to you with some numbers… The US is ~3.81 million square miles in size.
The F150 has sold 8.810 million units in the US in the last 10 years.
There are ~ 2.3 F150s fewer than 10 years old for every square mile in this country.
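Quick sanity check on that division, using the same figures (a trivial sketch, nothing more):

```python
us_land_area_sq_mi = 3.81e6  # ~3.81 million square miles
f150_units_sold    = 8.81e6  # ~8.810 million F-150s sold in the US over the last 10 years

print(round(f150_units_sold / us_land_area_sq_mi, 1))  # -> 2.3 F-150s per square mile
```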
There is no way the majority of those trucks are going to job sites, or hauling junk, or pulling a trailer; just look around. That’s not even all trucks. That’s just one model, from one brand, over a single 10-year period.
These trucks are primarily sold as a vanity vehicle, and a minivan alternative, and that’s what I think when I see one.
Nonsense. This is some Trump style math.
No, trump style math would be saying that The number of Trucks Towing has gone DOWN 400% PERCENT after the EVIL AMERICA HATING COMMUNIST Dems elected a soon-to-be-illegal Migrant Gang member as Mayor of New York NYC.
Oh, and you don’t want it and want the stupid model? You can still buy it for 3x the price.
Do the new models even have non-“smart” fittings? I thought all the electronic chip plants closed during covid.
Also they can build nuclear power generators for the data centers but never for the residential power grid.
There’s no money in selling residential energy.
When I’m told there’s power issues and to conserve power I drop my AC to 60 and leave all my lights on. Only way for them to fix the grid is to break it.
Literally rolling coal to own the cons
wow based
Could someone please help me save some power and just post the image with the 5tits so I don’t need to have it regenerated de novo?
We’re going away folks, and nothing of any true value will be lost, except all the species that did live in homeostasis with the Earth that we’re taking with us in our species’ avarice induced murder-suicide
Carlin had some good material, but this is an absolutely stupid mindset. We can cause an extreme level of ecological damage. Will the planet eventually recover? Quite possibly. But that’s not a certainty, and in the meantime we’re triggering a mass extinction precisely because irresponsible humans figure there’s no way we can hurt the Earth and it’s self-important hubris to think that we can.
But the time we’re living through and the time we’re heading into are all the proof we should need that it’s actually hubris to assume our actions have no meaningful impact.
Immediate to Short-Term (Days to Centuries)
- Hours to weeks: Power grids fail; nuclear reactors melt down without maintenance[11].
- Months to decades: Urban areas flood as drainage systems fail; buildings decay from weather and plant growth[6][11].
- 100–300 years: Steel structures collapse; concrete buildings crumble[5][7]. Most cities become overgrown forests[6].
Medium-Term (Thousands of Years)
- 1,000 years: Visible surface structures (e.g., roads, monuments) are buried or eroded. Plastics fragment but persist chemically[5][7].
- 10,000–250,000 years: Nuclear isotopes (e.g., plutonium-239) remain detectable in sediments and ice cores[7]. Mining tunnels fill with sediment but leave identifiable “industrial fossils”[7].
- 500,000 years: Microplastics and polymer layers in ocean sediments endure[5][10].
Long-Term (Millions of Years)
- 1–7 million years: Fossils of humans and domesticated animals persist. Geological strata show elevated carbon levels and mass extinction markers[4][8]. Deep mines and landfills remain as distinct layers[7][10].
- 50–100 million years: Continental drift subducts surface evidence; satellites decay or drift into space[3][10]. Only deep geological traces (e.g., mine shafts, isotope ratios) might endure[3][10].
- 250 million years: Next predicted mass extinction eradicates all mammals, including any remaining human traces[9].
Near-Permanent Traces
- Space artifacts: Lunar landers, Mars rovers, and Voyager probes persist for billions of years[3][10].
- Radio signals: Human broadcasts travel through space indefinitely at light speed[5].
Key Factors
- Detection likelihood: Aliens or future species could find traces for 100+ million years via deep geological analysis or space exploration[5][10].
- Total erasure: Requires Earth’s destruction (e.g., solar expansion in 5 billion years)[10].
Citations:
- [1] Human extinction https://en.wikipedia.org/wiki/Human_extinction
- [2] What If Humans Suddenly Went Extinct? https://www.youtube.com/watch?v=yuOKTZISXhc
- [3] How long would it take for all traces of humans to be gone? https://www.reddit.com/r/answers/comments/1azu120/how_long_would_it_take_for_all_traces_of_humans/
- [4] What would happen to Earth if humans went extinct? https://www.livescience.com/earth-without-people.html
- [5] How long before all human traces are wiped out? https://www.newscientist.com/lastword/2215950-how-long-before-all-human-traces-are-wiped-out/
- [6] Vanishing Act: What Earth Will Look Like 100 Years After Humans Disappear https://brilliantio.com/if-people-dissapeared-what-will-happen-to-earth-in-100-years/
- [7] If humans became extinct, how long would it take for all … https://www.sciencefocus.com/science/if-humans-became-extinct-how-long-would-it-take-for-all-traces-of-us-to-vanish
- [8] Nature will need up to five million years to fill the gaps caused by man-made mass extinctions, study finds https://www.independent.co.uk/climate-change/news/mass-extinctions-five-million-years-nature-mammals-crisis-animal-plants-pnas-aarhus-a8585066.html
- [9] Humans Will Go Extinct on Earth in 250 Million Years; Mass Extinction Will Occur Sooner if Burning Fossil Fuels Continues [Study] https://www.sciencetimes.com/articles/49951/20240430/humans-will-go-extinct-earth-250-million-years-mass-extinction.htm
- [10] How long would it take for all evidence of humanity to be … https://worldbuilding.stackexchange.com/questions/153618/how-long-would-it-take-for-all-evidence-of-humanity-to-be-erased-from-earth
- [11] What Would Happen If Every Human On Earth Just Disappeared? https://www.scienceabc.com/humans/life-like-humans-suddenly-disappeared.html
Stephen Baxter over here. I’m going back to bed, it won’t change anything…
We do have an impact but the earth will 100% be ok when we are dead and gone eventually. A million years ain’t shit to the earth.
We humans are a virus…a parasite and the earth will be better off once we are extinct.
I’ve been trying to write this comment as concisely as possible; I’m trying my best. “We’re going away,” yes, that’s true. No matter what, we are leaving this place, but that doesn’t mean the last days of humanity have to be surrounded by pollution and trash. All I can take from that quote in the image is that we should let big companies shit on us till we die.
“let”
The sociopath fascist capitalists won. They have multilayered protection from us, from endless propaganda dividing us and turning us on one another, to government capture, to having the exclusive use of state violence to protect the capital markets, to literally having state-sanctioned murder for private profit. This isn’t a war, this is a well-oiled Orwellian occupation. The people surrendered half a century ago without terms and received the delusion that they’ll be the rich ones one day, herp derp.
We can’t do anything about the misery they spread, and that sucks. We don’t have to add to our misery by pretending there’s hope and we can turn any of it around. They’re going to do what they’re going to do, and short of being a lone “terrorist” who takes a pot shot at them like Luigi, there’s nothing to be done, because half of us are too cowardly and social-opiate addicted (fast food, social media, literal opiates, etc.), or too deluded and actually on the robber barons’ side out of pick-me mentality, to take it as a rallying cry.
Only the planet itself, our shared habitat, can stop them. And it will, regardless of all the studies they kill, the ecological treaties they betray, and all the propaganda they spread. The capitalists’ reign of terror will end when enough of their suckers are starving, not before.
Can’t be soon enough, really. I’d rather we take fewer other species along with us.
Laughs in total recall
Classic neo-liberalism - privatize the benefits, socialize the costs.
Corporations : “We should get to gobble all power with our projects… and you should have the personal responsibility to reduce power usage even though it would - at best - only improve things at the very edges of the margins… and then we can get away with whatever we want.”
Just like with paper straws. You get crappy straws and they hope you feel like you’re helping the environment (even though the plastic straws account for like 0.00002% of plastic waste generated) … meanwhile 80% of the actual pollution and waste being generated by like 12 corporations gets to continue.
I feel like I’ve read a very similar argument somewhere recently, but I have difficulty remembering it precisely. It went something like this:
- If a company kills 5 people, it was either an accident, an unfortunate mishap, a necessity of war (in case of the weapons industry) or some other bullshit excuse.
- If the people threaten to kill 5 billionaires, they’re charged with “terrorism” (see Luigi Mangione’s case).
Let’s not forget billionaires in this consideration.
I tried to make an image of a woman with 5 tits but got distracted and got married to a rock
That looks like fun. How do I play that AI?
It’s from https://perchance.org/welcome and is super cool because it’s like half a soul-less AI and half a super cool tool that gets people into programming and they actually care about the Internet because they encourage people to learn how to code their own ais and have fun with it and I would absolutely have DEVOURED it when I was 13 on Tumblr (I forgot my ADHD meds today sorry if I’m rambling)
Didn’t some legislation come out banning making laws against AI? (which I realize is a fucking crazy sentence in the first place- nothing besides rights should just get immunity to all potential new laws)
So the cities aren’t even the bad guys here. The Senate is.
From what I can tell it got stripped from the Senate version that was just approved. They barely have the votes to pass it, so they aren’t going to play volleyball to add it back.
It’s both. Also don’t let the house, supreme court, or the orange buffoon and his cabinet get out of culpability. Checks and balances can work … when they all aren’t bought and paid for by rich fucks.
I meant to mention the other ones at fault, but I edited what I was typing and backspaced that part.
Thanks
I know she’s exaggerating, but this post yet again underscores how nobody understands that it’s the training of AI models that is computationally expensive. Deploying an AI model has a power draw comparable to running a high-end video game. How can people hope to fight back against things they don’t understand?
She’s not exaggerating, if anything she’s undercounting the number of tits.
Well you asked for six tits but you’re getting five. Why? Because the AI is intelligent and can count, obviously.
I mean, continued use of AI encourages the training of new models. If nobody used the image generators, they wouldn’t keep trying to make better ones.
you are correct, and also not in any way disagreeing with me.
I try lol.
TBH most people still use old SDXL finetunes for porn, even with the availability of newer ones.
It’s closer to running 8 high-end video games at once. Sure, from a scale perspective it’s further removed from training, but it’s still fairly expensive.
nice name btw
really depends. You can locally host an LLM on a typical gaming computer.
You can, but that’s not the kind of LLM the meme is talking about. It’s about the big LLMs hosted by large companies.
True, and that’s how everyone who is able should use AI, but OpenAI’s models are in the trillion parameter range. That’s 2-3 orders of magnitude more than what you can reasonably run yourself
This is still orders of magnitude less than what it takes to run an EV, which is an eco-friendly form of carbrained transportation, especially if you live in an area where the power source is renewable. On that note, it looks to me like AI is finally going to be the impetus to get the U.S. to invest in and switch to nuclear power – isn’t that altogether a good thing for the environment?
lol just purely wrong on that one. Hilarious.
Well that’s sort of half right. Yes, you can run the smaller models locally, but usually it’s the bigger models that we want to use. It would also be very slow on a typical gaming computer, or even a high-end one. The hardware used in datacenters is not only more optimised for the task, it’s also simply faster. This is both a speed increase per unit as well as more units being used than you would normally find in a gaming PC.
Now these things aren’t magic; the basic technology is the same, so where does the speed come from? The answer is raw power: these things run insane amounts of power through them, with specialised cooling systems to keep them cool. This comes at the cost of efficiency.
So whilst running a model is much cheaper compared to training a model, it is far from free. And whilst you can run a smaller model on your home PC, it isn’t directly comparable to how it’s used in the datacenter. So the use of AI is still very power hungry, even when not counting the training.
Yeh but those local models are usually pretty underpowered compared to the ones that run via online services, and are still more demanding than any game.
Not at all. Not even close.
Image generation is usually batched and takes seconds, so 700W (a single H100 SXM) for a few seconds for a batch of a few images to multiple users. Maybe more for the absolute biggest (but SFW, no porn) models.
LLM generation takes more VRAM, but is MUCH more compute-light. Typically one has banks of 8 GPUs in multiple servers serving many, many users at once. Even my lowly RTX 3090 can serve 8+ users in parallel with TabbyAPI (and modestly sized model) before becoming more compute bound.
So in a nutshell, imagegen (on an 80GB H100) is probably more like 1/4-1/8 of a video game at once (not 8 at once), and only for a few seconds.
Text generation is similarly efficient, if not more so. Responses take longer (many seconds, except on special hardware like Cerebras CS-2s), but it’s parallelized over dozens of users per GPU.
This is excluding more specialized hardware like Google’s TPUs, Huawei NPUs, Cerebras CS-2s and so on. These are clocked far more efficiently than Nvidia/AMD GPUs.
…The worst are probably video generation models. These are extremely compute intense and take a long time (at the moment), so you are burning like a few minutes of gaming time per output.
ollama/sd-web-ui are terrible analogs for all this because they are single user, and relatively unoptimized.
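To put very rough numbers on that: the 700 W figure is from above, but the batch size and seconds per batch below are assumptions for illustration only, not measurements.

```python
# Back-of-envelope energy per generated image on a single H100 SXM, batched.
gpu_watts         = 700   # H100 SXM draw under load (figure quoted above)
seconds_per_batch = 4     # assumed: "a few seconds" per batch
images_per_batch  = 4     # assumed: "a batch of a few images"

wh_per_image = gpu_watts * seconds_per_batch / images_per_batch / 3600
print(f"~{wh_per_image:.2f} Wh per image")                 # ~0.19 Wh

# For comparison: a ~350 W gaming GPU running for one minute
print(f"~{350 * 60 / 3600:.1f} Wh per minute of gaming")   # ~5.8 Wh
```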
How exactly did you come across this “fact”?
I compared the TDP of an average high-end graphics card with the GPUs required to run big LLMs. Do you disagree?
I do, because they’re not at full load the entire time it’s in use
They are, it’d be uneconomical not to use them fully the whole time. Look up how batching works.
I mean, I literally run a local LLM. While the model sits in memory it’s really not using up a crazy amount of resources. I should hook up something to actually measure exactly how much it’s pulling, vs just looking at htop/atop and guesstimating based on load, TBF.
Vs when I play a game and the fans start blaring and it heats up and you can clearly see the usage increasing across various metrics
He isn’t talking about locally, he is talking about what it takes for the AI providers to provide the AI.
Whether “it takes more energy during training” holds entirely depends on the load put on the inference servers, and the size of the inference server farm.
One user vs a public service is apples to oranges and it’s actually hilarious you’re so willing to compare them.
My guy, we’re not talking about just leaving a model loaded, we’re talking about actual usage in a cloud setting with far more GPUs and users involved.
Right, but that’s kind of like saying “I don’t kill babies” while you use a product made from murdered baby souls. Yes, you weren’t the one who did it, but your continued use of it caused the babies to be killed.
There is no ethical consumption under capitalism and all that, but I feel like there’s a line we’re crossing here. This fruit is hanging so low it’s brushing the grass.
“The plane is flying, anyway.”
Are you interpreting my statement as being in favour of training AIs?
I’m interpreting your statement as “the damage is done so we might as well use it”
And I’m saying that using it causes them to train more AIs, which causes more damage.
I agree with your second statement. You have misunderstood me. I am not saying the damage is done so we might as well use it. I am saying people don’t understand that it is the training of AIs which is directly power-draining.
I don’t understand why you think that my observation people are ignorant about how AIs work is somehow an endorsement that we should use AIs.
I guess.
It still smells like an apologist argument to be like “yeah but using it doesn’t actually use a lot of power”.
I’m actually not really sure I believe that argument either, though. I’m pretty sure that inference is hella expensive. When people talk about training, they don’t talk about the cost to train on a single input, they talk about the cost of the entire training run. So why are we talking about the cost to infer on a single input?
What’s the cost of running training, per hour? What’s the cost of inference, per hour, on a similarly sized inference farm, running at maximum capacity?
Maybe you should stop smelling text and try reading it instead. :P
Running an LLM in deployment can be done locally on one’s machine, on a single GPU, and in this case is like playing a video game for under a minute. OpenAI’s models are larger than that by a factor of 10 or more, so it’s maybe like playing a video game for 15 minutes (it obviously varies based on the response to the query).
It makes sense to measure deployment usage marginally based on its queries for the same reason it makes sense to measure the environmental impact of a car in terms of hours or miles driven. There’s no natural way to do this for training though. You could divide training by the number of queries, to amortize it across its actual usage, which would make it seem significantly cheaper, but it comes with the unintuitive property that this amortization weight goes down as more queries are made, so it’s unclear exactly how much of the cost of training should be assigned to a given query. It might make more sense to talk in terms of expected number of total queries during the lifetime deployment of a model.
You’re way overcomplicating how it could be done. The argument is that training takes more energy:
Typically, if you have a single cost associated with a service, you amortize that cost over the life of the service: so you take the total energy consumption of training and divide it by the total number of user-hours spent doing inference, and compare that to the cost of a single user running inference for an hour (which they can estimate by dividing their global inference energy consumption for an hour by the number of user-hours served in that hour).
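A minimal sketch of that comparison; every number below is made up purely to show the shape of the calculation, not a real measurement of any provider:

```python
# Hypothetical figures, only to illustrate the amortization argument above.
training_energy_wh  = 1_000_000_000  # assumed total training energy (1 GWh)
lifetime_user_hours = 50_000_000     # assumed user-hours of inference over the model's life
farm_power_w        = 5_000_000      # assumed inference-farm draw at full load (5 MW)
user_hours_per_hour = 100_000        # assumed user-hours served per wall-clock hour

amortized_training = training_energy_wh / lifetime_user_hours  # Wh per user-hour
marginal_inference = farm_power_w / user_hours_per_hour        # Wh per user-hour

print(f"training, amortized: ~{amortized_training:.0f} Wh per user-hour")  # ~20
print(f"inference, marginal: ~{marginal_inference:.0f} Wh per user-hour")  # ~50
```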
If these are “apples to orange” comparisons, then why do people defending AI usage (and you) keep making the comparison?
But even if it were true that training is significantly more expensive than inference, or that they’re inherently incomparable, that doesn’t actually change the underlying observation that inference is still quite energy intensive, or the implicit value statement that the energy spent isn’t worth the effect on society.
How about, fuck AI, end story.
how about, fuck capitalism? Have you lost sight of the goal?
What tools do you think capitalism is going to use to fuck us harder and faster than ever before?
All of them at their disposal, we should get rid of all tools
Running a concept to its extreme just to try and dismiss it to sound smart is an entire damn logical fallacy. Why are you insisting on using fallacies that brainless morons use?
Have you never heard of a straw man fallacy? That’s you. That’s what you’re doing.
So mad at a joke
Still dodging any actual point. Have fun being too stupid to actually engage with a discussion. Genuinely pitiful of you.
I did, as a matter of fact, fuck AI.
You thought blind anger came from well informed opinions?
But then the rage machine couldn’t rage
there is so much rage today. why don’t we uh, destroy them with facts and logic
Hahaha at this point even facts and logic is a rage inducing argument. “My facts” vs “Your facts”
1 prompt averages ~1 Wh of electricity; a typical AC draws ~1,500 W, so that works out to about 2.4 seconds of AC per prompt.
Energy capacity is really not a problem first-world countries should face. We have this solved, and you’re just taking the bait of blaming normal dudes using minuscule amounts of power while billionaires fly private jets for afternoon getaways.
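The arithmetic behind that, with the AC draw as a parameter (the 1,500 W figure is the peak draw mentioned; the lower steady-state draw is just an assumed example):

```python
def seconds_of_ac(prompt_wh: float = 1.0, ac_watts: float = 1500.0) -> float:
    # 1 Wh = 3,600 J; dividing by the AC's power draw in watts gives seconds of runtime
    return prompt_wh * 3600 / ac_watts

print(seconds_of_ac())               # 2.4 s at a 1,500 W peak draw
print(seconds_of_ac(ac_watts=750))   # ~4.8 s at an assumed ~750 W steady-state draw
```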
They are blaming the billionaires (or their companies), for making the thing nobody wanted so they can make money off of it. The guy making a five-breasted woman is a side effect.
And sure, that one image only uses a moderate amount of power. But there still exists giant data centers for only this purpose, gobbling up tons of power and evaporating tons of water for power and cooling. And all this before considering the training of the models (which you better believe they’re doing continuously to try to come up with better ones).
“Nobody wanted,” according to whom? It’s literally the most used product of this century. Stop deluding yourself.
All datacenters in the world combined use like 5% of our energy now and the value we get from computing far outweighs any spending we have here. You’re better off not buying more trash from Temu rather than complain about software using electricity. This is ridiculous.
Where are you getting your false information? It’s certainly not the most used. And the reason it’s used at all is the advertising and ownership of the media by the billionaire class, shoving the gibbity in our faces at every waking moment so people use it. They’re losing money like never before on AI.
This level of collective delusion is crazy. I don’t think any amount of stats will change your mind, so you’re clearly arguing in bad faith, but sure:
https://explodingtopics.com/blog/chatgpt-users says 5.2B monthly visits, compared to Facebook’s 12.7B and Instagram’s 7.5B. ChatGPT is literally bigger than X.com already. That’s just one tool, and LLMs have direct integrations in phones and other apps.
I really don’t understand what’s the point of purposefully lying here? We can all hate billionaires together without the need for this weird anti-intellectual bullshit.
I don’t agree that it’s the most used invention this century. Also, typing “hi there gibbity” hardly counts as actually using the tool. If that’s what you mean, though, sure.
Nah dawg im pretty sure the most used product this century is food.
the most used product of this century is actually your momma
People hate AI so much (for many good reasons!) that they can’t see or accept the truth: many many people want to use it, not just “billionaires”
A lot of things are solved, but capitalism means that we need a profit motive to act. World hunger is another good example. We know how to make fertilizer and how to genetically alter crops to ensure we never have a crop failure. We have trains and refrigeration to take food anywhere we want. Pretty much every box that needs checking to solve this problem has been checked. The places that have food problems largely have them because of poverty, which at this point is a polite way to say “I won’t make money, so I am okay with them starving.”
I’m not sure what the point is here. If we don’t like LLMs and data centers using power, then we use existing strategies that work, like taxing their power use and subsidizing household power use, which, by the way, we’re already doing almost everywhere around the world in some form or another.
The data centers are actually easier to negotiate and work with than something like factories or households, where energy margins are much more brittle. A datacenter employs like 5 people, and you can squeeze it with policy to match social expectations; you can’t do that with factories or households. So the datacenter energy problem is not that difficult, relatively speaking.
I am agreeing with you that the solutions exist, but the will to implement them is going to be the hard part. A big dampener is simply going to be the profit motive. There is more money in siding with the data center than with the households. Are households okay with an increase in price? A data center is likely to manage that better, or even just pay a bribe to someone. I used food as another example of a problem that is solved: we can grow food without fail and build the rail to get it where it needs to go. We just don’t, because need does not match profit expectation. There are talks of building nuclear power for some data centers, but such talk would not happen for normal households.
People definitely underestimate how cooperative big tech is relative to every other business, mostly because big tech has a lot of money and very few expenses, so friction is relatively a bigger bottleneck than in almost any other industry. So I still think that pressuring OpenAI into green energy is easier than pressuring Volvo (or any manufacturer), which is already really brittle and has huge negotiation leverage in the form of the jobs it provides.
Take a look at any other business niche and no one’s committing to green energy other than big tech. As you said yourself, no other niche wants to build its own nuclear reactors to satisfy its own green energy needs.
I think it’s OK to hate on big tech because they’re billionaires, but people really lose sight here and complain about the wrong things, which distracts from the real, much bigger issues and paints the entire movement as idiots.
A 12,000 BTU inverter split system at peak capacity requires less than 1,500 W to run. After it reaches equilibrium, it drops the power requirement significantly.
ok so 5 seconds of AC then? my point still stands.
I feel it would actually kill some people to just say “yes, AI uses a lot of power,” with no other qualifying statements tacked on.