

An extension of uBlock Origin. It does the same thing but also clicks on every ad before it removes it.
My dad has been talking about wanting something like AdNauseam for years; I was very happy when I found it. The extra mile would probably be to expand it with a VPN so it constantly spams clicks, clears the cache, switches IPs and obfuscates data. Now we just wait for someone with enough time to build it…
Happy I got AdNauseam after uBlock Origin. Deleted my Facebook a year ago; shit is an AI slopfest built upon the greed and manipulation of every part of the chain. DEF CON 31 has a good talk that brings this up: “Disenshittify or Die” by Cory Doctorow, can recommend watching it.
As I wrote in my comment, I have not read up on Deepseek; if this is true, it is definitely a step in the right direction.
I am not saying I expect any company of significant scale to follow OSI since, as you say, it is too high risk. I do still believe that if you cannot prove to me that your AI is not abusing artists or creators by using their art, and not using data non-consensually acquired from users of your platform, then you are not providing an ethical or moral service. This is my main concern with AI. Big tech keeps showing us, time and time again, that they really don’t care about these topics, and this needs to change.
IMO, AI today is developing and expanding far too fast for the general consumer to understand it, and by extension for the legal and justice systems as well. We need more laws governing how to handle AI and the data it uses and produces. We need more education on what AI is actually doing.
The Open Source Initiative has defined what it believes constitutes “open source AI” (https://opensource.org/ai/open-source-ai-definition). This includes detailed descriptions of the training data and explanations of how it was obtained, selected, labeled, processed and filtered. As long as a company utilizes a model trained on non-specified data, I will assume that data is either stolen or otherwise unlawfully obtained from non-consenting users.
I will be clear that I have not read up on Deepseek yet, but I have a hard time believing their training data is specified according to the OSI definition, since no big model has done so yet. Releasing the model source code means little for AI compared to all of its training data.
I just like the analogy of a dashboard with knobs: input text on one side, output text on the other. “Training” an AI is simply letting the knobs adjust themselves based on feedback on the output. The AI never “learns”; it only produces output based on how the knobs are dialed in. It’s not a magic box, it’s just a lot of settings converting data to new data.
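The knob analogy can be shown with a toy sketch. This is a hypothetical single-knob example (real models have billions of knobs and use backpropagation), but the mechanism is the same: nudge the knob based on feedback about the output, and nothing else is "learned":

```python
# Toy version of the "dashboard of knobs" analogy: one knob (a weight w)
# converts input numbers to output numbers, and "training" just nudges
# the knob based on feedback (the error in the output).

def train_knob(pairs, steps=200, lr=0.01):
    """Adjust a single knob w so that w * x approximates y for each (x, y) pair."""
    w = 0.0  # the knob starts at an arbitrary position
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y   # feedback: how far off the output was
            w -= lr * error * x # nudge the knob to reduce that error
    return w

# The data follows y = 3x, so the knob settles near 3.
data = [(1, 3), (2, 6), (3, 9)]
print(round(train_knob(data), 2))  # prints 3.0
```

Once trained, the knob never changes again unless you feed it more feedback; producing output is just applying `w * x` with whatever setting the knob ended up at.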
This is just pure insanity. How can you possibly try to tell women that the real choice is not having a choice? Even worse, this is being sneaked in, as the bill proposed to the House does not mention all of the stuff in the PWHC booklet.
Probably cheaper than the Shit 2
Except it doesn’t in Finnish, which is where Linus Torvalds is from. Linus and Linux are pronounced the same except for the final consonant.
The more time you spend glued to your screen, the less you notice slow changes. I assume this is part of why user retention is so important…