

Where does slop start? If you use autocomplete and it is just adding a semicolon or some braces, is it slop? Is producing, character by character, what you would have written yourself slop?
How about using it for debugging?


I also want to say that Linus is still the one merging things into the kernel, and he is, ahm… opinionated?


Just to make sure I understand.
Genius.
Fun fact: she wasn’t a model; it was just a random snapshot taken at a Deftones party. There are some videos on YouTube about that story.


For others reading this:
ChatControl 1: allows scanning on a voluntary basis (voted down twice recently).
ChatControl 2: makes scanning mandatory.


Most smart people who have used LLMs report a constant temptation to just stop thinking and let the LLM do it. It is very easy to give in. Studies support this.


Until AI companies decide, or get pressured by politicians, to push a certain agenda, and there is no critical thinking left for people to realize it.


That’ll be 10 extra per user per month.
Isn’t the idea that the brand advertises the wearer?


Have you tried introducing unnecessary complexity?


Another Walmart moment.
All VPNs work on Linux because the protocols are open and standardized.


I once used that thing and couldn’t figure out why I was getting errors. After hours I realized Postman was sending headers I hadn’t specifically configured, and that caused it to fail.
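A minimal sketch of the same pitfall with Python’s requests library (which, like Postman, silently merges in default headers you never set; httpbin.org below is just a public echo service used for illustration):

```python
import requests

# A fresh session already carries default headers, even though we set none.
session = requests.Session()
print(dict(session.headers))
# -> {'User-Agent': 'python-requests/...', 'Accept-Encoding': 'gzip, deflate',
#     'Accept': '*/*', 'Connection': 'keep-alive'}

# Preparing a request through the session merges those defaults in.
req = requests.Request("GET", "https://httpbin.org/headers")
prepared = session.prepare_request(req)  # we configured no headers...
print(dict(prepared.headers))            # ...yet the defaults are all here

# While debugging, strip the hidden defaults to rule them out as the cause.
session.headers.clear()
bare = session.prepare_request(requests.Request("GET", "https://httpbin.org/headers"))
print(dict(bare.headers))  # now empty
```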


This may not be so easy: what protocol does the ISP use over the fibre? Honestly, the network card you will probably need might already draw more power than the modem.


I think you are making the mistake of attributing intent to an LLM. An LLM does not have intent: it takes the context and generates the statistically most likely tokens to come next. The biography is part of the context.
The fact that it gives different answers depending on the context comes down purely to how it was trained, and to there being no concept of “factual information”.
I’m not defending LLMs; this is just LLMs doing exactly what they were trained to do.
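A toy sketch of that point, with made-up bigram counts standing in for a real model:

```python
# Not a real LLM: generation is just "given the context, sample a
# statistically likely next token". The counts below are invented.
import random
from collections import Counter

# A tiny "model": how often each token followed another in a made-up corpus.
bigram_counts = {
    "the": Counter({"cat": 5, "dog": 3}),
    "cat": Counter({"sat": 4, "ran": 2}),
    "dog": Counter({"sat": 1, "ran": 3}),
    "sat": Counter({"down": 3, "quietly": 2}),
    "ran": Counter({"away": 4, "home": 1}),
}

def next_token(context):
    # No intent, no notion of truth: the distribution depends only on
    # the context, so changing the context changes the answer.
    counts = bigram_counts[context[-1]]
    tokens = list(counts.keys())
    weights = list(counts.values())
    return random.choices(tokens, weights=weights)[0]

context = ["the"]
for _ in range(3):
    context.append(next_token(context))
print(" ".join(context))  # e.g. "the dog ran away"
```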


I mean, this study literally says that poorly worded prompts give worse results. It makes sense too: if your prompt reads like a post from some conspiracy Facebook group with bad grammar, those are the posts it will try to emulate.


Just think about the fact that LLMs are basically trying to simulate Reddit posts, and then think again about using them.


Look up “Ice Planet Barbarians”. I forgot the author, but there are a lot of different kinds of barbarians, orcs, demons, etc. that are rough but have a soft spot, protecting the female lead heroically while taking her for themselves.


You were just boycotting before it was cool.


The nerve that connects the larynx to the brain in mammals goes down through the neck, around the aorta, and back up through the neck again.
In giraffes, it is the longest nerve in the animal kingdom. The signal delay is so long that giraffes are unable to make any complex sounds.
Source: the Inside Nature’s Giants documentary series, recommended.
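A back-of-the-envelope check on that delay claim (both numbers below are rough assumptions, not figures from the documentary):

```python
# Assumed nerve length and conduction velocity; real values vary.
nerve_length_m = 4.5        # assumed length of the giraffe's laryngeal nerve
conduction_m_per_s = 75.0   # assumed velocity for a myelinated motor fibre
delay_ms = nerve_length_m / conduction_m_per_s * 1000
print(f"one-way signal delay ≈ {delay_ms:.0f} ms")  # ≈ 60 ms
```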