• 6 Posts
  • 26 Comments
Joined 2 months ago
Cake day: February 3rd, 2026


  • I’ve been running straight Ubuntu with ZFS on Linux since 18.04, and it has been smooth sailing. If you’re running a lot of containerized things, it’s very convenient to just bind mount ZFS datasets into containers.

    Normally I prefer CentOS/Rocky Linux or some other EL distribution, but in this case I really appreciate that Canonical isn’t purist enough to refuse to ship ZFS: it comes as a loadable kernel module that is guaranteed to be in sync with the shipped kernel, so you don’t have to deal with DKMS.
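    A minimal sketch of that dataset-into-container workflow (the pool name "tank", the dataset layout, and the Jellyfin container are all made-up examples):

```shell
# Create a dataset on an existing pool (pool "tank" is hypothetical)
zfs create -o mountpoint=/tank/media tank/media

# Bind mount the dataset's mountpoint into a container, e.g. with Docker
docker run -d --name jellyfin \
  -v /tank/media:/media:ro \
  jellyfin/jellyfin
```

    Since the dataset is just a directory on the host, snapshots and quotas keep working underneath the container without any special container-side support.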

  • It’s extremely unlikely that they are doing any kind of deep traffic inspection in the router/modem itself. Inspecting network traffic is very resource-intensive and gives very little value, since almost all traffic is encrypted (HTTPS) today; all major browsers even show scare warnings if a site uses plain unencrypted HTTP. Potentially they could track DNS queries, but you can mitigate that with DNS over TLS or DNS over HTTPS (for best privacy I would recommend Mullvad: https://mullvad.net/en/help/dns-over-https-and-dns-over-tls)
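    On a typical Linux box running systemd-resolved, DNS over TLS can be switched on with something like this (the Mullvad server address below is my assumption — check their help page for the current one):

```ini
# /etc/systemd/resolved.conf
[Resolve]
DNS=194.242.2.2#dns.mullvad.net
DNSOverTLS=yes
```

    After a `systemctl restart systemd-resolved`, DNS queries go out over port 853 encrypted, so the ISP box only sees TLS traffic to the resolver instead of your plaintext lookups.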

    And of course, make sure that anything you are self-hosting is encrypted and using proper HTTPS certificates. I would recommend exposing a reverse proxy like Nginx or Traefik; then you can route to different internal services over the same port based on hostname. Also make sure you have a good certificate from Let’s Encrypt.
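    For example, hostname-based routing in Nginx might look roughly like this (hostnames, ports, and certificate paths are placeholders, not a recommendation of specific services):

```nginx
# Two internal services behind one exposed HTTPS port, routed by hostname
server {
    listen 443 ssl;
    server_name photos.example.com;
    ssl_certificate     /etc/letsencrypt/live/photos.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/photos.example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8080;  # internal photo service
    }
}

server {
    listen 443 ssl;
    server_name media.example.com;
    ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:8096;  # internal media service
    }
}
```

    Nginx picks the `server` block whose `server_name` matches the SNI/Host header, so everything shares port 443 and only the proxy is exposed to the internet.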

  • Worth noting that despite the headline, this does not have anything to do with the huge outage at the end of 2025.

    The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

    Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

    I would also have felt some level of schadenfreude if it turned out that any of the really big incidents at the end of 2025 was a result of management’s aggressive push for AI coding. Perhaps it would cool the heads of executives a bit if there were very real examples of shit properly hitting the fan…

  • Maybe I misunderstand what you mean, but yes, you kind of can. The problem in this case is that the user sends two requests in the same input, and the LLM isn’t able to deal with conflicting commands in the system prompt and the input.

    The post you replied to kind of seems to imply that the LLM can leak info to other users, but that is not really a thing. As I understand it, when you call the LLM it’s given your input plus a lot of context: a hidden system prompt, perhaps your chat history, and other data that might be relevant for the service. If everything is properly implemented, any information you give it stays in your context only — assuming nobody does anything stupid like sharing context data between users.
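    A minimal sketch of what I mean (all names hypothetical, not any vendor’s actual API): each request’s context is assembled fresh from that user’s own data, so nothing crosses over between users unless the service deliberately shares it.

```python
# Hypothetical sketch of per-user request assembly for an LLM service.
# Each user's context is built only from that user's own data.

def build_request(system_prompt, user_history, user_input):
    """Combine the hidden system prompt, this user's own chat history,
    and the new input into one context window."""
    messages = [{"role": "system", "content": system_prompt}]
    messages += user_history  # only THIS user's past turns
    messages.append({"role": "user", "content": user_input})
    return messages

# Two users, two independent contexts:
alice = build_request("You are a helpful bot.",
                      [{"role": "user", "content": "hi"}],
                      "my secret is X")
bob = build_request("You are a helpful bot.", [], "what did Alice say?")

# Alice's secret exists only in Alice's context, never in Bob's
assert any(m["content"] == "my secret is X" for m in alice)
assert not any(m["content"] == "my secret is X" for m in bob)
```

    The leak in the article is different: the conflicting instructions arrive inside one user’s own context, which is exactly the situation the model handles badly.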

    What you need to watch out for, though, especially with free online AI services, is that they may use anything you input to train and evolve the model. That is a separate process, but if you give personal information to an AI assistant it might end up in the training dataset, and parts of it could surface in the next version of the model. This shouldn’t be an issue if you have a paid subscription or an enterprise contract, which would typically state that no input data can be used for training.