• 2 Posts
  • 38 Comments
Joined 7 days ago
Cake day: March 19th, 2025

  • Considering I am the operations team, it just goes to show how much I have left to learn. I didn’t know about the external-dns operator.
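
    For anyone else who didn’t know about it: once external-dns is running in the cluster, you point it at a Service with an annotation and it publishes the record for you. A minimal sketch (the Service name and hostname here are made up, not from my setup):

    ```shell
    # Hypothetical example: ask external-dns to create a DNS record for a Service.
    # "my-app" and the hostname are placeholders; the annotation key is external-dns's standard one.
    kubectl annotate service my-app \
      external-dns.alpha.kubernetes.io/hostname=my-app.internal.example.com
    ```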

    Unfortunately, my company is a bit strange with certs and won’t let me handle them myself. Something to check out at home I guess.

    I agree with you about LVM. I’ve been meaning to set up Rook forever but never got around to it. It might still take a while, but thanks for the reminder.

    Wow. That must have been some work. I don’t have these certs myself but I’m looking at the CKA and CKS (or whatever that’s called). For sure, I loved our discussion. Thanks for your help.


  • I am using a reverse proxy in production. I just didn’t mention it here.

    I’d have to set up a DNS record for both. I’d also have to create and rotate certs for both.

    We use LVM; I simply mounted a volume at /usr/share/elasticsearch. The VMware team will handle the underlying storage.
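
    Roughly what that looks like; this is a sketch, and the volume group name, size, and filesystem are placeholders rather than my actual layout:

    ```shell
    # Sketch only: vg_data, the 100G size, and xfs are assumed values.
    lvcreate -L 100G -n lv_elastic vg_data
    mkfs.xfs /dev/vg_data/lv_elastic
    mkdir -p /usr/share/elasticsearch
    mount /dev/vg_data/lv_elastic /usr/share/elasticsearch
    # Persist the mount across reboots:
    echo '/dev/vg_data/lv_elastic /usr/share/elasticsearch xfs defaults 0 0' >> /etc/fstab
    ```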

    I agree about manually dealing with the repo. I don’t think I’d set up unattended upgrades for my k8s cluster either, so that’s moot. Downtime is not a big deal: this is not external-facing and I’ve got 5 nodes. I guess if I didn’t use Ansible it would be a bit more legwork, but that’s about it.

    Overall I think we talked past each other here.


  • I prefer some of my applications to be on VMs. For example, my observability stack (ELK + Grafana), which I like to keep separate from other environments. I suppose the argument could be made that I should spin up a separate k8s cluster if I want to do that, but it’s faster to deploy directly on VMs, and there are also fewer moving parts (I run two 50-node k8s clusters, so I’m not averse to containers, just saying). Easier, relatively secure, and the right tool for the job. Sure, I could mess with cgroups and play with kernel parameters and all of that jazz to secure k8s more, but why bother when I can make my life easier by trusting Red Hat? Also, I’m not yet running a k8s version that supports SELinux, and I tend to keep it enabled.


  • Deepseek 1.5B doesn’t exist. I don’t know why the Deepseek team named the models on Hugging Face like this, but what is labelled as “Deepseek 1.5B” is actually not the OG Deepseek 70B model distilled to 1.5B; it’s a different model either trained or finetuned by the Deepseek team. My theory is that it’s some sort of intentional manipulation on their part so people stay confused about whether they are actually running the Deepseek model or not. There is a lot of commentary on this online; sorry, I don’t have the links off the top of my head.