

We are getting to the point where one LLM is used to expand on a topic and pad out an article, and then another LLM provides an inaccurate TL;DR summary. What a world to live in. 🤢
Ah okay.
Probably ai slop then?
I installed LTSC on a device recently. Very little effort for bloat free Windows.
Free Palestine.
The Gaza war is genocide.
Fuck the IDF.
Please prevent me from visiting the genocide loving USA.
I hear you. I worked for an MSP where some customers would refuse to invest in backup solutions; we either declined to renew their contract, or they suffered an event and we were then setting up backups.
I was in the middle of a migration from OVH to Hetzner. I knew I had good backups at home so the plan was to blow away OVH and restore from backup to Hetzner. This was the mistake.
Mid migration I get an alert from the RAID system that a drive had failed and been marked offline. I had a spare disk ready, as I had planned for this type of event. So I swapped the disk. Mistake number 2.
I pulled the wrong disk. The Adaptec card shit a brick, kicked the whole array out. Couldn’t bring it back together. I was too poor to afford recovery. This was my lesson.
Now I only use ZFS or MDRAID, and have multiple copies of data at all times.
I’m lucky enough to run a business that needs a datacenter presence. So most of my home lab (including Lemmy) is actually hosted on a Dell PowerEdge R740xd in the DC. I can then use the small rack I have at home for off-site backups and some local services.
I treat the entirety of `/var/lib/docker` as expendable. When creating containers, I make sure any persistent data is mounted from a directory made just to host that persistent data. It means `docker compose down --rmi all --volumes` isn’t destructive.
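As a minimal sketch of what I mean (the service name, image and host path here are made up, not my actual setup), a compose file would bind-mount the persistent data from a dedicated host directory:

```yaml
# Sketch only: persistent data lives outside /var/lib/docker,
# so the container and its volumes are expendable.
services:
  app-db:
    image: postgres:16
    volumes:
      - /srv/appdata/app-db:/var/lib/postgresql/data
```

Blowing away the container or even all of `/var/lib/docker` leaves `/srv/appdata/app-db` untouched.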
When a container needs a database, I make sure to add an extra read-only user. And all databases have their container and persistent volume directory named so scripts can identify them.
The backup strategy is then to back up all non-database persistent directories and dump all SQL databases, including permissions and user accounts. This gets run 4 times a day and the backup target is an NFS share elsewhere.
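The dump step can be sketched roughly like this (the `-db` naming convention, the Postgres user, and the NFS path below are all placeholders for illustration, not my actual setup):

```shell
#!/bin/sh
# Sketch: dump every database container, assuming (hypothetically) that
# each one is named with a "-db" suffix, e.g. "lemmy-db", "seafile-db".
# BACKUP_DIR would be the NFS-mounted target in the real script.
BACKUP_DIR="${BACKUP_DIR:-/mnt/nfs/backups/sql}"

# Build the dump command for one container. pg_dumpall includes roles
# (user accounts and permissions), not just the databases themselves.
# Printing instead of executing keeps this a dry-run sketch.
dump_cmd() {
    container="$1"
    printf 'docker exec %s pg_dumpall -U postgres --clean\n' "$container"
}

# In the real script you would iterate over running containers, e.g.:
#   docker ps --format '{{.Names}}' | grep -- '-db$' | while read -r c; do
#       $(dump_cmd "$c") > "$BACKUP_DIR/$c-$(date +%F-%H%M).sql"
#   done
dump_cmd "lemmy-db"
```

Run 4 times a day from cron, with the output landing on the NFS share.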
This is on top of daily BackupPC backups of critical folders, automated Proxmox snapshots of the docker hosts every 20 minutes, daily VM backups via Proxmox Backup Server, and replication to another PBS at home.
I also try to use S3 where possible (Seafile and Lemmy are the two main uses), which is hosted in a container on a Synology RS2423RP+. Synology HyperBackup then performs a backup overnight to the Synology RS822+ I have at home.
Years ago I fucked up, didn’t have backups, and lost all the photos of my son’s early years. Backups are super important.
Really?! Okay. I think your troll radar is well off, but it’s your opinion so you do you I suppose.
Maybe you are the troll. Like 4D chess level of troll. =D
Your post history and mod logs are also quite weird.
Lol what does that mean
The entire article seems like an attack. The author finds a unique identifier and adds “Russia bad” throughout.
The article states the information is in cleartext, but then explains how everything is encrypted (in transit).
What would the author do if they intercepted any online store’s transfer of credit card details? That’s also encrypted in transit. Is that also deemed cleartext, or is that okay?
I don’t think much new is learnt here. WhatsApp also sends metadata in “cleartext” (not really, as it’s encrypted in transit, but this article called that “cleartext”).
I love this series. Although since I started working from home I haven’t listened. I must pick this up again. Thanks for the reminder! 🙂
1% is still 170 million.
Someone got pwned during the tutorial.
How is blocking scrapers easy?
This instance receives 500+ IPs with differing user agents all connecting at once, but each stays within rate limits because the load is distributed across the bots.
The only way I know it’s a scraper is if they do something dumb like using “google.com” as the referrer for every request or by eyeballing the logs and noticing multiple entries from the same /12.
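The /12 eyeballing can be scripted. A rough sketch, assuming an access-log format where the client IP is the first field (a /12 keeps the first octet plus the top 4 bits of the second, i.e. mask 255.240.0.0):

```shell
#!/bin/sh
# Sketch: collapse an IPv4 address to its /12 block so requests from
# the same allocation can be counted together.
to_slash12() {
    ip="$1"
    o1=${ip%%.*}                 # first octet
    rest=${ip#*.}
    o2=${rest%%.*}               # second octet
    # Mask the second octet to its top 4 bits (& 240 == & 0xF0).
    echo "$o1.$(( o2 & 240 )).0.0/12"
}

to_slash12 "203.0.113.7"
```

Then something like `awk '{print $1}' access.log | while read -r ip; do to_slash12 "$ip"; done | sort | uniq -c | sort -rn` surfaces the noisiest /12 blocks.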
That was a good read, thank you
Yeah, this. I dunno what the fuss is about. It’s just missing on GitHub is all.
You literally missed the point of my comment.
With Windows 10, Microsoft moved to a monthly cumulative update schedule. Every second Tuesday of the month is “patch day”, and a new monthly cumulative update is made available.
There are exceptions to this for security and bug fixes that can’t wait until the next monthly round-up, so perhaps this month was one of those? But the trend is that updates are monthly. I can see it being perceived as more frequent, as updates are forced onto us, with a reboot, which can be frustrating.
Azure servers now support reboot-less updating, hopefully that makes its way to consumer products, but who knows.
Microsoft has always had a bad rep for their OS being full of holes and getting exploited. However some of this was due to users not updating. Microsoft would patch an issue, but huge swaths of unpatched Windows machines would be exploited and used as botnets. I think the forced updates were in response to this situation. Not that I agree with it.
walled garden
Explain
There are loads of alternatives now so it’s a good time to have a look.
I’ve set up Netmaker at home, and Netbird at work. They are both good solutions.
I think if I had to redo home I would swap to netbird. Both of these are fully self hosted.
Neither is as easy to set up as Tailscale, but once you get over that hurdle it’s fine.