

Removing copyright entirely is a bridge too far.
Just roll it back to a reasonable time limit (I dunno, 7 years?), and categorically reject all further lobbying attempts from Disney and the like.
I don’t think you can expect any VPN to work without sign-in for very long. Google’s playing whack-a-mole with VPNs.
I’ve never actually tried signing in with yt-dlp. How easy is it to make a throwaway google account nowadays? Do they require phone verification or something similarly onerous?
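For anyone curious, “signing in” with yt-dlp usually means feeding it cookies from a logged-in browser session, either via the --cookies-from-browser flag or a cookies.txt export passed with --cookies. A minimal sketch using the Python API (the file path and URL are placeholders):

```python
# Minimal sketch, assuming yt-dlp is installed as a Python package and you've
# exported cookies from a signed-in browser session to cookies.txt
# (the path and URL here are placeholders).
import yt_dlp

opts = {"cookiefile": "cookies.txt"}  # yt-dlp reads your session cookies from here
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```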
This is network-specific. It’s been going on for a few months at least. yt-dlp itself still works without sign-in, if your network is not “suspicious”.
Always worth making sure you’re updated to the latest version of yt-dlp, but this is probably a network thing.
Yes, I loved classic Trek for showing a better future, where humans have moved beyond our greed, prejudice, and self-destructive tendencies. That was the through line in TOS and TNG, even if it wasn’t always 100% on-point and didn’t always age well (you need to view TOS in its historical context to get past the baked-in 1960s sexism, for example).
There’s a place for cautionary tales, and there’s a place for aspirational tales.
I liked Discovery well enough for what it was, but I hated its picture of a future where good humans are the exception rather than the rule.
Nowadays, I think solarpunk is where it’s at.
A paper-only journal would defend against the state, but not against people you live with. A digital journal can be encrypted, but an intelligence agency could potentially gain access.
A digital journal doesn’t need to be any more government-accessible than a paper journal.
Depending on your threat model, this could require special hardware, special software, or both. In order of ease of setup, I would suggest:
Keep all your data on your own physical media. No cloud services, period.
Keep it encrypted.
Disable network connectivity at every level that you possibly can, such as:
OS level: disable wi-fi, disable bluetooth, and disable networking entirely (see the sketch after this list).
Firmware/BIOS level: If your BIOS has options to disable networking components (especially wireless ones), do that.
Hardware level: If your laptop has a switch to disable wi-fi, use it. If ethernet, unplug the cable. Etc.
Physical level: Remove any removable wireless cards or antennas.
Wallet level: buy a computer that never had wi-fi or bluetooth in the first place. This could mean a retro computer, or could mean using a micro PC like some models of Raspberry Pi.
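For the OS-level step above, here’s a rough Linux-only sketch, assuming NetworkManager and the stock rfkill utility are present (both are common defaults). Run as root, and treat it as a complement to the firmware/hardware measures, not a substitute:

```python
# Rough Linux-only sketch: soft-block all radios and drop all connectivity.
# Assumes the standard rfkill utility and NetworkManager (nmcli) are installed.
import subprocess

def kill_networking() -> None:
    # Soft-block every radio class rfkill knows about (wi-fi, bluetooth, wwan, ...)
    subprocess.run(["rfkill", "block", "all"], check=True)
    # Tell NetworkManager to take everything down, wired included
    subprocess.run(["nmcli", "networking", "off"], check=True)

if __name__ == "__main__":
    kill_networking()
```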
Neither of those can stream video in real time AFAIK. They will back up the video file on some unpredictable schedule after you’re done recording. So not ideal for a situation where your phone might be seized or destroyed.
But if that works for you, there are lots of open-source options that work similarly. SyncThing can sync to any server, and all you’d need to do is make sure your sync destination is network-accessible somehow (VPN, internet-facing server, whatever). Lots of cloud drive apps can auto-upload photos and videos, and some of those are open-source.
A better off-the-shelf proprietary workflow might be a Zoom call with cloud recording enabled. Then you’d be protected against a sudden (and perhaps permanent) loss of network connectivity.
Buy a dozen and you could fit a good chunk of LibGen.
I’ve never replaced a watch (smart or otherwise) in less than 5 years.
Wat.
How’s navigation with Pebbles? If I start bike navigation in Google Maps on my phone, can I get turn-by-turn directions on the watch, and does it not suck?
According to the Programme for the International Assessment of Adult Competencies, 2013, the median score for the US was “level 2”. 3.9% scored below level 1, and 4.2% were “non-starters”, unable to complete the questionnaire.
For context, here is the difference between level 2 and level 3, from https://en.wikipedia.org/wiki/Programme_for_the_International_Assessment_of_Adult_Competencies#Competence_groups :
- Level 2: (226 points) can integrate two or more pieces of information based on criteria, compare and contrast or reason about information and make low-level inferences
- Level 3: (276 points) can understand and respond appropriately to dense or lengthy texts, including continuous, non-continuous, mixed, or multiple pages.
Geany is a nice GUI option. It’s a bit more capable but still lean.
It’s probably time for me to re-evaluate the host of coding editors out there. For the most part I just use good text editors. Though I do love Spyder, I only use it for a certain subset of tasks.
I think they reached a point where their user base was predominantly mainstream, not tech-savvy enough to know the difference.
I mean, how else can any site survive on advertising when the ads are so obnoxious and it’s so easy to block them? Either the site is great and the ads are non-intrusive enough that I’ll make an exception in uBlock, or I’m never seeing the ads in the first place.
Gemini might be good at something, but I’ll never know because it is bad at all the things I have ever used the assistant for. If it’s good at anything at all, it’s something I don’t need or want.
Looking forward to 2027 when Google Gemini is replaced by Google Assistant (not to be confused with today’s Google Assistant, totally different product).
In case anyone is unfamiliar, Aaron Swartz downloaded a bunch of academic journals from JSTOR. This wasn’t for training AI, though. Swartz was an advocate for open access to scientific knowledge. Many papers are “open access” and yet are not readily available to the public.
Much of what he downloaded was open-access, and he had legitimate access to the system via his university affiliation. The entire case was a sham. They charged him with wire fraud, unauthorized access to a computer system, breaking and entering, and a host of other trumped-up charges, because he…opened an unlocked closet door and used an ethernet jack from there. The fucking Secret Service was involved.
https://en.wikipedia.org/wiki/Aaron_Swartz#Arrest_and_prosecution
The federal prosecution involved what was characterized by numerous critics (such as former Nixon White House counsel John Dean) as an “overcharging” 13-count indictment and “overzealous”, “Nixonian” prosecution for alleged computer crimes, brought by then U.S. Attorney for Massachusetts Carmen Ortiz.
Nothing Swartz did is anywhere close to the abuse by OpenAI, Meta, etc., who openly admit they pirated all their shit.
Joplin is great. I have its data stored locally with encryption, and I sync across devices with Syncthing. It also has built-in support for some cloud providers like you mentioned, and since it supports local encryption, you don’t need to depend on the cloud provider’s privacy policy.
Setting it up on multiple devices was a bit complex, but the documentation is there. Follow the steps; don’t just waltz through the setup assuming it will work intuitively. I made that mistake, and while it wasn’t the end of the world, it would’ve saved me 15 minutes if I’d just RTFM’d.
Again: what percent “accurate” is an SEO-infested blog?
I don’t think that’s a good comparison in context. If Forbes replaced all their bloggers with ChatGPT, that might very well be a net gain. But that’s not the use case we’re talking about. Nobody goes to Forbes as their first step for information anyway (I mean…I sure hope not…).
The question shouldn’t be “we need this to be 100% accurate and never hallucinate” but rather “what web pages or resources were used to create this answer”, followed by doing what we should always be doing: checking the sources to see if they at least seem trustworthy.
Correct.
If we’re talking about an AI search summarizer, then the accuracy lies not in how correct the information is in regard to my query, but in how closely the AI summary matches the cited source material. Kagi does this pretty well. Last I checked, Bing and Google did it very badly. Not sure about Samsung.
On top of that, the UX is critically important. In a traditional search engine, the source comes before the content. I can implicitly ignore any results from Forbes blogs. Even Kagi shunts the sources into footnotes. That’s not a great UX because it elevates unvetted information above its source. In this context, I think it’s fair to consider the quality of the source material as part of the “accuracy”, the same way I would when reading Wikipedia. If Wikipedia replaced their editors with ChatGPT, it would most certainly NOT be a net gain.
99.999% would be fantastic.
90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).
What we have now is like…I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?
I haven’t used Samsung’s stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it’s great.
Ideally, I don’t ever want to hear an AI’s opinion, and I don’t ever want information that’s baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That’s what LLMs are actually good at.
I agree. Of all the UI crimes committed by Microsoft, this one wouldn’t crack the top 100. But I sure wouldn’t call it great.
I can’t remember the last time I used the start menu to put my laptop to sleep. However, Windows Vista was released 20 years ago. At that time, most Windows users were not on laptops. Windows laptops were pretty much garbage until the Intel Core series, which launched a year later. In my offices, laptops were still the exception until the 2010s.
Google as an organization is simply dysfunctional. Everything they make is either some cowboy bullshit with no direction, or else it’s death by committee à la Microsoft.
Google has always had a problem with incentives internally, where the only way to get promoted or get any recognition was to make something new. So their most talented devs would make some cool new thing, and then it would immediately stagnate and eventually die of neglect as they either got their promotion or moved on to another flashy new thing. If you’ve ever wondered why Google kills so many products (even well-loved ones), this is why. There’s no glory in maintaining someone else’s work.
But now I think Google has entered a new phase, and they are simply the new Microsoft – too successful for their own good, and bloated as a result, with too many levels of management trying to justify their existence. I keep thinking of this article by a Microsoft engineer around the time Vista came out, about how something like 40 people were involved in redesigning the power options in the start menu, how it took over a year, and how it was an absolute shitshow. It’s an eye-opening read: https://moishelettvin.blogspot.com/2006/11/windows-shutdown-crapfest.html
Kind of the opposite. It takes more effort to make a filesystem case-insensitive. Binary comparison is the laziest approach. (Note that laziness is a virtue.)
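A toy sketch of the difference, with Python’s casefold() standing in for the per-filesystem case-folding tables that NTFS, HFS+, etc. actually carry:

```python
# Case-sensitive lookup: the "lazy" path, raw comparison of what's on disk.
def match_sensitive(a: str, b: str) -> bool:
    return a == b

# Case-insensitive lookup: extra work to fold case (Unicode-aware) first.
def match_insensitive(a: str, b: str) -> bool:
    return a.casefold() == b.casefold()

assert not match_sensitive("Readme.TXT", "readme.txt")
assert match_insensitive("Readme.TXT", "readme.txt")
```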
I’m on the fence as to which is better. Putting backwards compatibility aside, there’s a perfectly good case to be made for case-insensitivity being more intuitive to the human user.
Apple got into a strange position when marrying Mac OS (case-insensitive) and NeXTSTEP (case-sensitive). It used to be possible to install OS X on case-sensitive HFS+ but it was never very well supported and I think they axed it somewhere down the road.