Gemini has encryption, Unicode, MIME types, and markup of text pages.
That said, it is in spirit quite similar to gopher.
I honestly don’t understand how this protocol can protect anything HTTP+HTML wouldn’t. If you build a browser that supports modern web technologies on top of Gemini, we’ll be back at the same spot. The only thing saving the protocol is its relative obscurity. A dedicated and knowledgeable dev could abuse it any way they like, no?
No. Just as examples:
Oh, and all that makes the “small web” uninteresting for advertising.
Of course, you could publish a blog as web pages consisting of plain ol’ HTML like in 1993. But setting up even a simple HTTP server is a lot of work. Most users won’t turn off JavaScript. And to many people, the modern WWW is a lost cause. Given Firefox’s dependency on Google, this isn’t going to get better.
But who actually still writes HTML by hand?
One could also argue that formatting web content in Markdown breaks compatibility and one should rather use HTML for formatting comments, because it is the standard.
The Gemini markup and protocol are designed to be simple, and the markup is designed to be written by hand. This gives you a workflow very similar to a wiki, without any extra infrastructure needed - and this is what makes a decentralized web possible. For normal people, setting up a standard web server for a small blog is too complicated and takes too much time.
And for protocol conversion, there are gateways, much like you can access FTP or gopher servers in a browser.
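To give an idea of just how small the protocol surface is, here is a rough sketch of a Gemini request in Python. The host is only an example, and a real client would do proper trust-on-first-use certificate handling instead of disabling verification as this sketch does:

```python
import socket, ssl

def gemini_fetch(host, path="/", port=1965):
    ctx = ssl.create_default_context()
    # Many capsules use self-signed certificates (trust-on-first-use model),
    # so this sketch simply turns certificate verification off.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The entire request is a single URL followed by CRLF.
            tls.sendall(f"gemini://{host}{path}\r\n".encode("utf-8"))
            chunks = []
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
    data = b"".join(chunks)
    # The response starts with a one-line header such as "20 text/gemini",
    # followed by the body.
    header, _, body = data.partition(b"\r\n")
    return header.decode("utf-8"), body

if __name__ == "__main__":
    header, body = gemini_fetch("gemini.circumlunar.space")
    print(header)
    print(body.decode("utf-8", errors="replace")[:300])
```

That single request/response exchange is essentially the whole protocol: no headers, no cookies, no client-side scripting.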
still not sold on gemini. the project has sort of a holier-than-thou smell to it, striving for the sort of technological purity that makes it unattractive to use. i would still choose gopher.
Does it annoy you when people try and make stuff that matches their values?
More comfortable with the killings that FB contributed to in Myanmar or in the Philippines? Or attacks on democracy like this one?
The power concentration of the “modern” Internet has consequences - and not good ones.
But personally, even if the effects of power concentration, targeted advertising, disinformation and so on did not matter to me, it would still annoy the hell out of me that one cannot open some web sites on a two-year-old, mid-priced smartphone because everything is stuffed to the brim with bloat and tracking.
Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.
What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of math, and math is a science. But summing numbers is not necessarily doing science. And if you roll, say, octal dice to see whether the result happens to match an addition task, that is certainly not doing science - and no, the dice still can’t think logically and certainly don’t do math, even if the result sometimes happens to be correct.
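A tiny simulation makes the dice point concrete; the specific numbers and the four-dice setup are just an illustrative assumption, not part of the original argument:

```python
import random

def octal_dice_guess_matches(a, b, num_dice=4):
    # Roll num_dice octal dice (faces 0-7) and read them as base-8 digits.
    digits = [random.randint(0, 7) for _ in range(num_dice)]
    guess = 0
    for d in digits:
        guess = guess * 8 + d
    return guess == a + b

trials = 100_000
hits = sum(octal_dice_guess_matches(17, 25) for _ in range(trials))
print(f"The dice 'answered' 17 + 25 correctly in {hits} of {trials} trials.")
print("An occasional hit does not mean the dice are doing arithmetic.")
```

The dice get the sum right every few thousand rolls, yet nobody would say they are computing anything - correctness by coincidence is not reasoning.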
For the dynamic vs static typing debate, see the article by Dan Luu:
https://danluu.com/empirical-pl/
But this is not the central point of the blog post above. Its central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases via the Barnum effect. Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.
And the quibbling about what “thinking” means just shows that the pro-AI arguments have degraded into a matter of belief - the argument has become “but it seems to be thinking to me”, even though it is neither technically possible nor observed in practice that LLMs apply logical rules, derive logical facts, explain their output by reasoning, are aware of what they ‘know’ and don’t ‘know’, or optimize decisions against multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).
What would be needed here are objective, controlled experiments on whether developers equipped with LLMs can produce working and maintainable code any faster than developers not using them.
And the very likely result is that the code which they produce using LLMs is never better than the code they write themselves.
Are you saying that it is not possible to use scientific methods to systematically and objectively compare programming tools and methods?
Of course it is possible, in the same way as one can investigate which methods are most effective in teaching reading, or whether brushing teeth helps prevent caries.
And the latter has been done, for example to compare statically vs. dynamically typed languages. Only, the result so far is that there is no conclusive advantage.
What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.
Some people firmly believe LLMs are helpful. But programming is a logical task and LLMs can’t think - only generate statistically plausible patterns.
The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - psychological traps that have been exploited by psychics for centuries - and even very intelligent people can fall prey to them.
Finally, what should cause alarm is that, on top of the fact that LLMs can’t think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software any faster. Given that there are multi-billion dollar investments, and there was more than enough time to carry out controlled experiments, this should raise loud alarm bells.
So, Google was perhaps slightly terrified by the specter of an Internet without advertising, haha.