• 0 Posts
  • 19 Comments
Joined 2 years ago
Cake day: June 19th, 2023

  • You make a great point. But just to stay on the example of cars: besides the innovation in EVs, there’s this horrible tendency to treat cars as tablets on wheels, both in the sense that you can forget about repairing them yourself and in the sense that they are increasingly becoming low-margin hardware for running higher-margin subscription services. If anything warrants a high valuation for a car company, it would arguably be the innovation in EVs, rather than the SaaS model.

    I hope the idea of Car Software As a Service dies before becoming too widespread. But if it doesn’t, maybe car companies won’t become “Tech” companies, just more shitty subscription vendors. And their stock should be valued as such, not for largely unwanted “Tech innovation”.


  • By that measure shouldn’t Disney be considered a Tech company too? Or I guess banks and insurance companies.

    I hadn’t thought of it that way, but maybe the article (at least the small part I can read with no paywall) is on to something: companies that sell access to technology, or rely on technology to sell something else (he does give the example of e-commerce), should not count as “Tech” companies.

    The part I didn’t get to is where the author draws the line on which companies ARE Tech. I guess OpenAI or Google would qualify. They sell services, but they are services they invented and built themselves, with considerable research and investment. But what about Amazon or Netflix?







  • I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”

    All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features into every existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain, but for now the level of hype is tremendous!



  • It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are too expensive. There are two ways for this to go (or eventually a combination of both):

    • Investors believe LLMs are going to get better and they keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.

    • Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.







  • I think that using large language models to summarize emails (especially marketing), news, social media posts, or any other type of content that uses a lot of formulaic writing is going to generate lots of errors.

    The way I understand large language models, they create chains of words statistically, based on “which combination is most likely given my training material?”

    In marketing emails, the same boilerplate language is used to say very different things. “You have been selected” emails have similar wording to “sorry this time you have not won but…”. Same cheery “thanks for being such a wonderful sucker” tone and 99% similar verbiage except for a crucial “NOT” here and there.
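
    As a toy illustration (my own sketch, with made-up email text and plain string similarity standing in for “statistical closeness”, not a claim about how any particular summarizer works): the “winner” and “loser” versions of such an email can be almost identical character-for-character, which is exactly why the crucial “NOT” is easy to smooth over.

    from difflib import SequenceMatcher

    # Two hypothetical marketing emails that differ only by a single "NOT"
    won = ("Congratulations! You have been selected as a winner in our prize draw. "
           "Thanks for being such a wonderful customer.")
    lost = ("Congratulations! You have NOT been selected as a winner in our prize draw. "
            "Thanks for being such a wonderful customer.")

    # Character-level similarity ratio: close to 1.0 means nearly identical text,
    # even though the two emails mean the opposite of each other.
    print(SequenceMatcher(None, won, lost).ratio())  # roughly 0.98

    Of course an LLM isn’t running difflib, but the point stands: the two versions sit extremely close together in terms of wording, and a summary built on “most likely word combinations” can easily land on the wrong side of that one word.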