Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 97 Comments
Joined 1 year ago
Cake day: March 3rd, 2024




  • Stable Diffusion was trained on the LAION-5B image dataset, which as the name implies has around 5 billion images in it. The resulting model was around 3 gigabytes. If this is indeed a “compression” algorithm then it’s the most magical and physics-defying one ever, as it manages to compress images to less than one byte each.
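
    A quick sanity check of that arithmetic (both figures are ballpark: ~3 GiB for a Stable Diffusion v1-era checkpoint, ~5 billion images in LAION-5B):

    ```python
    # Back-of-the-envelope check: if a ~3 GiB model "contained" all
    # ~5 billion LAION-5B training images, each image would get under a byte.
    model_size_bytes = 3 * 1024**3      # ~3 GiB checkpoint (approximate)
    num_images = 5_000_000_000          # ~5 billion images (approximate)

    bytes_per_image = model_size_bytes / num_images
    print(f"{bytes_per_image:.3f} bytes per image")  # 0.644 bytes per image
    ```

    For comparison, even an aggressively compressed JPEG thumbnail is thousands of bytes, so the model cannot plausibly be storing copies of its training images.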

    Besides, even if we consider the model itself to be fine, they did not buy all the media they trained the model on.

    That is a completely separate issue. You can sue them for copyright violation regarding the actual acts of copyright violation. If an artist steals a bunch of art books to study, then sue him for stealing the art books, but you can’t extend that to say that anything he drew based on that learning is also a copyright violation, or that the knowledge inside his head is a copyright violation.



  • They’re saying that the NYT basically forced ChatGPT to spit out the “infringing” text. Like manually typing it into Microsoft Word and then going “gasp! Microsoft Word has violated our copyright!”

    The key point here is that you can’t simply take the statements of one side in a lawsuit as being “the truth.” Obviously the lawyers for each side are going to claim that their side is right and the other side are a bunch of awful jerks. That’s their job, that’s how the American legal system works. You don’t get an actual usable result until the judge makes his ruling and the appeals are exhausted.









  • Masad’s comments have come up and sparked huge outrage before, and just like before, people are missing the hugely important context here.

    He added that coding may become obsolete, but people will still need to continue to work on their fundamentals: “I’m at this point, like agents pilled. I’m very bullish. Like, I sort of changed my answer even like a year ago. I would say kind of learn a bit of coding. I would say learn how to think, learn how to break down problems, right? Learn how to communicate clearly, with as you would with humans, but also with machines.”

    The way I see it, he’s thinking that the current-day approach to coding is likely to go the same way that coding in assembly language went when high-level languages and compilers became good and common. The vast majority of programmers never need to think about individual registers or the specific sequence of opcodes needed to perform operations or access memory; the compilers handle that, and they do a great job. Only a handful of specialists really need to go down to the metal like that any more.

    So too will it be for a lot of the programming that current-day programmers do. It’ll still be useful to know how it works so that you’ll know what to ask for and what to do when something goes wrong, but 99% of the code will be done by AIs and will hardly even be looked at by a human. There’ll still be people who are experts at working with programs, but the current approach to how that’s done is likely to be obsolete.







  • This guy’s court cases are widely misunderstood by the general public.

    In a nutshell: he’s a crank who is trying to tell the court “I don’t hold a copyright to the thing my AI produced, my AI holds the copyright.”

    And the court tells him: “Only people (or legal persons, like corporations) can hold a copyright. Your AI cannot. If you say that you yourself don’t either, we can’t force you to have a copyright on it. So I guess that thing has no copyright and is therefore in the public domain.”

    And then everyone gasps and exclaims “the court just ruled that AI-generated things are in the public domain!”