• 0 Posts
  • 30 Comments
Joined 2 years ago
Cake day: June 15th, 2023


  • NVMe drives advertise sequential write speeds of several GBps (capital B as in bytes). The article talks about 10 Gbps (lowercase b as in bits), i.e. 1.25 GBps. Even for raw writes to storage, the NVMe drive might not be the bottleneck in this scenario.

    And then there’s the fact that disk writes are buffered in RAM. These motherboards are not available yet, so we’re talking about future PC builds, and it’s safe to say that many of them will be used in systems with 32 GB of RAM. If you’re idling or doing light activity while waiting for a download to finish, most of your RAM will be free, so you could take in 25-30 GB before storage speed becomes a factor.
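
    Rough numbers behind this, as a quick sketch (the link speed comes from the article; the NVMe and free-RAM figures are assumptions for illustration, not benchmarks):

```python
# Back-of-the-envelope numbers for the comment above.
link_gbps = 10                  # network link from the article, in gigabits per second
link_gb_per_s = link_gbps / 8   # = 1.25 gigabytes per second
nvme_write_gb_per_s = 3.5       # assumed sequential write speed of a decent NVMe SSD
free_ram_gb = 26                # assumed free RAM on a lightly loaded 32 GB system

print(f"Link: {link_gb_per_s:.2f} GB/s vs NVMe write: {nvme_write_gb_per_s} GB/s")

# Even if the drive couldn't keep up, the OS buffers writes in RAM (page cache),
# so roughly free_ram_gb of a download can be absorbed before storage speed matters:
seconds_at_line_rate = free_ram_gb / link_gb_per_s
print(f"~{free_ram_gb} GB buffered ≈ {seconds_at_line_rate:.0f} s of downloading at full speed")
```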


  • From the article:

    Those joining from unsupported platforms will be automatically placed in audio-only mode to protect shared content.

    and

    “This feature will be available on Teams desktop applications (both Windows and Mac) and Teams mobile applications (both iOS and Android).”

    So this is actually worse than just blocking screen capture. It will break video calls for some setups for no reason at all, since all it takes to defeat the protection anyway is a phone camera pointed at the screen - one of the most common things in the world.


  • The only thing I’ve been claiming is that AI training is not copyright violation

    What’s the point? Are you talking specifically about some model that was trained and then put on the shelf to never be used again? Cause that’s not what people are talking about when they say that AI has a copyright issue. I’m not sure if you missed the point or this is a failed “well, actually” attempt.



  • Learning what a character looks like is not a copyright violation

    And nobody claimed it was. But you’re claiming that this knowledge cannot possibly be used to make a work that infringes on the original. The analogy about whether brains are copyright violations makes no sense and is not equivalent to your initial claim.

    Just find the case law where AI training has been ruled a copyright violation.

    But that’s not what I claimed is happening. It’s also not the opposite of what you claimed. You claimed that AI training is not even in the domain of copyright, which is different from something that is possibly in that domain but has been ruled not to be infringing. Also, this all started with you responding to another user saying the copyright situation “should be fixed”. As in, they (and I) don’t agree that the current situation is fair. A current court ruling cannot settle whether things should change. That makes no sense.

    Honestly, none of your responses have actually supported your initial position. You’re constantly moving to something else that sounds vaguely similar but is neither equivalent to what you said nor a direct response to my objections.


  • The NYT was just one example. The Mario examples didn’t require any such techniques. Not that it matters. Whether it’s easy or hard to reproduce such an example, it is definitive proof that the information can in fact be encoded in some way inside of the model, contradicting your claim that it is not.

    If it was actually storing the images it was being trained on then it would be compressing them to under 1 byte of data.

    Storing a copy of the entire dataset is not a prerequisite to reproducing copyright-protected elements of someone’s work. Mario’s likeness is itself a protected work of art, even if you don’t exactly reproduce any (let alone every) image that contained him in the training data. Whether the entirety of the dataset could fit inside the model is completely irrelevant to the discussion.

    This is simply incorrect.

    Yet evidence supports it, while you have presented none to support your claims.


  • When an AI trains on data it isn’t copying the data, the model doesn’t “contain” the training data in any meaningful sense.

    And what’s your evidence for this claim? It seems to be false given the times people have tricked LLMs into spitting out verbatim or near-verbatim copies of training data. See this article as one of many examples out there.

    People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright.

    Again, what’s the evidence for this? Why do you think that of all the observable patterns, the AI will specifically copy “ideas” and “styles” but never copyrighted works of art? The examples from the above article contradict this as well. AIs don’t seem to be able to distinguish between abstract ideas like “plumbers fix pipes” and specific copyright-protected works of art. They’ll happily reproduce either one.





  • The problem with any excuse you make for Elon is that Elon is too stupid to keep his mouth shut and give the excuse any plausibility. After the nazi salute he went on Twitter to make nazi puns about it. It is certain beyond reasonable doubt that he knows exactly what the salute was. Even if you give him the insane benefit of the doubt that it was really “his heart going out” and only accidentally looked like the salute, the fact that he has shown he knows what it looks like yet has never stated that he doesn’t actually believe in the ideology or want to present himself as an ally to nazis is just as damning.








  • The DMCA takedown seems to be specifically about Ryujinx’s ability to decrypt ROMs. Circumventing DRM is in fact illegal under the DMCA, so they appear to have a valid argument. However, in their takedown notice they assume that the decryption keys are obtained illegally. I’m wondering whether the DMCA forbids extracting the decryption keys (without distributing them) from your own legitimately owned Nintendo hardware for personal backup. If it doesn’t, then the Ryujinx feature might also be defensible.

    This also raises the question of whether an emulator could be made to work on already decrypted media and let you figure out how to do that yourself. Nintendo could argue that its main use is still to play illegally decrypted ROMs but the emulator would have a decent defense imo.


  • Basically, all encryption multiplies some big prime numbers to get the key

    No, not all encryption. First of all, there are two main categories of encryption:

    • asymmetrical
    • symmetrical

    The most widely used asymmetrical algorithms rely on the prime factorization problem or similar problems that are vulnerable to quantum computers, so those will break. Symmetrical encryption will not break. I’m not saying all this to be a pedant; it actually matters for the safety of our current communications. Well-designed schemes like TLS and the Signal protocol use a combination of both types because they have complementary strengths and weaknesses. In very broad strokes:

    • asymmetrical encryption is used to initiate the communication because it can verify the identity of the other party
    • a key-agreement algorithm that is safe against eavesdropping (e.g. Diffie-Hellman) is used to negotiate a key for symmetric encryption
    • the symmetric key is used to encrypt the payload and is thrown away after the communication is over

    This is crucial because it means that even if someone is storing your messages today to decrypt them in the future with a quantum computer, they are unlikely to succeed if a sufficiently strong symmetric key is used. They could decrypt the initial messages of the handshake and see the messages used to negotiate the symmetric key, but they won’t be able to derive the key itself, because that negotiation is safe against eavesdropping (assuming the key agreement is done with something that also holds up against a quantum attacker, which is exactly why post-quantum key agreement is being added to these protocols).

    So a lot of today’s encrypted messages are safe. But in the future a quantum computer will be able to get the private key for the asymmetric encryption and perform a MitM attack or straight-up impersonate another entity. So we have to migrate to post-quantum algorithms before we get to that point.
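
    A minimal sketch of that pattern in Python, using the cryptography package (X25519 for the key agreement, HKDF to derive a session key, AES-GCM for the payload; everything here is illustrative and simplified, not how any particular protocol does it):

```python
# Hybrid sketch: asymmetric key agreement bootstraps a symmetric session key,
# and the payload itself is encrypted symmetrically. Real protocols (TLS, Signal)
# add authentication of the parties, transcript hashing, nonce management and,
# increasingly, post-quantum key agreement on top of this.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# 1. Each side generates an ephemeral asymmetric key pair (the quantum-vulnerable part).
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# 2. They exchange public keys; both sides compute the same shared secret.
shared_secret = alice_priv.exchange(bob_priv.public_key())

# 3. A symmetric session key is derived from the shared secret.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo handshake"
).derive(shared_secret)

# 4. The payload is encrypted symmetrically (AES-256-GCM) - the part that stays
#    strong against quantum attacks given a sufficiently long key.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)
print(ciphertext.hex())

# 5. The ephemeral keys and the session key are discarded once the session ends,
#    so a recording of the traffic alone doesn't hand an attacker the key.
del alice_priv, bob_priv, session_key
```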

    For storage, I believe generally only symmetric algorithms are used, so that’s already safe as is, assuming, as always, a strong algorithm and a sufficiently long key.