I’m a lonely smut writer in Portugal! Feel free to say hello! :3

  • 0 Posts
  • 29 Comments
Joined 4 months ago
Cake day: November 4th, 2025

  • Going with the post’s idea for a moment: by making it law, the companies prevent any new social media platform from popping up, skipping ID verification, and stealing away all their users. “They can’t say no; it’s out of their hands because it’s the law.”

    Not to mention, if everyone has to do it in one country because of the law, it makes it easier to push it in other places because now it’s a collective movement.



  • People walk around smashing themselves in the dick with a hammer, complain about the dick-smashing hammer, and lament the future dick-smashing hammer update, and the moment someone says, “Hey, have you tried these non-dick-smashing, hammer-free pants?” they reply, “Hahaha, do you also do CrossFit?”

    This joke is stolen valor, I only just swapped to Linux like two days ago and I haven’t had the opportunity to blurt it out to people yet.


  • There was an interview I saw recently with Amodei where he said that Anthropic aren’t categorically against autonomous weapons, only that they didn’t think the systems were ready yet, seemingly implying they would make mistakes similar to how LLMs hallucinate. A lot of the media coverage around them seemed to imply that they hold a higher ethical standard than the others, and I mean… maybe? I guess it could be argued that wanting to minimize collateral damage is more ethical, but regardless, I think it’s important to keep perspective when we see how they act in the coming weeks and months.




  • For your first question, what you’re describing is a problem with education and staffing, not a problem of the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you bar the use of AI for human-level checks.

    For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?

    I don’t intend to gloss over the issues with Generative AI/LLMs, I tried to be specific in my separation of ML from them in my original comment where I said LLMs in their public facing version (ChatGPT, Claude, whatever) aren’t very useful.

    The original comment I replied to asked “is ‘AI’ even useful (etc)” but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that others can be employed to great effect. If that was unclear, that’s my bad, but that was my intention.

    The reason I don’t want to engage with a hypothetical is because I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical is really helpful for the discussion. I promise you I’ve thought about this a lot (but again, I’m not an expert, nor am I in the field). More importantly, I have friends finishing doctorates in bioinformatics from whom I get some insight, and I’m, at least at this point, convinced of the benefits.



  • I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?

    Also, with regard to the reduction in diagnostic accuracy of diagnosticians who use AI, I would need to see the specific article to be sure, but if it’s the one posted across Reddit a few months back, I read through that one as well. It seemed to agree with a similar study about students writing papers with and without ChatGPT (group A writes with it, group B without, and afterwards both are asked to write without the LLM; group B’s essays were shown to be better — a hugely reductive description of the experiment, but it gets the idea across). Again, it makes sense that if you use a tool to facilitate an action, that tool is replacing the skill and you get “rusty”. It does not mean that the mere existence of the tool would reduce skill in those who don’t use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it themselves, which sorta defeats the purpose of them being a human check on the process after a screening flag.

    I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool with real use-cases, and that it can supplement current processes to improve outcomes. These tools can be, and are being, improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.