

Oh that’s so amazing! I’ve been smitten with AT Protocol since I learned about it from their paper. I have such high hopes it gets more widespread.
I’m a lonely smut writer in Portugal! Feel free to say hello! :3


Girl* is* 😅 But maybe it wouldn’t be cool to say it that way, lmao
Okay, you didn’t have to completely one up me like that in public. That absolutely is so much better.
To fix the transmission. Lilith’s Trannys, it’s like femboy IHOP but with beautiful transgender women working on engines.


Control + Shift + V pastes as plain text.
Wolf tails hang, they don’t curl up. The first shot looks more like a malamute or one of the Nordic wolfdog breeds, the second looks like it has a husky tail.
In the first shot, there’s a low ridge that forms a kind of ‘opening’ to the bed. The second shot is missing that, too.
Yeah, the first thing I noticed was the completely different tail on the outside shot. Pretty lame.


Holy fuck this is some top tier niche humor
A show about big fighting mechs isn’t complete if the pilots aren’t gay and in love and hate each other and exclusively communicate through monologues of their ideals on the current political climate.


Yeah, some of these comments are super not it.


Going along with the post’s idea for a moment: by making it law, the companies prevent any new social media platform from popping up, skipping ID verification, and stealing away all their users. “They can’t say no, it’s out of their hands because it’s law”.
Not to mention, if everyone has to do it in one country because of the law, it makes it easier to push it in other places because now it’s a collective movement.
I’ve been in that situation, too. :3


People walk around smashing themselves in the dick with a hammer, complain about the dick-smashing hammer, and lament the future dick-smashing hammer update, and the moment someone says, “Hey, have you tried these non-dick-smashing, hammer-free pants?” they answer, “Hahaha, do you also do CrossFit?”
This joke is stolen valor; I only just swapped to Linux like two days ago and haven’t had the opportunity to blurt it out to people yet.


There was an interview I saw recently with Amodei where he said that Anthropic isn’t categorically against autonomous weapons, only that they didn’t think the models were ready, seemingly implying they would make mistakes similar to how LLMs hallucinate. A lot of the media coverage around them seemed to imply that they hold a higher ethical standard than the others, and I mean… maybe? I guess it could be argued that wanting to minimize collateral damage is more ethical, but regardless, I think it’s important to keep perspective when we see how they act in the coming weeks and months.


Puritanism.
So… what would a horse girl be in this hypothetical world? A scientist? A deviant? A deviant scientist?


For your first question, what you’re describing is a problem with education and staffing, not a problem with the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you pair the use of AI with human-level checks.
For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?
I don’t intend to gloss over the issues with generative AI/LLMs; I tried to be specific in separating ML from them in my original comment, where I said LLMs in their public-facing versions (ChatGPT, Claude, whatever) aren’t very useful.
The original comment I replied to asked “is ‘AI’ even useful” (etc.) but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that other kinds can be employed to great effect. If that was unclear, that’s my bad, but that was my intention.
The reason I don’t want to engage with a hypothetical is that I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills means doctors never want to use AI, resulting in more deaths?” Neither hypothetical is very helpful for the discussion. I promise you I’ve thought about this a lot (though again, I’m not an expert, nor am I in the field), and more importantly I have friends finishing doctorates in bioinformatics whom I get some insight from, and at least at this point I’m convinced of the benefits.

I don’t live in the US. I wouldn’t say it’s a nihilistic comment to suggest fixing the system, though.


I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?
Also, with regard to the reduction in diagnostic accuracy of diagnosticians using AI, I would need to see the specific article to be sure, but if it’s the one that was posted across Reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without the use of ChatGPT: group A writes with it, group B writes without it, and afterwards both are asked to write without the LLM; group B’s essays were shown to be better. (That’s a hugely reductive description of the experiment, but it gets the idea across.) Again, it makes sense that if you use a tool to facilitate an action, that tool is replacing that skill and you get “rusty”. It does not mean that the mere existence of a tool would reduce skill in those who do not use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of them being a human check on the process after a screening flag.
I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool that has use-cases and can be used to supplement current processes to improve outcomes. They can, and are, being improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.
Man, what a nightmare hellscape. And whether you use the service or can’t even afford a graphics card that can do any of this, you’re gonna pay for it, since it incentivises developers to care less about graphics because everything’s just gonna be smeared in generative slop anyway.
It’s getting harder to dodge these fuckass companies, but I’m not supporting this shit anymore.