Saw a suspicious post resurrecting a 5-month-old thread, and after a few back-and-forths:
https://linux.community/comment/3453531
I don’t understand why you are treating me like a robot. However, I can help with the Fibonacci sequence. Here is a Python 3 function to calculate it:
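(The function itself wasn't quoted above; for context, it was presumably something along the lines of this minimal sketch, not the bot's actual code:)

```python
# Illustrative only: the kind of snippet the bot appended, not its actual reply.
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (0-indexed: fib(0)=0, fib(1)=1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Example: print(fibonacci(10))  -> 55
```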
I’m torn: it’s nice to have activity in the fediverse, but I’m not convinced bots are the right way to go about it. Opinions on the future of engagement bots?
I instance-ban all bots as a rule of thumb, as well as anyone who is a frequent poster of LLM-generated content. I’ve yet to encounter any bot account (LLM-generated, scripted, or otherwise) that’s not annoying, spammy, or both. Some have good intentions and I hate them less than others, but at the end of the day, they’re a major source of annoyance.
Part of why this place is great is engaging with people. I couldn’t care less what a tone-deaf chatbot “thinks” about anything. Lol, one of my site rules since day 1 of running my instance is “No AI/LLM-generated content”, and I enforce that rule vigorously.
I can’t recall the exact phrasing I used, but I’ve said something on this before. It was basically to the effect of “Bots aren’t creating engagement, they’re creating clutter”.
That’s my initial inclination, but I could see value in some conversation-starter service, even a hot-take-posting bot to get a back-and-forth going with humans.
We have all seen the conversations where someone drops a hot take, starts a huge argument, and walks away… a bot could do that, and give people an anchor for content.
I’m not saying I approve of this, just that I see it having some utility in some scenarios.
I can see some utility in that. But here’s how I, personally, view bots on this (or really any) platform:
I’ll scroll and see a post that’s interesting. Look at the comment button, and it’s got one or two comments. Nice! Potential conversation starter. Click into the post, and it’s a bot-generated summary, Piped link, MBFC lookup (that’s the bot I don’t hate as much), and/or some other tone-deaf bot take. Disappointment ensues.
“Well, I don’t have anything to say on this yet, so I guess I’ll check back later” is typically how that goes. Other times, I’ll start a thread and usually get some replies going. In either case, the bot has added no value to the experience. (I do not like bot-generated summaries; that’s a whole other topic though lol)
Can’t say I’ve never dropped a hot take and bailed, but sometimes the replies just aren’t worth responding to :shrug: lol. Though, I usually do try to reply to anyone who makes the effort to respond (and in good faith).
To me, bot submissions just give the illusion of content and activity but lack substance. Yeah, they could be conversation starters, but more often than not, they’re just extra noise to tune out. I have no interest in having a conversation with a bot. The only words I have ever spoken or will ever speak to a bot are “let me speak to a human” lol.
Bots need to be clearly marked as bots. I don’t want to line the fediverse with barbed wire, but I also want transparency about what I am interacting with. (There's a rough sketch of what "marked as a bot" already looks like at the protocol level after this comment.)
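For what it’s worth, some of the plumbing for this already exists: ActivityPub actors carry a type, and Mastodon-style servers generally publish bot-flagged accounts as “Service” rather than “Person”. Here’s a minimal sketch of checking that flag; the actor URL is hypothetical, and it assumes the instance allows unauthenticated actor fetches (some require signed requests):

```python
# Sketch: check whether a fediverse actor advertises itself as a bot
# by looking at the type in its ActivityPub actor document.
import json
import urllib.request

def looks_like_bot(actor_url: str) -> bool:
    req = urllib.request.Request(
        actor_url,
        headers={"Accept": "application/activity+json"},
    )
    with urllib.request.urlopen(req) as resp:
        actor = json.load(resp)
    # ActivityPub actor types include Person, Service, Application, Group, Organization.
    # Bot accounts are commonly typed "Service" (sometimes "Application") instead of "Person".
    return actor.get("type") in ("Service", "Application")

# Example with a made-up actor URL:
# print(looks_like_bot("https://example.social/users/somebot"))
```

Clients could surface that flag prominently, which is really all the “clearly marked” part needs.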
I don’t know how much it would really apply here or how enforceable it is, but genuinely, I think the first step in any real discussion about regulating this is a law that anyone providing an LLM can’t provide it to people who are trying to pass its output off as human. I know bots have been doing this kind of thing for a long time, but this sort of rule should have been put in place a long time ago too.
Yeah, I think we need laws for LLMs
Rule 0: Cannot deny you’re an LLM