Multiple game creators describe ineffective moderation on the platform, resulting in unchecked hatred in forums and targeted campaigns of negative ‘anti-woke’ reviews
Does anybody wanna know the actual mechanics of why Steam is poorly user-content moderated?
It’s because they rely primarily on automated systems and a very, very small team of in-house moderators/admins, as opposed to other comparable platforms (social media networks, basically), which have armies of contracted moderators in low-income countries whose job is to accumulate more and more PTSD every day.
That’s how platforms with comparable volumes of user-generated content have done moderation for decades.
Nowadays such platforms are also using those human moderator workforces to train LLMs to be better at auto-moderating, or at least auto-flagging, things.
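For anyone curious what “using human moderators to train an auto-flagger” means mechanically: at its core it’s ordinary supervised text classification over past human decisions. Here’s a deliberately tiny from-scratch sketch (a naive Bayes classifier; the toy data and threshold are invented, and real systems use vastly larger models and feature sets):

```python
import math
from collections import Counter

def train(labeled):
    """labeled: (text, was_flagged) pairs, i.e. past human moderator decisions."""
    counts = {True: Counter(), False: Counter()}  # per-class word counts
    docs = {True: 0, False: 0}                    # per-class document counts
    for text, label in labeled:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def flag_probability(model, text):
    """Naive Bayes with Laplace smoothing; returns P(should be flagged)."""
    counts, docs = model
    vocab = set(counts[True]) | set(counts[False])
    total_docs = docs[True] + docs[False]
    scores = {}
    for label in (True, False):
        total_words = sum(counts[label].values())
        s = math.log(docs[label] / total_docs)  # class prior
        for w in text.lower().split():
            # unseen words get the smoothed minimum instead of zeroing things out
            s += math.log((counts[label][w] + 1) / (total_words + len(vocab) + 1))
        scores[label] = s
    # normalize the two log scores into a probability for the "flag" class
    m = max(scores.values())
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    return exp[True] / (exp[True] + exp[False])

# Invented toy labels, standing in for a log of human moderator decisions:
model = train([
    ("you are scum get out", True),
    ("trash humans like you", True),
    ("great game loved it", False),
    ("fun gameplay nice art", False),
])
```

The point is only that the training labels are a byproduct of exactly the human review work being described: the moderators absorb the content so the model can eventually pre-screen it.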
Valve absolutely should devote more time and energy to restructuring and improving its stages of automated review for user-posted comments and content, and honestly should probably just sunset the Steam Forums system and rethink the approach from scratch.
But… at the same time, the scale is a significant problem.
Steam has a comparable number of overall daily active users to a major social media platform.
… and the ones that do content moderation, well, they have armies of poor people manually reviewing everything, getting PTSD from that work, and nowadays, training an LLM to be a better auto content moderator.
Genuine question for everyone: Do you think that’s an ethically justifiable solution to the problem?
Offshore and concentrate the hate and suffering?
Other genuine question for everyone: What actual technical solution do you think should be implemented?
Should Valve run a massive LLM, an AI, to either directly moderate or screen all user generated content on Steam?
Final genuine question: Does your answer involve the concept that all user content on a platform, or website, should be the legal responsibility of the platform/website operator?
Because if your answer to that last question is yes, well then you’re basically saying we should overturn Section 230 of the Communications Decency Act, which would mean, amongst other things, that any Lemmy instance hosted in the US could itself be taken down if any of its users said something like ‘I hope Donald Trump dies a horrible death, soon.’
Because that’s almost certainly going to be viewed as a direct death threat by the current administration, if not just by the currently existing .world mod team.
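For context on what “screening all user content” tends to mean in practice: it’s rarely one massive model over everything. It’s usually a tiered pipeline where cheap deterministic filters run first, a model scores whatever survives, and only the uncertain middle band reaches a human. A hand-wavy sketch; every word list, weight, and threshold below is made up purely so it runs:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    HUMAN_REVIEW = "human_review"

# Tier 1: a cheap, deterministic blocklist (placeholder contents).
BLOCKLIST = {"exampleslur"}

def model_score(text):
    """Stand-in for the expensive classifier/LLM call; the words and
    weights here are invented just to make the sketch executable."""
    weights = {"hate": 0.6, "die": 0.6, "garbage": 0.3}
    return min(1.0, sum(weights.get(w, 0.0) for w in text.lower().split()))

def screen(text):
    words = set(text.lower().split())
    if words & BLOCKLIST:            # tier 1: obvious cases, no model needed
        return Verdict.BLOCK
    score = model_score(text)        # tier 2: model only sees what tier 1 passed
    if score > 0.9:
        return Verdict.BLOCK         # confident: act automatically
    if score > 0.5:
        return Verdict.HUMAN_REVIEW  # uncertain band: a person decides
    return Verdict.ALLOW
```

Note where the ethical problem concentrates: the whole design exists to shrink the `HUMAN_REVIEW` band, because that band is where the poorly paid humans are.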
Your entire comment reeks of “we shouldn’t fight fire because that puts firefighters at risk”.
There are no 100% ethical solutions to every problem; real life is a compromise. You can get better ethical results by making sure those workers receive adequate monetary compensation for their work and can seek medical help if they need it. Otherwise what’s the solution, let everyone read that same stuff? Why is that more ethical? Is it more ethical for the random user (who may also be a suggestible kid, or a person belonging to a persecuted minority) who reads it? Is it more ethical for the developers who get their game review-bombed by fascists and bigots, and see their source of revenue diminish or fizzle out because of it?
As for the legal responsibility, it becomes the platform’s when the platform is complicit with the users writing hateful stuff. You are not responsible for the one random shithead declaring his love for Mein Kampf. You are responsible for the hundreds of users who do so while you repeatedly ignore the reports of their misconduct, thus implicitly accepting and normalizing their behavior.
Additionally, when hateful behavior is accepted and normalized, human shit stains will come in droves and multiply the problem tenfold. By moderating their spaces, platforms would prevent a lot of those hateful messages from being written in the first place.
I think that’s quite an unfair characterization.
Primarily because firefighting tends to be a fairly exclusive field that requires a lot of training and tends to pay pretty darned well.
Whereas the armies of content moderators tend to be incredibly poorly paid. The entire way this kind of work gets done is that it is nearly always handed, entirely or largely, to the lowest bidder in the poorest places possible.
Compare that to firefighters: municipal firefighting, at least, tends to be a fairly local affair.
(With the massive, glaring exception of using prisoner labor to fill gaps in often extremely dangerous firefighting conditions, which is far more comparable to exploiting people who don’t really have better options.)
I am pointing out that yes, the problem is exactly that none of the potential solutions here are ethically wonderful; this is not an ‘oh, they could obviously just do this simple and easy fix and everyone would be happy’ kind of situation.
So… your ethical calculus seems to conclude that stopping the spread of bigotry and fascist rhetoric in richer countries is worth the cost of the sanity of workers in poorer countries.
Your ethical calculus seems to be that if hundreds of users of a website/platform don’t get banned rapidly for violating the TOS, then the platform should be held legally liable for that. That would mean you believe basically every platform with over roughly half a million DAU that doesn’t use a complex, layered system of LLMs (with their absurd economic and environmental costs) or maintain a sizeable-to-massive human moderator team should be sued or fined into nonexistence.
… Unless you maybe want to clarify more exactly what you mean here.
You also don’t directly address the idea of using an LLM for these tasks at all… which is what all of the megaplatforms with much more active, consistent, rapid, and often overzealous or erroneous moderation do.
I’m just trying to present the actual totality of the moral ramifications of the involved systems and practices relevant to this topic.
If confronting the actual ugliness of them challenges you, makes you defensive and accusatory, good.
That means you likely never thought about the totality of the situation here that deeply.
I already answered your questions, but you seem more intent on discussing abstract ethics like an armchair philosopher than on the real problem at hand.
Whereas the armies of content moderators tend to be incredibly poorly paid. The entire way this kind of work gets done is that it is nearly always handed, entirely or largely, to the lowest bidder in the poorest places possible.
[…]
So… your ethical calculus seems to conclude that stopping the spread of bigotry and fascist rhetoric in richer countries is worth the cost of the sanity of workers in poorer countries.
Why is the assumption that those workers must be poorly paid? If Valve, the multi-billion-dollar company whose owner owns multiple yachts as well as the company that builds them, doesn’t pay its workers adequately, then Valve is at fault. The solution shouldn’t be to throw up our hands and go home. There is a solution, but they aren’t willing to take it because it would require them to spend money, which is what I said in my first comment.
Your ethical calculus seems to be that if hundreds of users of a website/platform don’t get banned rapidly for violating the TOS, then the platform should be held legally liable for that […]
You know damn well what I meant, but you keep this enlightened bullshit going.
Valve literally got reports about those reviews and ignored them. They are at fault. Full stop.
If confronting the actual ugliness of them challenges you, makes you defensive and accusatory, good. That means you likely never thought about the totality of the situation here that deeply.
Please stop this enlightened philosopher bullshit. It’s painful to read and makes you look dumb.
I believe the answer is simply to give better moderation tools to the developers on their own games’ Store and Forum pages, since it’s developers who seem to have an issue with current moderation.
Well ok, that sounds reasonable to me!
What kinds of tools do you mean?
Like, I’m not trying to be duplicitous, I genuinely want Steam to not be a cesspool.
https://partner.steamgames.com/doc/marketing/community_moderation
There’s an overview of what currently exists.
Yeah, a lot of it is based on having to manually flag things as harassment or bigotry or the like, especially when it comes to actual game reviews, and it is obviously the case that whatever automated systems Valve currently has in place to auto-flag things are not sufficient.
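The piece most obviously missing from a purely manual flag-and-wait flow is automatic escalation: enough independent reports should hide an item pending review instead of waiting for a moderator to notice. A rough sketch of that idea; this is not Valve’s actual system, and every name and threshold here is invented for illustration:

```python
from collections import defaultdict

# Assumption: a threshold like this would be tuned per surface
# (store reviews vs. forum posts vs. profile comments).
AUTO_HIDE_AFTER = 5

class ReportQueue:
    """Sketch of report escalation: enough distinct reporters hides the
    item and queues it for a human, rather than letting reports rot."""

    def __init__(self):
        self.reporters = defaultdict(set)  # item_id -> distinct reporter ids
        self.hidden = set()
        self.human_queue = []

    def report(self, item_id, reporter_id):
        # A set per item means one user spam-reporting counts only once,
        # which blunts the brigading version of the same attack.
        self.reporters[item_id].add(reporter_id)
        if len(self.reporters[item_id]) >= AUTO_HIDE_AFTER and item_id not in self.hidden:
            self.hidden.add(item_id)
            self.human_queue.append(item_id)  # a human still makes the final call
```

The design choice worth noticing is that hiding is reversible and review still happens; the automation only changes which state the content sits in while it waits.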
And just for more context, here is the news feed for Steamworks itself, which is, more or less, the update pipeline for Steam as game devs interface with it, and also the system that would get updated with any new content moderation features:
https://store.steampowered.com/news/group/4145017
And here is basically the Steam Group for Steamworks/Steam itself, more or less, that may also be relevant:
https://steamcommunity.com/groups/steamworks