Human bias is a pervasive element in many online communities, and finding a platform entirely free from it can be akin to searching for the holy grail. Maybe look into self-hosting an instance and penalizing moderators who don’t follow their own rules.
Regrettably, complaining tends to be a common pastime for many individuals. I understand your frustration with users who seem entitled or unappreciative of the considerable effort you’ve dedicated to developing Lemmy. Even so, shifting toward a mindset that treats complaints as opportunities for improvement can be transformative. Publishing a transparent set of rules or guidelines for how you prioritize issues and feature requests would help manage expectations and foster a more collaborative relationship with your community. Not every complaint is actionable, but actively listening to feedback and explaining your prioritization criteria goes a long way toward building trust and goodwill. Open communication and a willingness to consider diverse perspectives lead to a stronger, more user-centric product in the long run.
The philosophy of Complaint-Driven Development provides a simple, transparent way to prioritize issues based on user feedback:

- Get the software in front of as many users as possible and listen to every complaint.
- Identify the complaints you hear most often and fix the top three.
- Release the fixes, then repeat the cycle.
Following these straightforward rules lets you address the most pressing concerns voiced by your broader user community, rather than the vocal demands of a few individuals. It keeps development efforts focused on solving real, widespread issues in a transparent, user-driven manner.
Here’s a suggestion that could help you implement this approach: Consider periodically making a post like “What are your complaints about Lemmy? Developers may want your feedback.” This post encourages users to leave one top-level comment per complaint, allowing others to reply with ideas or existing GitHub issues that could address those complaints. This will help you identify common complaints and potential solutions from your community.
Once you have a collection of complaints and suggestions, review them carefully and choose the top 3 most frequently reported issues to focus on for the next development cycle. Clearly communicate to the community which issues you and the team will be prioritizing based on this user feedback, and explain why you’ve chosen those particular issues. This transparency will help users understand your thought process and feel heard.
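If it helps, here’s a minimal sketch of the tallying step, assuming complaints are collected as labelled top-level comments along with their upvote counts. The labels and data shape are made up for illustration; in practice they’d come from reviewing the feedback thread (or from an export of it).

```python
# Minimal sketch: tally complaints from a feedback thread and pick the top 3.
# The data shape is hypothetical -- each entry is a complaint label assigned
# during review, plus the upvotes on that top-level comment.
from collections import Counter

complaints = [
    ("slow federation", 42),
    ("no image captions", 7),
    ("slow federation", 31),
    ("broken 2FA", 55),
    ("no image captions", 12),
    ("broken 2FA", 3),
]

tally = Counter()
for label, upvotes in complaints:
    # Weight each report by its upvotes so widely supported complaints rise
    # to the top instead of whoever repeats themselves the loudest.
    tally[label] += upvotes

for label, score in tally.most_common(3):
    print(f"{label}: {score}")
```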
As you work on addressing those prioritized issues, keep the community updated on your progress. When the issues are resolved, make a new release and announce it to the community, acknowledging their feedback that helped shape the improvements.
Then, repeat the process: Make a new post gathering complaints and suggestions, review them, prioritize the top 3 issues, communicate your priorities, work on addressing them, release the improvements, and start the cycle again.
By continuously involving the community in this feedback loop, you foster a sense of ownership and leverage the collective wisdom of your user base in a transparent, user-driven manner.
I did read the links, and I still strongly feel that no automated mechanical system of weights and measures can outperform humans when it comes to understanding context.
But this is not a way to replace humans; it’s just a method to grant users moderation privileges based on their tenure on a platform. Currently, most federated platforms only offer moderator and admin levels of moderation, which makes running an instance tedious because of the time spent managing the report inbox. Automating the assignment of moderation levels would streamline this: admins could simply adjust the trust levels of select users to customize their instance as desired.
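As a rough sketch of what I mean, something like the following could derive a moderation level from tenure and activity. The thresholds and field names are purely illustrative, not anything from Lemmy’s actual codebase.

```python
# Illustrative sketch: derive a trust level from account tenure and activity.
# Thresholds and fields are assumptions for the sake of the example.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    created_at: datetime
    posts_read: int
    reports_upheld: int  # reports this user filed that moderators agreed with

def trust_level(account: Account, now: datetime) -> int:
    """Map tenure and activity to a level; an admin can still override it."""
    age = now - account.created_at
    if age < timedelta(days=1):
        return 0  # sandboxed: rate limited, no image or link posts
    if age < timedelta(days=14) or account.posts_read < 100:
        return 1  # ordinary participation
    if account.reports_upheld >= 5:
        return 2  # trusted enough to help triage the report inbox
    return 1
```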
On a basic level, some sandboxing, i.e. image and link posting restrictions along with rate limits for new accounts and new instances, is probably a good idea.
If there were any limits for new accounts, I’d prefer if the first level was pretty easy to achieve; otherwise, this is pretty much the same as Reddit, where you need to farm karma in order to participate in the subreddits you like.
However, I do not think “super users” are a particularly good idea. I see it as preferable that instances and communities handle their own moderation with the help of user reports and some basic degree of automation.
I don’t see anything wrong with users having privileges; what I find concerning is moderators who abuse their power. There should be an appeal process in place to address human bias and penalize moderators who misuse their authority. Removing their privileges could help mitigate issues related to potential troll moderators. Having trust levels can facilitate this process; otherwise, the burden of appeals would always fall on the admin. In my opinion, the admin should not have to moderate if they are unwilling; their role should primarily involve adjusting user trust levels to shape the platform according to their vision.
An engaged user can already contribute to their community by joining the moderation team, and the mod view has made it significantly easier to have an overview of many smaller communities.
Even with the ability to enlarge moderation teams, Reddit relies on automod bots too frequently, and we are beginning to see that on Lemmy too. I never see that on Discourse.
There has to be a way to federate trust levels; otherwise, none of this is applicable to the fediverse. One of the links I posted talks about how to federate trust levels. That way, an appeal is processed by a user with a higher trust level.
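To make the appeal part concrete, here’s a minimal sketch assuming an appeal against a moderator’s decision gets routed to users whose trust level is higher than that moderator’s. Names and structure are hypothetical, and it deliberately ignores the federation details.

```python
# Sketch: route an appeal to reviewers who outrank the acting moderator,
# so the admin is no longer the only possible escalation point.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    trust_level: int

def eligible_reviewers(acting_mod: User, users: list[User]) -> list[User]:
    """Users allowed to process an appeal against acting_mod's decision."""
    return [u for u in users if u.trust_level > acting_mod.trust_level]

mod = User("mod_a", 2)
community = [User("admin", 4), User("senior_user", 3), User("new_user", 0)]
print([u.name for u in eligible_reviewers(mod, community)])  # ['admin', 'senior_user']
```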
A system like this rewards frequent shitposting over slower qualityposting. It is also easily gamed by organized bad faith groups. Imagine if this was Reddit and T_D users just gave each other a high trust score, valuing their contributions over more “organic” posts.
You are just assuming that this would work similarly to Reddit based on karma. I don’t know why you would assume the worst possible implementation just so you can complain about this. If you had read the links, you would know that shitposting wouldn’t help much because what contributes most to Trust Levels in Discourse is reading posts.
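For example, a Discourse-style promotion check could look roughly like this. The thresholds mirror Discourse’s documented defaults for reaching trust level 1 (enter about 5 topics, read about 30 posts, spend about 10 minutes reading), so treat the exact numbers as approximate.

```python
# Sketch of a Discourse-style check where reading, not posting, earns the
# first trust level. Thresholds approximate Discourse's TL1 defaults.
from dataclasses import dataclass

@dataclass
class ReadingStats:
    topics_entered: int
    posts_read: int
    reading_minutes: int

def qualifies_for_tl1(stats: ReadingStats) -> bool:
    # Shitposting doesn't move any of these counters; only reading does.
    return (
        stats.topics_entered >= 5
        and stats.posts_read >= 30
        and stats.reading_minutes >= 10
    )

print(qualifies_for_tl1(ReadingStats(topics_entered=8, posts_read=50, reading_minutes=15)))  # True
```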
I think that in a few years, using an AI for this kind of task will be much more efficient and simpler to set up. Right now, I think it would fail too often.
I very much doubt this kind of system would be implemented for Lemmy.
Yeah an appeal process to mitigate human bias would be nice.
Having AGI as moderators would be a futuristic dream come true. However, until that becomes a reality, it’s crucial to consider the well-being of human moderators who are exposed to disturbing content like CSAM and graphic images. I believe it would be important to provide moderators with the ability to decrease their moderation levels to avoid such content.
Lemmy was better before the Reddit exodus last year, when people started insulting others by calling them tankies and fascists. Before that, it was much more peaceful.