Peer-to-peer digital communities: Exploring how risk management is crucial to facilitating positive mental health
April 14 2022 | By Ben Locke Ph.D.
Offering digital peer-to-peer support is an effective and accessible way for universities, colleges, and organizations to enhance traditional campus mental health services. Before engaging with these platforms, however, institutions must understand the associated risks and how to mitigate them. The following perspective explores the risks of digital communities and suggests considerations for risk management.
First, for leadership wondering whether to offer digital peer-to-peer support, it is important to establish why these programs are valuable despite their risks. A 2021 study of 2,000 American college students found that one in five college students uses peer counseling; of the remaining four in five, 62% say they are interested in doing so. Students cite stress, anxiety, depression, social life issues, and loneliness as the most frequent reasons for seeking peer counseling.
In this survey, the students most interested in peer counseling are Black, transgender, and first-generation students, groups that are more likely to say it is “very important” to them to receive support from peers who are similar to them (Born This Way Foundation and Mary Christie Institute, 2022).
Even among students struggling with more severe mental health challenges, participation in peer-to-peer communities is associated with greater social connectedness, a sense of group belonging, and more effective day-to-day coping strategies (Naslund, Aschbrenner, Marsch, & Bartels, 2016). Peer-to-peer communities are a beneficial way to support students, and moving these communities to a digital format makes the service more cost-effective and scalable than one-to-one counseling models.
The following considerations can guide risk management of peer-to-peer digital platforms:
Anonymity
Many observers cite anonymity in social media as a contributor to inappropriate behavior online. Facebook, for instance, bans anonymous accounts because of the perception that they are more likely to spread misinformation or to harass or harm other users. Facebook’s “authentic name” policy requires documentation to link accounts with real people. This decision, however, inhibits people who would be more likely to participate in online conversations if they could do so anonymously. In the Togetherall community, for example, 64% of students said they were willing to share their thoughts and feelings because they could do so anonymously.
The best practice is to allow anonymity, because it encourages candid participation, but only if safeguards are in place so that anonymity does not become a shield for inappropriate behavior. Three such safeguards are a supportive community, moderation by licensed clinicians, and clear escalation policies.
Supportive Community
For online peer-to-peer experiences to be positive and beneficial for students, the community must be supportive and nonjudgmental. Students must be able to trust that if they make themselves vulnerable by sharing their feelings, the community will respond with compassion. In Togetherall’s community, licensed clinicians called Wall Guides protect the community by ensuring that it stays positive and inclusive for all participants. Wall Guides also facilitate a compassionate intervention if a student seems overwhelmed or self-critical.
Moderation by Licensed Clinicians
Studies show that unmonitored social media use can lead to depression, body image concerns, self-esteem issues, social anxiety, and other problems (APA, 2021). In response to criticism, Facebook-owned Instagram pledged to be more discriminating about the type of content shown to young people and to intervene if users dwell on specific problematic types of content (Instagram, 2021).
Most social platforms rely heavily on artificial intelligence (AI) to monitor content, but AI moderation is primitive and frustrating for users. It is a blunt instrument that warns or suspends users based on problematic keywords, and it often does so incorrectly because it is not sophisticated enough to assess context.
AI often flags benign content as problematic because it misses the context, and it often fails to identify truly problematic content because users are savvy enough to evade it. TikTok users, for example, substitute numbers or asterisks for letters to bypass the algorithm: because TikTok removes content that mentions the words “death,” “dying,” or “suicide,” users write variants such as “s*uc1de.” In 2021, TikTok users began using the word “unalive,” and #unalive quickly climbed to 10 million views (Skinner, 2021).
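To make this brittleness concrete, here is a minimal sketch of naive keyword-based moderation, assuming a simple blocklist. The keyword set and function are illustrative only and do not represent any platform’s actual filter; the sketch shows both failure modes described above, a supportive post flagged because the filter cannot read context, and obfuscated spellings that slip past the keyword match entirely.

```python
# A minimal sketch of naive keyword-based moderation and its failure modes.
# The blocklist and examples are illustrative only, not any platform's real filter.

BLOCKED_KEYWORDS = {"death", "dying", "suicide"}

def flag_post(text: str) -> bool:
    """Flag a post if any word matches a blocked keyword (no context awareness)."""
    words = text.lower().split()
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)

# False positive: supportive, benign content is flagged because the filter
# cannot read context.
print(flag_post("So glad the suicide prevention workshop helped you."))  # True

# False negatives: obfuscated spellings and euphemisms slip past the keyword match.
print(flag_post("thinking about s*uc1de again"))  # False
print(flag_post("I wish I could just unalive"))   # False
```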
In contrast to AI, human moderators can evaluate complex content and make a judgment call about what action to take. Licensed clinicians are better still, because they can draw on their education and professional experience to decide which situations to escalate and which simply need to be monitored.
Clinical moderators can also evaluate the tone and effect of each member’s participation to decide when a user is creating a problematic situation in the community. The moderator can then remind the user of the community rules and, if violations continue, remove the user from participation. The result is a better and safer community experience.
Clear Escalation Policies
It is necessary to have clear policies about what moderators should do when members raise concerns about something posted in the community or when participants seem to need additional support. Most social media platforms have some safety guidelines in place, but these guidelines rely on AI and the judgment of the community. Instagram, for instance, asks members to report other members who seem to be suicidal, and its escalation policies include showing the user a pop-up with information about helplines and/or sending first responders for a wellness check.
In contrast, Togetherall provides continuous, proactive clinical moderation 24/7, with licensed clinicians ensuring reliable coverage and safety measures if a situation requires escalated support beyond the community. Because moderation is always active, participants can post around the clock. No matter the time of day or night, licensed clinicians evaluate whether a student participant is at risk of harming themselves or others, and the moderator can then reach out to the member to coordinate higher levels of care.
Togetherall’s first aim is to enable and empower the student at risk to be an active participant in their own safety and treatment. This may mean assisting the student to connect with local mental health providers or campus resources. Clinicians will also use their training and experience to make judgment calls about whether to escalate to emergency services.
Togetherall partners with ProtoCall, the largest provider of behavioral health crisis and contact center services to colleges and universities in the United States. When a moderator determines that there is an urgent risk to health or safety, they follow an escalation pathway to ProtoCall or other crisis procedures in place at the institution. ProtoCall ensures that students have 24/7 access to a licensed clinician with specialty expertise in handling mental health emergencies.
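As a rough illustration of what a clear escalation pathway can look like when written down as an explicit decision rule, the sketch below routes a moderator’s risk assessment to a response tier. The risk levels, wording, and handler behavior are hypothetical and do not represent Togetherall’s or ProtoCall’s actual procedures.

```python
# A hypothetical sketch of routing a moderator's risk assessment to an escalation
# step. Risk levels and actions are illustrative only and do not reflect
# Togetherall's or ProtoCall's actual procedures.
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"            # supportive monitoring within the community
    MODERATE = "moderate"  # clinician reaches out and suggests campus or local resources
    URGENT = "urgent"      # immediate hand-off to crisis services

def escalate(risk: RiskLevel, member_id: str) -> str:
    """Return the action a moderator would take for a given (hypothetical) risk level."""
    if risk is RiskLevel.URGENT:
        return f"Route member {member_id} to the 24/7 crisis line or campus crisis procedure"
    if risk is RiskLevel.MODERATE:
        return f"Clinician contacts member {member_id} to coordinate campus or local care"
    return f"Continue monitoring member {member_id} within the community"

print(escalate(RiskLevel.MODERATE, "anon-1234"))
```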
Conclusion
Digital peer-to-peer communities are growing in popularity, so it isn’t a matter of whether universities and schools will offer them, but when. Given this trend, it is strategic to put risk management policies in place that keep students safe while they enjoy the well-being benefits of participating in peer-to-peer communities.
About Togetherall
Togetherall offers a professionally monitored, risk-managed digital peer-to-peer community. Students engage with other students who share similar experiences in a judgment-free space. Licensed clinicians oversee the community 24/7 and use crisis and escalation processes when needed to ensure the safety of student participants.
The community platform can be integrated into an evidence-based stepped care approach to offer a full continuum of holistic support. Togetherall is effective, with 93% of users reporting that they experienced improvement in their well-being.
To learn more about how Togetherall integrates with existing campus counseling models and provides cost-effective solutions to extend your reach, watch our demo or make an appointment to speak with an expert.