Facebook Groups were a hot topic at the social media company’s Facebook F8 conference earlier this week, and health was one of the main focuses. The social media platform announced that it has created a “Health Support” tool that will help users find groups to fit their health concerns and needs.
“Different communities have different needs. From communities that are built around specific circumstances like health conditions, or interests like gaming, or a neighborhood, or even around a common purpose like finding a job or shopping,” Fidji Simo, head of Facebook app, said at F8 earlier this week.
As Facebook continues its overarching focus on privacy, the company plans on creating tools to help maintain the confidentiality of health group users.
“Now Health Support groups will let members ask admins to post anonymously on their behalf to protect their privacy around sensitive topics,” Simo said.
WHY IT MATTERS
When it comes to protecting a user’s privacy, Facebook continues to be the cautionary tale in the tech world. Of particular note has been the platform’s role in the Cambridge Analytica scandal, in which data from as many as 87 million Facebook users was harvested by the consulting firm working for the Trump campaign.
Allowing users to remain anonymous when posting about a health condition could indicate that the company is moving toward greater privacy. However, many critics remain skeptical of Facebook.
WHAT'S THE TREND
Facebook has been involved in the health world for some time now. In February the platform rolled out the US version of its blood donation feature, which lets users sign up as a blood donor and receive notifications when blood banks near them are in need.
It has also initiated a new strategy to curb the spread of vaccine misinformation on both its primary social media platform and Instagram. It is now reducing the prominence of certain flagged groups, pages and searches (though not banning them), as well as banning advertisements containing false vaccine information.
Facebook has also been joining the conversation around suicide prevention. Starting in November 2017, the platform began using artificial intelligence to identify suicide threats or high-risk postings. The technology can flag those posts and then prioritize the content so that the Community Relations team can address the most immediate dangers first. Additionally, the AI system can help the Community Relations team assess whether they need to call a first responder to assist the user.