Blood drive news. In 2017 Facebook created a new feature to encourage blood donation, and in May it announced that it was expanding the feature to India, Bangladesh, and Pakistan. Facebook users in these three countries can identify themselves as blood donors and then receive a notification when a hospital or blood bank is in need. Users can also choose to supply their blood type, in which case they receive targeted notifications when a particular type is needed.
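The matching described above (donors opt in, optionally record a blood type, and get alerted when a blood bank posts a need) can be sketched as a toy routine. All of the names and the matching rule here are hypothetical illustrations, not Facebook's implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Donor:
    name: str
    blood_type: Optional[str] = None  # donors may choose not to share a type

@dataclass
class BloodRequest:
    facility: str
    blood_type: Optional[str] = None  # None means any donor is welcome

def donors_to_notify(request: BloodRequest, donors: List[Donor]) -> List[Donor]:
    """Return the donors who would receive a notification.

    A general request goes to everyone; a typed request goes to donors
    whose recorded type matches, plus donors with no type on file
    (an assumption made for this sketch).
    """
    if request.blood_type is None:
        return list(donors)
    return [d for d in donors if d.blood_type in (None, request.blood_type)]
```

For example, a request for O- blood would skip a donor registered as A+ but still reach donors registered as O- or with no type on file.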
“According to the World Health Organization, there are more than 70 countries in the world where people don’t have access to safe blood when needed, and people often have to find their own blood donors,” Facebook’s Head of Health Hema Budaraju told MobiHealthNews in May. “We noted that in many of these countries there’s a lot of action on Facebook. There are people who comment, sometimes a thousand in a day, saying ‘We’re looking for blood donors, can you help?’”
Addressing the opioid epidemic. In July Facebook announced that only pre-certified addiction treatment centers will be able to advertise on its site. This news comes after a wave of criticism about social media’s role in the epidemic.
“People facing addiction or who have loved ones in need should be able to find support without encountering scams or predatory behavior,” Facebook wrote when announcing the change.
Now any center that plans to advertise on Facebook in the US will have to show certification from LegitScript. Certification ensures that facilities have passed a background check and meet state legal and regulatory licensing requirements and privacy practices.
Suicide prevention. Facebook is using artificial intelligence to identify suicide threats or high-risk posts, the company announced in 2017.
"Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts," VP of Product Management Guy Rosen wrote in a blog post at the time of the announcement. "This is in addition to reports we received from people in the Facebook community. We also use pattern recognition to help accelerate the most concerning reports. We’ve found these accelerated reports, which we have signaled require immediate attention, are escalated to local authorities twice as quickly as other reports. We are committed to continuing to invest in pattern recognition technology to better serve our community."
The social network has been working on suicide prevention for more than 10 years, but started employing AI in March 2017. Facebook uses pattern recognition to find posts where either the content or the comments match a pattern associated with suicide risk (for instance, a high volume of comments like “Are you ok?” and “Can I help?”). It also reviews the comments and likes on Facebook Live posts to flag a particular part of the video for human review.
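The comment-pattern signal described above, where a surge of concerned comments flags a post, can be illustrated with a toy rule. The phrase list and threshold here are invented for illustration and bear no relation to Facebook's actual classifier:

```python
# Toy illustration of comment-pattern flagging; the phrases and
# threshold are invented, not Facebook's real signals.
CONCERN_PHRASES = ("are you ok", "can i help", "please talk to someone")

def flag_for_review(comments, threshold=3):
    """Flag a post for human review when enough comments match
    known concern phrases, as the article describes."""
    hits = sum(
        1 for comment in comments
        if any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
    )
    return hits >= threshold
```

A production system would use a trained model rather than a fixed phrase list, but the idea of aggregating weak per-comment signals into a post-level flag is the same.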
More data, better care? News broke in April that Facebook was looking to team up with major US hospitals, hoping to obtain data from the providers about their patients and their care. According to CNBC, the plan was to match that patient data with users’ Facebook profiles in an effort to study ways to offer better and more customized care. The plan was put on hold after the Cambridge Analytica scandal came to light. The company told CNBC that the data would have been used exclusively for research in the medical community.
Looking for a friend. In 2017 the AI chatbot Woebot integrated with Facebook Messenger. The fully automated system offers supportive conversation to people dealing with anxiety or depression.