Despite its efforts, Facebook has become the subject of recent controversy. For those who are unaware: since the release of its live-streaming video feature, Facebook Live, there has been a growing number of incidents in which people have used the platform for self-destructive purposes.
Without going into detail, there have been several instances of users streaming themselves causing harm to themselves, to others, or both. These incidents have become a clear and visible symbol of what some experts are calling a growing problem. According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the United States, claiming more than 40,000 American lives each year, and by every available measure that number is rising.
So it comes as no surprise that Facebook would take action in response, especially with the company closing in on a user base of nearly 2 billion after reporting 1.86 billion users this past February. Fittingly, when the time for the announcement came, it arrived via Zuckerberg's own Facebook profile.
Over the last few weeks, we've seen people hurting themselves and others on Facebook — either live or in video posted…
Posted by Mark Zuckerberg on Wednesday, May 3, 2017
It's a surprising move: in this age of AI algorithms and automation, the company chose to hire more personnel rather than build software to do the work or acquire an existing startup whose technology could be adapted to the problem.
At this point there are two questions to ask. First, will this measure prove effective? By Zuckerberg's own admission in the post, the company receives "millions of reports" each week about content that users flag. As it stands, his boast that the team caught someone on Facebook Live who was considering suicide feels hollow at such an early stage, when none of the new systems to combat the problem have even been implemented.
The second question is whether other major companies will adopt a similar approach. As many will know, Google recently lost advertisers after ads appeared over extremist videos on YouTube. Its catch-all solution was to block ads on channels with fewer than 10,000 views, a heavily criticized decision, since the problem with extremism is hardly low viewership.
For now, we must wait for the outcome; a public announcement of this size has to prove viable for Facebook to justify the expense. If it succeeds, other companies will likely adopt the same strategy, bringing a wave of similar hiring.
The underlying problem remains, though: more people are using social media to broadcast harmful and illegal acts. Which methods will prove effective and become universal as we move forward: people or machines?
AI would help sort and categorize videos, I'm sure, but how can AI evaluate something that is happening live? That's the hard part for now, so more employees make sense. But what happens once AI is able to watch live video?
I don't have a problem with hiring more people to monitor things the public should not see. I'm sure the volume is nothing compared to what YouTube has to filter, with videos uploaded every second of the day.
What I don't get is why advertisers are leaving YouTube over the content. Can't those companies figure out how to stop advertising on those videos? Google has a robust advertising system; simply not putting ads up at all, instead of looking into the situation, just seems like a lazy way out.
It was only a matter of time before live video surfaced things like suicide and other crimes. At least, for Facebook's sake, they are addressing it... or trying to.