Facebook’s artificial intelligence tool can prevent suicide, but nobody knows how it works




For many people who have dedicated their lives to preventing suicide, social media posts can be a valuable data set that contains clues about what people say and do before attempting suicide.

In recent years, researchers have built algorithms to learn which words and emoji are associated with suicidal thoughts. They have even used social media posts to retrospectively predict the suicide deaths of certain Facebook users.

Now, Facebook has launched a new artificial intelligence tool that can proactively identify elevated suicide risk and alert a team of human reviewers trained to reach out to users contemplating fatal self-harm.

An example of what someone might see if Facebook detects they may need help.

The technology, announced on Monday, represents an unprecedented opportunity to understand and predict suicide risk. Before the artificial intelligence tool was even publicly announced, Facebook had used it to help dispatch first responders in 100 "wellness checks" to ensure users' safety. The tool's potential to save lives is enormous, but the company will not share many details about how it works, nor whether it will broadly share its findings with academics and researchers.

That will surely leave some experts in the field confused and worried.

Munmun De Choudhury, an assistant professor at Georgia Tech's School of Interactive Computing, commends the social media company for focusing on suicide prevention, but she would like Facebook to be more transparent about its algorithms.

"This is not just another AI tool, it addresses a really sensitive issue," he said. "It's a matter of someone's life or death."

"This is not just another AI tool, it addresses a really sensitive issue, it's a matter of someone's life or death."

Facebook understands what is at stake, which is why its vice president of product management, Guy Rosen, emphasized in an interview how significantly the AI speeds up the process of identifying distressed users and getting them resources or help.

But he declined to discuss in depth the factors the algorithm weighs, beyond a few general examples such as concerned comments from friends and family, the time of day, and the text of a user's post. Rosen also said that the company, which has partnerships with suicide prevention organizations, wants to learn from researchers, but he would not discuss how, or whether, Facebook might publish or share insights about its use of artificial intelligence.

"We want to be very open about this," he said.

While transparency may not be Facebook's strength, in a field such as suicide prevention it could help other experts save more lives by revealing behaviors or language patterns that emerge before suicidal thinking or a suicide attempt. With more than 2 billion users, Facebook arguably has the largest database of such content in the world.

De Choudhury says that transparency is vital when it comes to artificial intelligence because it instills trust, a feeling in short supply as people worry about technology's potential to fundamentally disrupt their professional and personal lives. Without sufficient trust in the tool, De Choudhury says, at-risk users may choose not to share emotionally vulnerable or suicidal posts.

When users receive a message from Facebook, it does not indicate that AI identified them as high risk. Instead, they are told that "someone thinks they might need additional support at this time and asked us to help." That someone, however, is a human reviewer acting on the AI's risk detection.

It is also impossible to know how the AI determines that someone is at imminent risk, how accurate the algorithm is, or what mistakes it makes when looking for clues to suicidal thinking. Since users will not know they were identified by AI, they have no way of telling Facebook that it mistakenly flagged them as suicidal.

De Choudhury's research involves analyzing social media to glean insights about people's mental and emotional well-being, so she understands the challenges of developing an effective algorithm and deciding which data to publish.

She recognizes that Facebook must strike a delicate balance. Sharing certain aspects of its findings, for example, could lead users to oversimplify suicide risk by focusing on keywords or other signs of distress. And it could potentially give people with bad intentions data points they could use to analyze social media posts, identify those with perceived mental health problems, and target them for harassment or discrimination.

"I think that sharing how the algorithm works, even if they do not reveal all the unbearable details, would be really beneficial."

Facebook also faces a different set of expectations and pressures as a private company. It may consider the suicide prevention AI tool it developed for the public good to be intellectual property. It may want to use aspects of that intellectual property to improve its offerings for advertisers; after all, identifying a user's emotional state could be very valuable to Facebook's competitiveness in the market. The company has previously expressed interest in developing that capability.

In any case, De Choudhury argues that Facebook can still contribute to broader efforts to use social media to understand suicide without compromising people's safety or the company's bottom line.

"I think that sharing academically how the algorithm works, even if they do not reveal all the unbearable details, would be really beneficial," he says, "… because at the moment it's really a black box."

Crisis Text Line, which partnered with Facebook to provide resources for suicide prevention and user support, uses AI to determine people's risk of suicide and shares its findings with researchers and the public.

"With the scale of data and the number of people Facebook has on their system, it could be an incredibly valuable dataset for academics and researchers to understand the risk of suicide," said Bob Filbin, chief data scientist at Crisis Text Line.

Filbin did not know Facebook was developing AI to predict suicide risk until Monday, but he said that Crisis Text Line is a proud and eager partner in working with the company to prevent suicide.

Crisis Text Line trains counselors to de-escalate texters from "hot to cool" and treats dispatching first responders as a last resort. Facebook's human reviewers confirm the AI's risk detection by examining a user's posts. They provide resources and contact emergency services when necessary, but they do not engage the user themselves.

Did you know that texters age 13 or younger report self-harm more frequently? More information and ways to help at https://t.co/ixEAAWHENT

– Crisis Text Line (@CrisisTextLine) August 16, 2017

Filbin expects Facebook's artificial intelligence to pick up on different signals than those that surface in Crisis Text Line's data. People who contact the line do so seeking help and can therefore be more explicit in how they communicate suicidal thoughts and feelings.

A simple example is how the texters most at risk of suicide say they "need" to talk to a counselor. That urgency, as opposed to merely "wanting" to talk, is just one factor the line's AI uses to judge risk. Another is the word "ibuprofen," which Crisis Text Line discovered is 16 times more likely than the word "suicide" to predict that the texter will need emergency services.

Filbin said the Crisis Text Line algorithm can identify, within the first three messages, 80 percent of the text conversations that end up requiring an emergency response.
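Filbin's examples translate naturally into a lexicon-weighted score over the opening messages of a conversation. The sketch below is hypothetical, not Crisis Text Line's actual model; the word weights only loosely mirror the relative strengths he describes, such as "ibuprofen" being far more predictive than "suicide":

```python
# Illustrative only: the weights loosely echo the relative signal strengths
# Filbin describes (e.g. "ibuprofen" being roughly 16x more predictive than
# "suicide"), but Crisis Text Line's actual model and features are not public.
WORD_WEIGHTS = {
    "ibuprofen": 16.0,
    "suicide": 1.0,
    "need": 2.0,   # "need to talk" signals more urgency than "want to talk"
    "want": 0.5,
}

EMERGENCY_THRESHOLD = 10.0  # hypothetical cutoff for flagging a conversation


def flag_conversation(messages: list[str], n_first: int = 3) -> bool:
    """Return True if the first `n_first` messages suggest an emergency response is likely."""
    score = 0.0
    for message in messages[:n_first]:
        for word in message.lower().split():
            score += WORD_WEIGHTS.get(word.strip(".,!?"), 0.0)
    return score >= EMERGENCY_THRESHOLD


if __name__ == "__main__":
    conversation = [
        "I need to talk to someone",
        "I have a bottle of ibuprofen in front of me",
    ]
    print(flag_conversation(conversation))  # True: flagged within the first three messages
```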

This is the kind of insight that counselors, therapists, and doctors hope to have someday. Facebook, by virtue of its massive size and its commitment to suicide prevention, is now arguably leading the effort to somehow put that knowledge in the hands of people who can save lives.

Whether Facebook embraces that position, and the transparency it requires, is a question the company prefers not to answer yet. At some point, however, it may have no other choice.

If you want to talk to someone or are experiencing suicidal thoughts, text the Crisis Text Line at 741-741 or call the National Suicide Prevention Lifeline at 1-800-273-8255. Here is a list of international resources.

