During the past month, he said, first responders checked on people more than 100 times after Facebook software detected suicidal intent. The pattern recognition software scans the text of Facebook posts and comments for phrases that could signal an impending suicide. Questions such as "Are you ok?" and "Can I help?" appearing in a Facebook conversation are among those signals. Facebook workers review each case before local authorities are alerted.
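The scanning described above could be sketched as a simple phrase-matching pass; this is an illustrative assumption, as Facebook has not published its classifier, and the phrase list and function name here are hypothetical:

```python
import re

# Hypothetical signal phrases, taken from the article's examples.
SIGNAL_PHRASES = ["are you ok", "can i help"]

def flag_for_review(text: str) -> bool:
    """Return True if the text contains a phrase that could signal an
    impending suicide and should be escalated to a human reviewer."""
    # Lowercase and strip punctuation so "Are you OK?" matches "are you ok".
    normalized = re.sub(r"[^\w\s]", "", text.lower())
    return any(phrase in normalized for phrase in SIGNAL_PHRASES)

print(flag_for_review("Are you OK? Can I help?"))  # True
print(flag_for_review("See you at lunch"))         # False
```

A real system would rely on a trained model rather than a fixed phrase list, but the key design point from the article survives in the sketch: a match only flags the content for human review, it never contacts authorities directly.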
Facebook said it tries to have specialist employees available around the clock to call authorities in local languages. "Speed really matters. We have to get help to people in real time," Rosen said, adding that the software can save lives. He did not name the countries where Facebook was deploying it, but it will not be used in the European Union because of privacy sensitivities there. The company had not previously been known to systematically scan conversations for patterns of harmful behavior. Facebook's chief security officer Alex Stamos told media that "the creepy/scary/malicious use of AI will be a risk forever, which is why it's important to set good norms today."