Following its search-engine-based entry into algorithmic suicide prevention this past spring, the Trevor Project is fine-tuning Google’s AI technology by training algorithms on both initial conversations with counselors and the counselors’ post-session risk assessments.
And while the California-based nonprofit targets self-destructive behaviors specifically among LGBTQ youths, the logic behind its platforms (voice, text and instant messaging) is probably generalizable to other population segments.
The Atlantic posted an update on the work July 15.
The Trevor Project’s leaders are aiming to build “an AI system that will predict what resources youths will need—housing, coming-out help, therapy—all by scanning the first few messages in a chat,” reports Atlantic staff writer Sidney Fussell. “Long term, they hope to evolve the AI so it can recognize patterns in metadata.”
Fussell notes that, in May, Google injected $1.5 million into the project.
Read his report: