Try a thought experiment: Imagine you’ve been asked to predict whether it will rain the day after tomorrow, and someone’s life depends on your accuracy. That person’s loved ones are looking to you to get it right. If you fail and the person dies, not only will they be devastated (not to mention how you will feel), but they may very well hold you responsible in court for their loved one’s death.
Feeling a little (or a lot) nervous? Then you’ve begun to understand what it’s like for a mental health professional doing a suicide risk assessment. Believe it or not, much like the weather, suicide is not predictable with a high degree of accuracy. To the contrary: Even highly sophisticated statistical models get it wrong more often than they get it right. Although researchers have identified a wide variety of risk factors associated with suicide (such as depression, social isolation and substance abuse), the vast majority of people with these risk factors do not kill themselves. Thus, predicting suicide based on risk factors results in huge numbers of “false alarms.” Likewise for questionnaires: helpful but not prophetic.
To add to the frustration, merely asking an individual whether he or she is planning to end his or her own life raises a bundle of other issues. The last thing on the mind of a seriously suicidal individual is sharing that information; the person knows this would likely move others to intervene. Preservation of life, a given among loved ones and professionals, is not a goal that this person shares.
Genius at work
Dr. Matthew Nock*, a Harvard psychologist and recent recipient of a MacArthur Foundation “genius” grant, doesn’t have a crystal ball, but he is asking an interesting, fundamental question: Might there be a way of detecting suicide risk other than asking the patient or looking at the patient’s history for clues to the future?
Enter the Implicit Association Test (IAT).** This is a pretty straightforward, computer-administered task that requires the test taker to quickly place words presented on the screen in one category or another. For example, does the word “apple” belong in the category “red” or “blue”? Sweet or salty? Food or tool? Now try this one: Obama – me or not me? This one’s a little different, for now we begin to get into matters of values and emotions. Such internal processes are not fully known to us; they are often unconscious (hence the term “implicit”), but the IAT has shown that they are detectable.
So, how about this one: Death – me or not me?
A better predictor?
The actual task is a bit more complicated than this, but, in essence, what Nock has discovered is that people at risk for suicide take slightly more time (in milliseconds) to make discriminations around death and suicide than nonsuicidal people. Moreover, performance on this task predicts future suicidal behavior above and beyond depression, clinical assessment, past suicidal behavior or even the patient’s own predictions. In other words, a measurable process, beneath the individual’s level of conscious awareness, predicts future behavior better than professional judgment or self-report.
This method is still in its infancy and is not nearly ready for “prime time” in clinical practice. But it does inspire some hope that research eventually might enhance our ability to save lives in an arena where clinical wisdom thus far has fallen short.
Perhaps there’s hope for better weather forecasts as well.
**To sample this test with respect to such emotional issues as political preference or racism, visit www.implicit.harvard.edu. It’s free and anonymous.
*Editor’s note: Dr. Nock will give a presentation describing his work at Grand Rounds October 5 at the Menninger Department of Psychiatry & Behavioral Sciences at Baylor College of Medicine.