The Google Life Sciences Division is up to something powerful and concerning. Newly independent under Alphabet, the multinational conglomerate that is Google’s parent company, Google Life Sciences attracts top scientists from across America who seek the company’s extensive funding, experience with big data and lack of bureaucracy. These scientists are also buying into Google’s mission to shift health care from a reactive paradigm to a proactive one.
What does this mean? Currently, doctors respond to a problem once a patient starts to show symptoms of it. You may go to your doctor’s office for an annual checkup, but most of the time you end up in the hospital because something feels wrong. The problem with this approach is that the disease whose symptoms you have just begun to show may have been developing for quite some time. This happens with Alzheimer’s disease: A man will see a neurologist after noticing that he has been having trouble remembering things, even though neurobiological evidence suggests that the disease begins long before symptoms appear. The earlier a disease can be detected, the better, but because doctors react to symptoms rather than predict them, early diagnosis depends heavily on chance.
Researchers at the Google Life Sciences Division want to solve this problem by rethinking health care. These researchers envision a world where people wear technologies that constantly monitor their health. Then, if something goes amiss, the gadget notifies its wearer, who can immediately meet with a doctor. This person would have a much better chance of recovering effectively than if he had waited for symptoms to appear.
Inspired by this approach, Dr. Thomas Insel, former director of the National Institute of Mental Health, decided to move to Google. He is currently considering a project designed to detect psychosis — a symptom of schizophrenia — early through language analytics. Essentially, this project would involve developing an algorithm that could detect the disorganized reasoning characteristic of a schizophrenic patient’s speech. Such an algorithm could hasten the diagnosis of psychosis tremendously, potentially improving outcomes for people with schizophrenia.
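To get a sense of how language analytics might flag disorganized speech, consider a toy sketch of one common idea: measuring how semantically coherent consecutive sentences are. Real research systems use learned word embeddings; the version below is only an illustration using simple word-count vectors, and all function names are my own, not part of any actual Google project.

```python
import math
from collections import Counter

def sentence_vector(sentence):
    # Bag-of-words count vector; a toy stand-in for a real embedding.
    return Counter(sentence.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def coherence_score(sentences):
    # Mean similarity between consecutive sentences; lower scores
    # suggest more topic-jumping, disorganized speech.
    sims = [cosine(sentence_vector(s1), sentence_vector(s2))
            for s1, s2 in zip(sentences, sentences[1:])]
    return sum(sims) / len(sims)

organized = ["the cat sat on the mat", "the cat likes the mat"]
disorganized = ["the cat sat on the mat", "planets orbit distant suns"]
print(coherence_score(organized) > coherence_score(disorganized))  # True
```

The organized pair shares many words, so its score is high; the disorganized pair shares none, so its score is zero. An actual diagnostic tool would need far richer semantic models and careful clinical validation, but the basic intuition is the same.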
However, while such a project is purely hypothetical for now, it is worth considering the privacy issues at stake in implementing such a plan. Would Google be monitoring literally everything we say, searching for semantic inconsistencies? Might this make people reluctant to share their thoughts, for fear of being diagnosed as schizophrenic? It does not seem problematic to me for a company to monitor my glucose levels to manage diabetes, but once it starts peeking at my use of language, my privacy could be at risk.
Interestingly, a parallel debate already exists within the realm of terrorism and privacy. Are we willing to sacrifice our privacy for the sake of national security? I think yes, although I recognize the potential for abuse of such power. I think similarly about health care; big data has the potential to make us safer and healthier. However, what do we do when that data overlaps with our personal information?
Ayan Mandal is a junior in the College. Grey Matter appears every other Tuesday.