It may sound like a headline straight from The Onion, but “Hundreds of AI Tools Have Been Built to Catch Covid. None of Them Helped” is an actual write-up published in the MIT Technology Review. Even the “smart” software systems from MIT, the institution behind some of Boston’s less than exciting architecture, teamed up with IBM Watson, the same company whose artificial intelligence system was allegedly the first to be loaded onto a hospital patient trolley and rushed out the emergency room exit, and Covid-19 still outsmarted them.
The write-up states:
The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory. In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.
That may sound like the premise of a Terminator movie, but it is the scary reality of the science coming out of what is supposed to be one of the smartest and most innovative institutions in the world.
But what exactly went wrong? A lot.
The culprits included bad data, subpar algorithms, statistical drift, and grant-crazed researchers.
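The article does not show what “statistical drift” looks like in practice, but the idea is simple: the model keeps scoring patients while the population it sees quietly stops resembling the data it was trained on. A minimal sketch, using a hypothetical patient-age feature and SciPy’s two-sample Kolmogorov–Smirnov test (none of which comes from the article), might look like this:

```python
# Illustrative sketch only: detect "statistical drift" by comparing the
# feature distribution at training time with the one seen in deployment.
# The feature and the numbers below are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical patient-age feature: training data from early-pandemic
# admissions, deployment data after the admitted population shifted older.
training_ages = rng.normal(loc=55, scale=12, size=2000)
deployment_ages = rng.normal(loc=68, scale=10, size=2000)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two samples
# are unlikely to come from the same distribution, i.e. the model is now
# scoring patients it was never trained on.
result = ks_2samp(training_ages, deployment_ages)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic {result.statistic:.3f}, p {result.pvalue:.2e})")
else:
    print("No significant drift detected")
```

A tool built without a check like this keeps producing confident predictions long after its training data has stopped being representative, which is one way “potentially harmful” happens.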
For some, this does not come as a surprise at all. As Stephen E. Arnold of Beyond Search stated,
In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results. There’s no reward for pushing through the last mile that takes tech from “lab bench to bedside.”
If you want to read about MIT’s shamelessly documented failure, go here.