They say that there’s an opportunity inside every crisis. Right now, we’re all about to find out how true that old saying is. The world may never have faced a crisis like the one it’s going through right now, but there are still plenty of people trying to make the best of it, and just as many people trying to find solutions to the problems we’re facing. Some of those solutions will come in the shape of advanced artificial intelligence technology.
We’ve all quietly accepted for some time that artificial intelligence will have a bigger say in the way we live our lives in the years to come. New artificial intelligence initiatives are announced almost every month. Earlier this year, it was announced that AI technology would be used to assist players on UK slot sites, and gamblers who use slots terminals inside casinos. The AI will spot irregular betting activity from online slots players, and step in if it believes that the players are showing signs of impulsive gambling. It’s a way of keeping online slots a fun hobby rather than a potential addiction, but it’s still a niche application of AI. Now, faced with the fact that many people who should be out working are at home self-isolating, AI is being asked to do more.
Perhaps unsurprisingly, one of the first companies to announce an expanded role for AI in the past few days was Facebook. With so many of its employees self-isolating or unable to go about their usual business, it’s allowing AI to do the important job of moderating posts on the social media website. The company warned before setting the AI live that there would be problems, and it didn’t take long for those warnings to be borne out. Because everyone on social media is currently posting about the coronavirus, the AI incorrectly started flagging coronavirus-related posts as spam. The issue was resolved quickly, but it underlined the fact that even the AI software of super-rich technology companies isn’t yet advanced enough to reliably perform the same tasks as a human being, or make the same common-sense judgments. It’s not the way Facebook wanted to get the experiment off the ground, but will other companies fare any better?
If we want to have any confidence that the virus-related content we see from our usual news providers (and the internet in general) is accurate, we’d better hope that the answer to the question we just posed is ‘yes.’ It isn’t just Facebook that’s struggling to cope with coronavirus difficulties at the moment. LinkedIn, Twitter, Reddit, YouTube, Google, and Microsoft have all identified ways in which they’re going to be challenged in the weeks ahead like never before, and they’re all hoping that AI initiatives might play at least a small part in making the situation better for them. In a joint statement, they’ve spoken of wanting to promote something they’re referring to as ‘authoritative content,’ which is business-speak for ‘correct information.’ In practical terms, that means ensuring that verified information is seen, and misleading information is removed from the internet as soon as possible. That’s a bigger moderation task than any of these companies has ever faced before, and even in normal circumstances, a human workforce would struggle to cope with a job so big. AI isn’t just an optional extra here – it’s a necessity. We just don’t yet know whether this huge assignment has come too early in the development of the AI technologies that exist today.
So far, all we’ve talked about is the moderation of information posted on the internet. Perhaps you don’t think that’s a big deal – and perhaps you’re right. That isn’t the only thing that AI is being asked to do, though. The first large-scale outbreak of COVID-19 occurred in China, and it didn’t take the Chinese authorities long to start involving artificial intelligence in their coping strategy. Alibaba, a big name in Chinese e-commerce, has come up with an AI system which it claims can not only accurately diagnose the infection, but can do so in a matter of seconds. The company claims that the accuracy rate of the system is above 95%. If that’s true, it would be an impressive achievement, but a lack of reliable peer-reviewed evidence means that we can’t yet be sure we can trust the system. We can, however, be sure that the health authorities of every other developed nation in the world are looking at that claim and wondering if they could also use the technology.
This is an exceptionally difficult time for health care providers all over the world. Thousands, if not millions, of people need to be tested for the virus and then given patient-specific advice on the best way to treat it. There’s a big problem with that, though – every in-person diagnosis exposes another healthcare worker to the virus, and they can then pass that virus on to the next patient they see. Just the simple act of trying to diagnose the virus can ultimately help it to spread. Right now that’s a grim necessity, but it needn’t be the case if we can rely on artificial intelligence to diagnose the infection without the need for human assistance. It would involve putting a lot of trust in computers, but if we’re not prepared to do that right now, when will we ever be?
Artificial intelligence will have a huge role to play in the way we live our lives in the future, but it probably wasn’t ready to have this level of demand placed upon it at this stage of its development. Nevertheless, this is an emergency, and emergencies give us the freedom to try things we would never normally consider. The use of artificial intelligence in such vital roles is one of those experiments. When all of this is over, and the world begins to get back to normal, we may find that the field of AI has taken a huge leap forward because of it. The fields of healthcare and content moderation may never be the same again – and that might only be the start of AI’s greater expansion into every aspect of our reality.