In 21st Century Britain, the phrase ‘Take back control’ has acquired a certain potency.
On hearing it, about half of Brits will swell with pride, and the other half will go into anaphylactic shock. In 22nd Century Britain (and elsewhere), the significance of the phrase may well extend far beyond parochial matters of fish and passports. By then, it’s likely we’ll have been living for some time with a highly developed form of deep learning, an approach to artificial intelligence (AI) which, even at its current level of sophistication, has already enabled a step change in medical imaging. In the future, when we are diagnosed with a disease, when we are advised to follow a particular course of clinical management, this will be based on what a machine ‘thinks’.
There’ll be no opportunity for human supervision: the complexities of deep learning mean the machine’s decision will be utterly inscrutable.
No-one can seriously doubt that the application of ever more sophisticated AI will yield tremendous benefits in the management of health and disease. But equally, I think it’s a safe bet that some in the next century will be urging humanity to take back control (and that this will be futile).
The potential for deep learning to transform healthcare is the subject of a recent JAMA viewpoint by Geoffrey Hinton, a member of Google’s Brain Team. Here he argues that it’s only a matter of time before algorithms are developed that allow machines to engage in unsupervised learning. This, he says, is a pivotal and so far mysterious aspect of human cognition, and it holds the key to closing the gap between human intelligence and AI.
An accompanying editorial urges clinicians, AI researchers and developers of AI applications to work together to accelerate progress and limit any adverse consequences as deep learning enters the mainstream of clinical medicine.
And a recent article in Nature (17 September 2018) provides real-world evidence of deep learning in action.
A team from the New York University School of Medicine successfully trained and tested a deep learning algorithm to identify and distinguish between two common types of cancer. The network was then further trained on images labelled with the mutations underlying the cancer, after which it could accurately predict those mutations from unlabelled images.
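The supervised workflow behind a result like this (label a training set of images, fit a model, then classify unseen images) can be sketched in miniature. The following toy example is my own illustration, not the NYU team’s code: it substitutes a plain logistic classifier and synthetic 8×8 ‘images’ for a deep network and pathology slides, but the shape of the pipeline is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow in exp for strongly classified points.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Toy stand-in for the labelled training set: two "tissue types",
# each a noisy 8x8 image with a different intensity pattern.
def make_images(n, label):
    base = np.zeros((8, 8))
    if label == 0:
        base[:4, :] = 1.0   # bright top half
    else:
        base[4:, :] = 1.0   # bright bottom half
    return base.ravel() + 0.3 * rng.standard_normal((n, 64))

X = np.vstack([make_images(200, 0), make_images(200, 1)])
y = np.array([0] * 200 + [1] * 200)

# A logistic classifier trained by gradient descent: a deliberately
# tiny proxy for the deep network described in the study.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)              # predicted probability of class 1
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Classify fresh, unseen images (the analogue of the held-out test set).
X_test = np.vstack([make_images(50, 0), make_images(50, 1)])
y_test = np.array([0] * 50 + [1] * 50)
pred = (sigmoid(X_test @ w + b) > 0.5).astype(int)
accuracy = float(np.mean(pred == y_test))
```

On these cleanly separable synthetic patterns the classifier should reach near-perfect accuracy; the point is the structure of the supervised pipeline, not the model itself.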
The implications for those of us in medical communications are far-reaching.
No brand will resonate with an artificial intelligence, so will the marketing of a new product still depend on building one? (If not, that would at least solve the perennial problem of coming up with brand names: for machines, a product could simply be denoted by a number.)
What about medical education?
No doubt this will still be important in establishing the rationale for any novel intervention. Prescribers will still have to be clear about how and why a new option might help to address an unmet need, and in which patients it should be considered. But rather than communicating directly with the prescribers, as we do at the moment, our efforts will need to be focused on a new breed of middlemen: the humans responsible for keeping the machines’ empirical knowledge up to date.
As the UK Government continues to face criticism for failing to prepare adequately for Brexit, perhaps it’s time for us to start thinking now about how we will pursue medical communications in a future dominated by our new, dispassionate and (we assume) entirely rational stakeholders.
Hinton G. Deep Learning—A Technology With the Potential to Transform Health Care. JAMA. Published online August 30, 2018. doi:10.1001/jama.2018.11100
Also look out for Hannah Fry’s new book about being human in the age of algorithms, entitled “Hello World”.