Artificial Intelligence in the Global Spotlight: Opportunities and Risks


I’m excited about the power and potential of AI. I even cofounded a company, Metadvice, to put that power at the fingertips of clinicians making ‘precision decisions’ on complex cases. But I’m well aware of the concerns about runaway AI, or AI turned to criminal or malicious purposes, so I am also involved with Haia, which seeks greater alignment of AI with human values.

So it was with a ‘Haia’ hat on that I yesterday attended a satellite meeting at the UK Prime Minister’s Bletchley Park Global Summit, which has attracted senior politicians and executives from across the world, including the US Vice President, the President of the European Commission and China’s Minister of Science. And, of course, Elon Musk, along with Sam Altman of OpenAI – who recently told me he wants to see globally consistent regulation. My own contribution to this debate comes from my lengthy experience with pharmaceutical regulation, which has some features that AI regulators can learn from: benefit/risk assessment based on objective testing, clear accountability for product safety, and a mix of government and self-regulation. Anyway, let us see what the Summit comes up with.

My strong belief is that, for healthy longevity, AI can offer very positive benefits for little risk. What might some of those benefits be? As we have already found at Metadvice, AI can deduce from someone’s medical records (and perhaps from genetic markers) what future diseases they may face and – if the disease cannot be prevented – what the right treatment will be for the version of the disease they will suffer.

But medical records are just one dimension of our health life: increasingly we will have rich data sets from wearables. Wearers of an Apple Watch can see their heart rate variability, with its indication of stress level; oxygen saturation, with its link to lung health; and of course the various trends in exercise and other physical activity. To this we can add the output from a continuous glucose monitor (CGM), so that the wearer can see not only their average glucose level but also the glucose ‘spikes’ that result from food intake. We now know that these spikes are very individual and depend on the timing of the food intake and the composition of the gut microbiome that processes it. The real-time, moment-by-moment nature of CGM data would overwhelm any clinician trying to track it without machine intelligence to deliver an intelligible, actionable summary. So Metadvice is working with Swiss diabetes specialists to characterise patients and their insulin control on the basis of individual CGM traces.
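To make the idea of an ‘intelligible, actionable summary’ concrete, here is a minimal sketch of the kind of reduction a CGM pipeline performs – this is not Metadvice’s actual method, and the thresholds (a 3.9–10.0 mmol/L time-in-range band, a 7.8 mmol/L spike threshold) are illustrative assumptions only:

```python
from statistics import mean

def summarise_cgm(readings, spike_threshold=7.8):
    """Reduce a CGM trace (mmol/L, one reading per interval) to a summary.

    Returns the mean glucose, percentage of time in a 3.9-10.0 mmol/L
    range, and the indices where the trace first crosses the spike
    threshold upward -- the 'spikes' a clinician would want flagged.
    """
    avg = mean(readings)
    in_range = sum(1 for r in readings if 3.9 <= r <= 10.0) / len(readings) * 100
    spikes = [i for i in range(1, len(readings))
              if readings[i] >= spike_threshold > readings[i - 1]]
    return {"mean": round(avg, 2),
            "time_in_range_pct": round(in_range, 1),
            "spike_onsets": spikes}

# A short illustrative trace: stable overnight, then a post-meal spike.
trace = [5.2, 5.1, 5.3, 5.4, 6.9, 8.4, 9.1, 8.0, 6.5, 5.8]
summary = summarise_cgm(trace)
```

A real system would, of course, work over weeks of five-minute readings and learn per-patient thresholds rather than fixed ones, but the principle is the same: thousands of raw points collapse into a handful of numbers a clinician can act on.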

The diversity and depth of all this traditional and newly arrived health data cries out for AI to make sense of it! AI has arrived at just the right time to turn the ‘digital life’ into something practically useful. In a future newsletter, I’ll be interviewing Michael Geer, cofounder of Humanity, a leading digital health solution with a goal to maximise healthspan.

At the population level, we will see other AI outputs as we feed into our models a wide range of non-medical data on demographics, employment, location, ethnicity and food shopping. The list of potential parameters that might affect health is endless, and the larger the dataset, the more benefit will come from AI-based data analytics. This will guide policymakers who wish to maximise the healthspan of whole groups of people, particularly the socially or economically disadvantaged, who often have less access to healthcare.

Are there any downsides, any risks to the application of AI to healthy longevity? Firstly, we must make sure that algorithms are as unbiased as possible (which is why Metadvice trains its AI engine on medical guidelines, not Big Data from specific and potentially skewed sources). Secondly, at least for now, we must keep a ‘human in the loop’ – AI can develop suggestions or recommendations, but a qualified professional should make the decision and be able to explain it to the patient. On that last point, AI practitioners must ensure their models are as transparent as possible – black boxes will not engender trust! This can be done using techniques like Shapley values, which explain an AI recommendation in terms of the factors that contributed most to it.
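For readers curious how a Shapley explanation works, here is a toy, exact computation on a hypothetical linear risk score (the feature names, weights and baseline are invented for illustration; real systems use libraries such as `shap`, since exact enumeration is exponential in the number of features):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for a model with few features.

    v(S) is the model output when features in S take their values from
    the patient x and the rest take baseline (e.g. population-average)
    values. Each phi[i] is feature i's weighted average marginal
    contribution over all subsets of the other features.
    """
    n = len(x)

    def v(subset):
        masked = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(masked)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Hypothetical risk score over [age, systolic BP, HbA1c] -- not a real model.
model = lambda f: 0.03 * f[0] + 0.02 * f[1] + 0.5 * f[2]
patient = [70, 150, 8.0]
population_mean = [50, 120, 5.5]
contributions = shapley_values(model, patient, population_mean)
```

For a linear model each attribution reduces to weight × (patient value − baseline value), so the clinician can be told, for example, that elevated HbA1c contributed most to this patient’s score – exactly the kind of factor-by-factor rationale that builds trust.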

Returning to the Summit, does AI represent an existential risk for humanity? Well, the jury is out, but experts such as Stuart Russell and Max Tegmark, both of whom I met yesterday, believe this is a possibility, and the media are at risk of whipping up anti-AI public sentiment. If that resulted in outright bans or over-bureaucratic regulation, many of the AI benefits I have described would themselves be at risk. We need to be very specific in controlling the greatest risks, like fully autonomous systems without ‘off’ switches, whose behaviour even their creators cannot understand.

I’d love to hear your thoughts on how we maximise the benefit/risk of AI in healthcare. Please get in touch and I’ll include your thoughts in future newsletters.
