Lunch Hour Lecture: AI in healthcare – over-hyped or under-used? – Dr Amitava Banerjee


My name is Rose Luckin. I’m a Professor at UCL Knowledge Lab. I’m going to introduce
our speaker today. It’s a real pleasure for me to come
and introduce Ami, because I first met him
when I gave a Lunch Hour Lecture here about 18 months ago. Since then, we’ve collaborated
on a grant application and are finding various ways
of working together. It’s lovely to come back
and introduce him to you. When I said to Ami,
what would you like me to say, he said, and I thought
this was classically modest, he said, I guess I’m happy for you
to say I’m not an expert in AI. I thought, yeah, but you are an expert
in an amazing number of things. So, Ami is definitely an expert in evidence-based healthcare
and informatics and most of all,
patient-centred care. When I looked at his CV, it reads like what any budding medic
would want their CV to look like. He is a Senior Clinical Lecturer in the Clinical Data Science department in the Institute of Health Informatics
at UCL. He’s also an honorary consultant
in cardiology at UCL. He’s also leading the UCL Medical
School’s e-health curriculum. That is quite a lot
to be doing! When I looked through his CV,
it has all the names you’d love to see. He studied medicine
at Oxford Medical School. He has a Masters in Public Health
from Harvard, and a PhD from Oxford. He’s worked for the
World Health Organisation. He is a clinical cardiologist,
still practicing. He’s the education lead
at the Health Informatics Unit at the Royal College of Physicians. A fellow of the Royal College
of Physicians of London and Edinburgh, the European Society of Cardiology, the American Heart Association, the Higher Education Academy, and the new Faculty
of Clinical Informatics. What else can I say? I can’t think of a better person
to come and talk to us on this topic in our lunch hour. As the sign says, bring your lunch
and your curiosity. I haven’t brought my lunch
but I have brought my curiosity. I know you will enjoy this talk. It’s over to you, Ami. Many thanks Rose
for your kind words. I am based at the Institute
of Health Informatics at UCL. My name is Ami Banerjee and I am also a cardiologist at
University College Hospital and Bart’s. This is the conflicts of interest slide. I like to call it
the limitations slide. As Confucius is quoted as saying, real knowledge is knowing
the extent of your ignorance. I want you to know that
I’m a clinician 30% of the time, 70% of the time
I’m researching and teaching. I’ve previously worked with
these pharma companies. There’s a digital medical company
called Medopad, which is funding research
that I’m doing at the moment. I’m also a trustee
of the South Asian Health Foundation. I am not an expert in AI. I’m very interested
in its applications to healthcare. I am interested in evaluation
of new technology in healthcare, digital health,
evidence-based medicine. I’m at that stage of life, where my mum is sending me
inspirational and aspirational quotes. It used to be by email; now I sometimes get them by WhatsApp. She sent me this one
when my daughter was younger. Children are made readers
on the laps of their parents. When I tried to explain this
to my daughter, she said, it’s not laps,
it’s laptops. That’s how big computers and AI are. When you look at my Amazon feed, at the books it thinks I should be buying, this comes up. Everybody needs to know something
about AI. What is it? For a simple doctor, I would say that AI is essentially about performing
a given function. You’re using that intelligence
to perform a task. Machine learning is a subset, where, from the input data sets, the machine is learning
how to get to the output, how to get to that task. Deep learning is a further subset still
of machine learning, where you have several layers
of neural networks, to reach that task.
For my daughter, her task is to get Santa to bring the gift she wants, which is a Lego doll’s house. The machine learning was that
she realised that just sending a letter to Santa
isn’t enough. Putting a bit of push on Dad
seemed to work last year. She did the same this year, but the deep learning is
that she asked Alexa, can you ask Daddy
to buy me a doll’s house. She cut out the middle man,
Santa. That’s deep learning. Of course, our current health secretary,
Matthew Hancock, within a week of joining as Health Secretary, had signed a deal
with Amazon Alexa to partner in delivering information. You can ask Alexa for health advice. AI is one of
the most written-about topics. I don’t need to tell this audience. Here are two heavyweights: Mark Zuckerberg,
founder of Facebook, is, of course, optimistic. He feels there will be lots of
improvements in our quality of lives. Elon Musk, serial entrepreneur,
serial digital entrepreneur, feels there is a risk
to the existence of civilisation. This is characterised in almost
everything you read about AI. It’s either friend or foe. It’s good or bad. There’s not a middle ground. Right here in your university, in the hospital opposite, we’re investing heavily in skills, in terms of both clinical practice
and in research, at how we can apply AI
to healthcare. What we’re seeing a lot of
is industry interest. Industry presence. This is a series
of co-sponsored articles, that seem to be appearing
in the Evening Standard with Babylon, a company which is all about
virtual GP consultation, but it has AI as part of what it does
in terms of symptom checking. This was a report last year, led by Lord Darzi
of the Institute for Public Policy Research, which said we could save £12.5 billion
a year in the NHS, if we better used AI. Theresa May latched onto that
and said, in the next decade, we’re going to
save thousands of lives using AI,
in terms of cancer. I’m a clinician,
I said that. This is somebody I saw
a couple of weeks ago, in my clinic in Whitechapel
at the Royal London Hospital. A 53-year-old Bangladeshi gentleman; he’s been more breathless
over the last couple of months. Now it’s so bad that he can’t walk from
here to the middle row of this hall. He had a heart attack ten years ago. He’s diabetic. He’s got high blood pressure. He’s on a pile of pills. He stopped smoking five years ago. With this breathlessness,
I organised an echocardiogram, a scan to see
if his heart is pumping well. It showed this ejection fraction
of 10%. That should be more like 55 or 60%. This is not good.
A mixture of his questions and my questions. What’s the likely cause
of my disease, doc? What’s my risk of dying
in the next five years? I’ve got two kids. Are particular drugs more or less
effective in this case? Should I refer him
for a heart transplant? He’s a young-ish bloke. If so, when, and what information
should I base that on? Heart failure
is on a continuum. Your heart pumps normally, there’s a phase where
you might not have symptoms, but you do have heart failure,
we just don’t know yet. You don’t know yet. There’s a phase like this man,
where he’s symptomatic, he’s breathless. My job, or the health system’s job, is to try and stop him having the worst
complication of all, dying, but also, being admitted to hospital. At the moment, we’re not very good at predicting
how things progress along this line. One thing I can do
as a good evidence-based doctor, is look at the literature. This is a study done in Sweden
last year. 45,000 patients. They looked at how good a particular
machine-learning algorithm was at predicting one-year survival. What they found was, the C-statistic, which is a measure of
how good a tool is at prediction, where .5 is like tossing a coin, you want it to be
as close to 1 as possible.
What we think, as cardiologists, is a good predictor, which is that ejection fraction, the measure of how well
the heart pumps, that is as good
as tossing a coin. Actually, machine learning improves that
vastly to .8 in this study. This is a Swedish study. Do these patients map
to what I’m seeing in Whitechapel? With one of my colleagues, Nick Chen, we’ve done a systematic review
and looked through the literature to see what other studies
have been done about heart failure and machine learning. There are 22 of these studies. Only one of them has gone to another data set to see if the same results are found. They use different methods
of machine learning. Mostly, they’re in patient groups
of 1,000 or less. They’re small studies. A couple of them
have involved industry, and the conflict of interest is not always declared. For me, and my patient, it’s not clear how representative
these data are to the patient sitting in front of me. That’s the prediction. How well am I going to do
in the next five years? This is the other question in my head
about heart transplant. When heart failure is bad, you get put on a waiting list
for transplant. You have a transplant. Then you have a long survival
after you have a transplant. We only do about 200 of these
a year in the UK. These are precious organs. We need to do
as well as we possibly can at picking the people
who get a transplant and doing our best to predict
how well they will do in the long run. The point is that the risk scores
that we have at the moment are pretty poor at predicting
both pre-transplant survival and post-transplant survival. They use a limited number
of features and they’re not individualised. With my colleague Mihaela van der Schaar
and team, we applied her paradigm
of machine learning to a US database
of heart transplant patients, over 50,000 patients, over the period of 1985 to 2015, to see if we could improve
the prediction of survival. This algorithm she has developed
called Trees of Predictors uses more features. On average, the existing tools use no more than 8–10 risk factors
or features. By using more features, hopefully you will have
a more personalised approach to predicting risk and you also want to be able
to predict risk in a specific window. We’ve been saying one year,
but patients want to know five, ten. Instead of eight or ten risk factors, we ended up looking at 50 risk factors, and, without going into detail, what this algorithm does
is split that 50,000 gradually into smaller
and smaller clusters so you’re predicting based on
which cluster a patient is in.
What it showed is that from the existing risk scores and existing machine learning, we went from a .6 C-statistic,
a bit better than tossing a coin, to more like .8 pre-transplant, but still about .6 to .7
post-transplant, whether you were looking at three months or ten years. Our approach did improve things
significantly for both pre- and post-transplant
survival predictions. The reason it did is because it
takes into account the variations in the population. This is using big data. It takes into account interaction
between those 50 risk factors. It takes into account variations
across different time horizons. But… it’s still far from perfect. I told you that after transplant
it’s still a C-statistic of .6 to .7. We want that to be .8 to .9. We need prospective studies. This is a retrospective study. We need to do this
in other datasets, possibly in a trial setting, to see if this actually works
in the wild. Most importantly, is this usable
by doctors like me and is it usable by patients? Going back to my gentleman in clinic. What does good healthcare
look like to him? There’s lots of variations. This could be a lecture
in itself. The WHO talks about health
being more than the absence of disease. It talks about social, physical,
mental wellbeing. Definitely, he’d like some of that. Universal is something we’ve tried to do
in the NHS since its inception. That’s about access. It’s about cost-effectiveness. Evidence-based. That sounds sensible. We want something that is proven
to be effective and validated
with reproducible results. We want something
that is high quality. It has value to the individual
and value to the health system. Muriel Grey and others from Oxford have developed this paradigm
of right care. The right care for the right patient
at the right time in the right place. That captures all of the above. I like to think,
in the information era, of learning health systems, which was a term coined
by the Institute of Medicine, in 2006. They talk about this wastage when
you go from science, insights are wasted, they don’t
get into guidelines and evidence. Wasted again,
they don’t get into care. They produced this virtuous circle, the learning health system, where data is flowing freely
between these three silos, and it’s that electronic data, the digital data, that really makes this work. That’s where AI,
at each of those steps, can make good healthcare. In order for that to happen,
what do you need? You need data
at the individual level. You need personal health records. You need, at the system level,
electronic health records. Population, public
and so on. It has to be used
and fed back all the way to the individual. You need good science,
good evidence and good care. Do we have the science
for healthcare? This, I believe, is one of the best
attempts to gather all the evidence for AI in healthcare and whether we are
there in terms of science. Just note the last sentence here, over time, marked improvements in
accuracy, productivity and workflow will likely be actualised, but whether that will be used to improve
the patient-doctor relationship or facilitate its erosion,
remains to be seen. I’ll be coming back
to Eric Topol in a moment. In this article, he very neatly shows all the exciting things
that AI is being used for in healthcare and actually have been studied. In pathology, it’s been used
to check diagnoses of breast cancers, and lung cancers. In ophthalmology, it’s been used
to look at retinopathy scans. In cardiology, my own field, it’s been used to read scans,
the echocardiography scans. If you look, there are only
three or four studies which have been done prospectively. The rest have been looking
in the rear view mirror, retrospective studies. Probably the best data
we have so far is in ophthalmology, but there are only two prospective
studies and one trial to date. Eric Topol concludes that
we’re far from demonstrating very high and reproducible
machine accuracy, let alone clinical utility, for most medical scans and images
in the real-world clinical environment. He’s coined this term
the ‘AI chasm’, where even if you have
a really accurate predictor, with an area under the curve of 0.99, you still do not have clinical utility unless you prove it. That’s where the Holy Grail of AI is
and where we should be focusing. He talks in this paper
about the accelerated FDA approval. The body that approves the things
we do in healthcare to check that they are effective
and safe. There’s now a different, accelerated pathway for AI algorithms. Interestingly, only two of these approvals have published data to match, that you or I can access. This is a worry: you’ve got an exceptional treatment
of AI in a way that wasn’t happening
with other new technologies. To answer the question, whether my job
is in danger from robots, he says, human health is too precious, relegating it to machines, except for
routine matters with minimal risk, seems especially far-fetched. It’s widely said
that AI isn’t the problem, it’s the data that’s feeding into the algorithm. That’s the problem. I tend to largely agree with that. Even if AI is ready, is the data in the NHS ready? The answer is maybe not. IBM Watson is one of the most famous
applications. They said they were going to transform the diagnosis and prognostication
of cancer and it hasn’t done that. It hasn’t lived up to the promise
in the last decade by any measure. I’ve divided up these different types
of problems with the data that might contribute to this. I mentioned Babylon earlier. They exemplify what I want to call
the digital divide. If not everybody has the ability
to provide their data then how are you going to be
put into the algorithm for AI. This is a company that has symptom
checkers for disease, but has initially been piloted
in well patients, people without the comorbidities, who are younger than most patients
I see in clinic or in hospital. You’re potentially making a divide,
producing more inequalities. There’s a data divide. If you look at genomics, one of the most optimistic areas, of big data. In 2009, only 4% of the studies in
genomics concerned patients who had
non-European ancestry. In 2016, we’re not yet
up to 20%. How can I benefit
from an AI algorithm when my data was never in there? Who owns the data? This is arguably one of the most
contentious issues. The Royal Free Hospital
got into a lot of trouble with the Information Commissioner. Google DeepMind accessed
1.6 million people’s records. This is in the public record. The issue is consent. The issue is who owns the data. Is it for research? What’s for commercial purposes? Now, there are further concerns because DeepMind and Google
were at arm’s length from each other and they are no longer so. Is any research going on in that setting, or otherwise, going to be liable to linkage of health data and non-health data? It’s hard to imagine not in a huge
multinational company such as Google. Quality of the data. That’s partly about whether there is missing data, but also about whether there is evidence of the quality of an intervention. This is mobile phones
for diagnosing melanoma. This is one of
the most exciting developments. People don’t have to wait
to see their dermatologist. They can take a picture
of their tumour and the algorithm can tell you
what’s worrisome about that and whether they need
to seek referral. There’s plenty of evidence
in the press and the literature that this is a good thing. Actually, when you look
at the totality of evidence, the Cochrane systematic review, the best way of looking at
all the data, for this application
for melanoma, shows that at the moment,
smartphone applications using AI-based analysis, have not yet demonstrated sufficient
promise in terms of accuracy. They are associated
with a high likelihood of missing melanomas. That’s the important bit
with AI. Are we missing important things? Are we telling people they’re unwell
when they’re not? Are we over-diagnosing? Those are the two kinds of error, for AI as for any doctor.
There are standards that we need to maintain. If you take heart failure, my patient has this disease
of heart failure, it’s defined differently
across different study designs, across trials,
across observational studies. It’s defined differently in clinic
versus the hospital. It’s defined differently in research
and hospitals. It might be slightly different
in America versus European data sets. How are you going to
cater for all of that in your AI algorithm? The data is messy. The science is a question mark. What about evidence? Here, we’ve got some interesting work
going on, led by my colleague Harry Hemingway, also at the
Institute of Health Informatics. We’ve got a consensus document
of what good looks like. What are the questions
we should be looking for in any application
of AI to healthcare for transparency, replicability,
ethics and effectiveness. They’ve developed
a set of 20 questions. This is on an open archive
in the public domain. We should be using this to ask
if data is usable and whether AI is ready
and fit for purpose. The NHS has also sat down
and got together key stakeholders from NICE,
from the tech sector, and thought about
what are the guidelines, what are the standards
that we should seek for additional health technologies
including artificial intelligence. What I’d say is unfortunately
this is just guiding principles. There’s no requirement
for any company to meet these guiding principles
as yet. I’ve talked about science.
I’ve talked about evidence. Care. That’s what I do in my clinic. We’ve got several problems here
for AI. The first is training. The second is transparency. The third is clinical credibility. The fourth is patient-centredness. We know that in this complex way
that we train doctors, from the five years
at medical school through to the ten plus years
it takes to get specialised, at the moment,
there is very poor provision of training in big data methods,
in informatics and definitely in AI for doctors. At the moment,
we’re working with the Royal College and medical schools to see how we can
develop this training. The doctors of today and tomorrow
are going to have to use this technology and evaluate it. At the moment, we’re focusing
on whether a hospital or your GP surgery is digitally mature rather than whether I,
as a worker in the NHS, am digitally ready. That is obviously going to restrict
what you can do with AI. There’s lots of work
to be done in that space. There’s lots of big promises, but at the moment,
this is not reaching reality. The World Economic Forum feels that
this is the answer to China’s shortage of doctors. I think this is the biggest issue for AI going forward. Transparency. This company, Babylon, has had several big stories
about it. One of them, which we’ve seen
in healthcare many times, is a silencing of whistle-blowers. Those involved in care who are worried
about how data is applied aren’t able to talk about it. Ultimately,
if you’re in this business, it’s not like selling jeans or cars, if you are looking after
people’s health, there’s certain principles
that you have to follow. If you aren’t able
to pull people out when the data is showing you should, that’s a problem. This is a story hot off the press
from yesterday night. Juliet Bauer, digital lead
at NHS England, has led to a new term
in the Financial Times. The new revolving door
from the Health Ministry to the health app industry. There’s freedom of movement
for people to work where they want, but she’s not had a period
of gardening leave. She’s not declared an interest
when promoting this app. Is that a problem? Yes, potentially. Any conflict of interest, any potential limitation on the use
of knowledge and output needs to be declared. The Darzi review of healthcare, which I mentioned earlier, said that £12 billion
can be saved across different professions, across various functions
in healthcare. That’s a massive amount
of money. £12 billion. This is widely said
to be an independent report. That’s in the public domain. It’s not on the front of the report,
but maybe we should know that. Eric Topol. A cardiologist from the US. A real thought leader in the use
of digital technology in healthcare. And in how AI is going
to change things. He’s been invited to do a review about how we train people
in the healthcare workforce. Rose has been involved in that. This is very exciting: there’s a panel that’s very august,
from various domains to look at how we are going to train
the workforce of the future, but again, these are the ones I could find
that aren’t declared in the document. There are people on the board working for companies that are very interested in this space. Also, there’s nobody in that line-up who’s clinically active at the moment. Not one. Not a nurse, a doctor, a physiotherapist, anybody. Would you do that if you were talking
about AI in the airline industry? We often compare healthcare with aviation. Would you have a panel
of people from business but exclude pilots
from the panel? I don’t think so. This is a concern. Also, we don’t have
any patients there. The only people we do have are these clinical fellows, the trainees, the younger doctors, who I believe should be more involved, and they are only invited to contribute to the board’s thinking where deemed helpful. Then, Matthew Hancock,
our Health Secretary, has set up
this technology expert panel. This is very much focused on how
we can roll out digital technology, particularly AI. It’s led by a champion
of evidence-based healthcare, a researcher and clinician,
Ben Goldacre, but if you look down the list, there’s not one person there
who’s clinically active in the NHS. There’s lots of people
who are in finance and investment and venture capital
and industry. But where are the patients? Where are the clinicians? Where are the people
doing research in AI? Are you going to be surprised
if this healthtech board says let’s roll out
a particular AI technology? Of course not. Ben Goldacre’s next book
might be Bad Digital. The bit that I think
really might suffer is patient-centredness. We use this word very fluidly
in research and clinical practice. We’re seeing it a lot
in the AI space, in digital technology. It’s more patient-centred. Actually, you just saw three boards
at government level, which involve zero
patient input. The only way patients
get involved in that is by putting in comments
afterwards. We know they won’t do that. We have to get out there
and produce resources like this initiative led by the Wellcome Trust, Understanding Patient Data, which I’ve advised on. Actually, it doesn’t tackle AI. We need much more of that
to give good quality information about what AI is, how to evaluate,
and know the wheat from the chaff. What else might we do
with patient-centredness? Who’s heard of this website
theysolditanyway.com? Okay. Do look at it later. This is the NHS Digital Data Release
Register. It lists every single company, hospital, clinical commissioning group and university that might have broken the rules. The NHS said that if people opted out, it wouldn’t use their data, and actually it has been used. Unfortunately, you’ll see
my institution, UCL, is listed there. The register also lists the other side: organisations that have not breached the rules. This is constantly updated. It’s not such a problem
that people have breached the rules, of course that’s a problem,
but at least it’s in the open. You can see
what the breaches are, what the organisations are doing
about it. This is what patient-centred
or public-centred looks like. Back to my patient. I’ve got to say,
on the basis of what I told you, I don’t have evidence yet
that I can use AI to drive risk prediction and tell him anything for his heart failure. At the moment,
we are over-hyped, rather than under-used. That’s because we don’t have the data
to get the most out of AI. We don’t have the ways of evaluating
that we’d like. We don’t have the training
for people like me and my patients to understand what it all means. Then, most importantly,
there’s a lack of transparency in the culture. We need to move
all these three spaces along. The science needs to go from
unrepresentative data to representative. In the evidence space, rather than breaking up each patient-clinician encounter into little widgets of care, we need to think holistically
about a patient’s pathway, and take advantage of data
that might be available, questions that are relevant
to the whole patient pathway. Then, these words. Data-driven. Technology-centred. I lose more and more hair
every time I read these terms. Actually, where we want to be, I’ve never had a patient
ever say to me, I need more technology
in my healthcare. I’ve been doing this
for 20 years now, I’ve never had someone say
I want more technology. I have had people say
I want my data. I want to know what’s happening. I want better communication. I want it to be about me. That happens quite a lot. That tells me we do need
to be more patient-guided and more data-guided. This is one of the very few pieces of work, to my shame, that I’ve co-authored
with patients. We talked about what using patient data looks like. Data saves lives: how does it really save lives
and how could it for patients? Last two slides. My Christmas reading
included this. This is a really excellent book. This quote says,
“In the early twenty-first century, the train of progress
is again pulling out of the station – and this will probably be the last
train ever to leave the station called Homo sapiens. Those who miss the train
will never get a second chance. In order to get a seat on it, you need to understand
twenty-first century technology, and in particular, the importance of
biotechnology and computer algorithms.” This other book that I also read
over Christmas is why we can’t afford
to get this wrong. We can’t have another Theranos, another technology company, which is
based on bad data or bad evaluation. We have to do better. Thank you very much. Thank you, Ami. So many parallels with my world
of education and AI, really interesting, especially all those
points about ethics and transparency. Time for questions. If you’d like to ask a question, please
wait until you get the microphone. Any questions? Must be some.
There’s one up the back. Thank you very much
for your talk. Thursday saw a sell-out lecture
by Ali Parsa, the founder of Babylon, speaking to UCL medical students. Most medical students will be familiar
with How to Read a Paper, which tells you to critically appraise
evidence and work. What would you say
to those medical students about how to critically appraise
or challenge or assess someone like Ali Parsa
or Babylon when they present, so they can make the best decision
about whether to use that technology, or even whether to leave medicine
and join the company? Thank you. I think, firstly, you need to have
published evidence. If I’m not publishing my evidence or putting it somewhere
where people can see, until proven otherwise,
I’ve got something to hide. There’s plenty of evidence that the
symptom checkers and so on from Babylon,
are substandard. They’re not safe. Number one, look for published evidence
that you can see. Number two, if the claim
is too good to be true, it probably is. Thirdly, what we’re seeing worryingly in
the digital tech space is there are some people
or companies, that seem too big to fail
or too big to question. How is this company
allowed to carry on the way it is without answering these simple questions? There are certain hoops that I have to
jump through as a researcher, to do a very small piece
of research. For one millionth of the amount of money
that they turn over. I think all of those, as well as
the training they’ve had in evidence would easily help them. Thank you, Ami,
for the wonderful lecture on AI, whether it’s a hope or hype. I’m Ganesh, I’m a physician myself. I have developed an interest in
artificial intelligence in healthcare. A couple of weeks ago, I went to
one of the Google AI conferences. The tech world is now calling data
the new oil. AI is the new electricity. That’s the kind of hype
being promoted by the tech industry. The next question is, the algorithm is only as good as the data. The better the data collection is, and the standard
of how it is collected, will determine how well
we can train the algorithm. The problem with the NHS
is the data is fragmented and the data which is collected
is not standardised. This then begs the question, do we need to educate the doctors,
the clinicians, and the healthcare professionals, in how they collect the data for future use in AI? My question is, when it comes to
the medical schools and upcoming graduates
from medical school, should they not have a course
in health informatics like a module in health informatics which will teach them
how to collect data? The easy answer
to the statements you’ve made is I agree and yes. Various people are working on it. The reason I wanted to ask you is you are in health informatics
and in the college as well. How soon are we looking at it being implemented? I think we’re still at least
a year away. Thank you very much
for a most interesting talk. You mentioned there might
be a difficulty in having firewalls between
various companies, such as Google and DeepMind. I read last week
about one company that claims to be able
to perform certain diagnoses, particularly in cardiology,
diabetes, as an example, on voice alone. Our voice is something
that we give away all the time. How could this ever be regulated? I would give you the same answer
that I gave to the last question. Until I see a published study, with proper analysis,
independently done, rather than a claim
from the CEO of that company, I can’t judge
whether it’s good or bad. The likelihood is
that there is no data. One of the reasons,
if you’re sceptical, people are looking through
all those non-related data sources, is because they can be gathered
by the Alexa sitting in your front room. By your mobile phone
and so on, which are already being captured
in a way that is not regulated like your electronic health record. The really clever thing would be
when they can link that, the clever or worrying thing, is when they can link that
to your health data. Another question there. This is the last question. Thank you very much
for the very good lecture. I don’t know
if you’re the right person to ask, but this is regarding global health. In global health, we have this huge hype
in m-health and mobile applications. The two recent ones I’ve heard of
were Peek, for scanning the eye, and the Safe Delivery app. My question is, how do we get people to be accountable for what they promote
in these countries? We’re talking about accountability here. We have big organisations
to safeguard patients. In developing countries,
we don’t have that. How can we hold this industry, or these companies promoting their apps, accountable for what they’re doing, when the patients do not have anyone
to protect them? Thank you.
That’s a great question. I declare an interest. I’m doing a project
using mobile phones to track whether people
are taking their medicines or not in South India. The first check
on what you’ve just described is the culture of academia, not just industry but also of university academics. That stops you doing that work; whatever rules you follow here, the same rules apply
in another country. Secondly,
it has to get through ethics committees, which are having to follow
international standards. Thirdly, I would say you need to look
at how these things are funded. For example, Babylon has done a lot of work
in Rwanda. If I was involved in that, I’d want to know
what are they funding exactly. What is the evaluation
of what they’re doing? The algorithms they’ve developed here, I’ve already described how
they’re not necessarily fit to be used. I would argue they can’t be
fit to be used in Rwanda. How are they developing their data
and their applications there? It’s that mixture of things.
Sorry if that’s a mixed-up answer. That was the last question. We’d like to thank you all
for your questions and for coming to this really
well-attended lecture. Obviously, we’d like to thank
Dr Ami for his fantastic lecture.
