IamA radiologist, AI researcher and lead author of a recent paper that used deep learning to predict if patients will die in the next five years. AMA!
Update: Hi everyone, thanks for the questions. I'm going to have to call it a night here (it is getting late in Australia, and I have another media interview in the morning). I can come back on after that and answer any other burning questions though (if that is allowed in the sub?). Thanks again!
Update 2: I think that I'll end the AMA here. Thanks for all the great questions everyone. I hope I answered most of them (there were some duplicates, so check the other responses if I didn't get to you).
My short bio:
I am a radiologist, which is a doctor who analyses medical images (like X-rays, CT scans, MRIs, and so on). I am also doing a PhD in medical artificial intelligence at the University of Adelaide, where I build and test deep learning (and other machine learning) systems to detect disease in medical images. It is super fun, really rewarding, and I get to work with a fantastic team of world-class researchers.
Our recent publication has generated a lot of media attention and spent most of yesterday at the top of r/futurology (over 9000 upvotes!). Several people asked me to do an AMA in the comments, so here I am.
The research in short
In this paper we trained a deep neural network to predict, from their chest CT scans, which patients would die within five years. This is a proof-of-concept study with a modest dataset, which we are now building on with further research (with tens of thousands of patients).
We use mortality to estimate how healthy each patient is, because there is such a strong relationship between the two. The goal isn't predicting how long you will live per se, but to quantify how healthy you are, and to work out whether we can do something to help you be healthier. This is called "precision medicine", using data to tailor treatment to each patient.
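For anyone curious what this kind of model looks like in code, here is a minimal toy sketch in Keras (which is what I use) of a small 3D CNN that maps a CT volume to a probability of death within five years. To be clear, this is not our actual model: the input size, architecture, and training settings are all illustrative placeholders.

```python
# Toy sketch only - not the model from the paper. A tiny 3D CNN that outputs
# P(death within five years) for a preprocessed CT volume.
import numpy as np
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Flatten, Dense, Dropout

model = Sequential([
    Conv3D(16, (3, 3, 3), activation='relu', input_shape=(32, 32, 32, 1)),
    MaxPooling3D((2, 2, 2)),
    Conv3D(32, (3, 3, 3), activation='relu'),
    MaxPooling3D((2, 2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),           # probability of 5-year mortality
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Stand-in data: real inputs would be preprocessed CT volumes and mortality labels.
volumes = np.random.rand(8, 32, 32, 32, 1).astype('float32')
labels = np.random.randint(0, 2, size=(8, 1))
model.fit(volumes, labels, epochs=1, batch_size=2)
```

The sigmoid output is the key bit: the network produces a probability rather than a hard yes/no, which is what lets us treat it as a continuous marker of health.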
Other stuff you might be interested in
I also have a popular blog about medical AI, where I am currently doing an in-depth exploration of claims that AI systems will take the jobs of human doctors.
I've blogged on my predictions for medical AI in 2017, and they are looking pretty much spot on.
I am also a huge fan of MOOCs (massive open online courses) which is how I learned to program computers and to build AI systems. I have reviewed some of them here.
So ask me anything!
My Proof:
https://twitter.com/DrLukeOR/status/872698570637320192
and verifying my twitter account http://www.adelaide.edu.au/directory/luke.oakden-rayner
Time-keeping:
I will start answering questions in about half an hour, and will continue for a couple of hours if there is enough interest.
drlukeor36 karma
Absolutely, that is the whole reason for doing the research. The obvious thing would be seeing that you have a higher risk of, for example, cardiac disease, and using the knowledge to make some lifestyle changes.
But it goes far beyond that. If we can accurately quantify ill-health in a comprehensive manner, then we can start looking for patient subgroups that need different treatment.
Maybe a certain combination of diseases means therapy should be more aggressive. Or maybe an unexpectedly high level of underlying disease would change the patient's decision to have an elective surgery, because the risk of a bad outcome is too high in that individual.
We don't know for sure how this will play out yet, but the entire goal of precision medicine is to significantly improve patient outcomes.
ujazzz5 karma
I have huge amounts of respect for you guys. Building, managing and developing AI just fascinates me; the genetic algorithm is something that blows my mind. You say the AI predicts the probability of death, but does it also have the capability to predict the most efficient and effective cure for a particular disease that humans aren't aware of yet? There can be many ways to treat a disease, right? Can you use AI to put these various cures into the algorithm, combine them with many different sorts of data to mix and match, and let the AI decide what's best for us? Can you create this environment for the AI? If yes, can you elaborate how? Thanks <3
drlukeor3 karma
There is a lot of work by other teams about using AI to design drugs or discover molecular targets. It is a very interesting area, and while it is very much in the early stages, it also seems very promising.
padejumo9 karma
I am also interested in pursuing an MD/PhD in something analytical, potentially biostatistics, computational biology, or data science. I'm super interested in the research, but the extra years of school is really off-putting. What does a day in your life involve? Do you regularly interact with patients? And do you get paid about the same as an average radiologist, or more because of the high-end research you're conducting?
drlukeor17 karma
I'm just finishing up an 8 month sabbatical where I didn't do any clinical work, but my general balance is 2 days of clinical work per week, 2 days of research, and 1 weekday with my daughter.
My clinical work is paid, my research work is not. I made the choice to halve my income to pursue this research, definitely not a decision for everyone.
That said, I could probably get paid (well) to work in a company doing AI stuff. But I am happy with my balance right now, getting to explore things I think are valuable rather than profitable.
ColumbusAmongUs5 karma
I'm an MD/PhD student in bioinformatics and while it can be quite isolating, mainly due to the fact that traditional biology students (vast majority of MD/PhD students) don't know anything about what you do, it also sets you apart from your peers in a positive way. There is a tremendous amount of high dimensional clinical data being produced and a huge shortage of bioinformatics researchers who also understand medicine. I feel very good about this career's job prospects at the moment :)
drlukeor1 karma
I can echo this ... anyone with a decent amount of medical knowledge who can do high dimensional data analysis is in high demand. Worth plugging health informatics societies; they often run accreditation programs and give guidance about how to enter the field -
I'm not sure about the European equivalents?
spencehawkins8 karma
Can you provide a time frame of when and how you think deep learning will be integrated in radiology/dermatology?
drlukeor15 karma
Well, there is a skin check app that performs as well as a dermatologist and is already being tested in clinics. There is a Google ophthalmology system rolling out in India. There are chest X-ray screening tools on the ground in China.
In the "low hanging fruit" areas like this, we will see a pretty rapid proliferation of systems. How fast they get taken up, and how fast they pass through regulatory bodies like the FDA is another question.
I haven't tried to put exact numbers on it before, but at a rough guess I would expect AI to be reasonably common in medical practice in western countries in maybe 5 years. But they will probably only do a small fraction of medical work in that time.
pyrophorus1 karma
What do you think these developments will mean for the careers and role of doctors in affected areas? Do you see these technologies replacing doctors, or freeing them up for other tasks?
drlukeor3 karma
It is hard to say right now, because we are right at the start line, where we have really only seen two or three examples of AI systems outperforming doctors at medical tasks. These examples only represent a small portion of medical work, so by themselves they are unlikely to cause much displacement.
But what we see with this sort of technology elsewhere is a slow takeoff and then a rapid change. It makes it inherently harder to predict.
My current best guess is being explored in my blog, so far I would say that we would need a very rapid pace of automation to replace doctors, faster than we have seen with almost any technology before. For people looking to go into med school in five or ten years though ... I'm not sure.
Deep learning technology has existed for 5 years, and been competitively applied to medical tasks for about 1 year. Give it another year and we will have a much clearer picture.
pantsoffire4 karma
I am way too dumb to ask any specific questions. I'm just reading the ones already asked. Thanks for doing the AMA and your work in AI - especially medical AI. Do you think we're 10 yrs or more from real AI?
drlukeor3 karma
It depends what you mean by "real AI". Is it human-level intelligence? How do we define that?
I don't think ten years is plausible. Robotics is lagging, but there are tons of tasks which will take longer than that window anyway.
Predicting further out, say 20 years? Harder to say. We just had a pretty big breakthrough paper this week that adds a new foundational skill to neural networks (at least, IMO). Things are moving so fast.
spj1043 karma
You mentioned that you learned a lot about data science and programming from MOOCs. Is it possible for a radiologist with no formal training in computer science whatsoever to meaningfully contribute to the imaging informatics field? Especially in a country where this field is almost non-existent at the moment.
And would you recommend radiologist trainees to familiarize themselves with data science for use in clinical practice in the future, or will these new tools be very user friendly and not require in depth knowledge to work with? The roles of radiologists will obviously change, but do you think that will move them more towards patient interaction, or data analysis?
drlukeor2 karma
So, until late 2014 I had never coded a line. No formal training, no informal training. It took me about a year to get up to speed with coding and data processing, and about another 6 months to reach the level where I was writing research-quality machine learning code. After about 2 years total I was holding my own with ML PhDs/professors and generating useful and novel research ideas. This was all while working full time, only via MOOCs, and wasn't too strenuous (my workload was probably about half that of my radiology finals/barriers).
It is definitely true that I have been lucky finding a great team who helped me along, but for sure you can do it if you want to.
Whether all radiologists should? No way. I love cleaning data, writing code, building models. But I can totally get how terrible it would be for most of my colleagues. They would hate it!
I honestly don't think there is a need for an entire radiology workforce of data scientists. My experience is that ML teams benefit a lot from one dual skilled doctor on board, but a second one adds much less value.
If radiologists more broadly ever need to work with data (I honestly doubt it will be a thing), there are tons of user-friendly tools coming out, and they will only get easier if there is demand for it.
I'm not sure about the role of radiologists. We will do more procedures, probably. I wouldn't be surprised if it is just more of the same, higher volume with AI for efficiency, and a slowly dwindling workforce over many decades. I want to see what the next few years hold before I truly trust any prediction in this area.
spj1041 karma
Thanks for the answer. Not very reassuring, but still, next year I'll apply to radiology and see what happens. Can't imagine myself anywhere else :)
drlukeor2 karma
I think anyone in the field now or soon is probably going to have a job for as long as they want one. The alternative, job losses, would require a very rapid pace of automation. When I say "a slowly dwindling workforce", I mean it shrinks as people retire, and we don't need to replace all of them because AI takes some of the slack.
I love radiology, and wouldn't want to leave it either.
QuietDesperate1 karma
Asking this question from the other end; do you think it is possible for a programmer with no radiology experience to contribute to the imaging informatics field?
As someone well into their second decade of an IT career, having been recently diagnosed and treated with CRT I would like to go on to contribute. Have you any advice?
drlukeor1 karma
I think the majority of people working in medical AI/IT are in that boat, programmers and engineers without much medical experience.
I think most pick it up over time. Any team doing something impressive will usually have someone with a good deal of medical knowledge on board, and it kind of osmoses out into the team. Some of my senior collaborators are actually very medically knowledgeable, simply because they have worked in the field for so long and have seen so many aspects of medicine.
That said, it is very easy to get yourself into trouble if you don't have expertise on hand. Asking the wrong questions, gathering the wrong data, doing the wrong testing. It is all too easy to spend a lot of effort and have very little to show for it.
qwerty12311211 karma
Can you recommend MOOCs and other starting point that you have found useful to get started in AI?
nlpster3 karma
There has been a lot of interest in work on structured data (eg lab tests, clinical codes, patient meta-data), and lots of progress on imaging data, but comparatively little work and progress on medical free text. Given that so much information is stored in free text, why do you think this is, and is it a problem for advancing clinical ML?
drlukeor5 karma
There are a few ways to think about free text with deep learning.
The optimistic way is that natural language processing has lagged behind image analysis by a few years, but it is getting there. We are probably where image analysis was in 2013/2014. It has only been a year or two that deep learning has consistently outperformed techniques that are decades old (and not very 'smart') when dealing with text.
But we really are starting to see breakthroughs now. Translation, text understanding and so on, results are improving at a pretty fast pace, and we will see those results in medicine soon enough.
In my own work, we have recently trained some models that vastly outperform LSI (the old technology) on an entailment/contradiction task, which is a building block of language understanding. So it is worth being hopeful.
The pessimistic way to see it is that our language models are still pretty bad, and medical free text is awful data. Most of what a doctor or nurse uses to make a decision is never written down, so you only get a partial story. What is written down has highly local vocabulary, you go a town over and the same note will be written pretty differently. And it is really hard to get to, because data interoperability is limited. Even just extracting and labeling data to train models with is often an impossible task.
It might be that medical data is just hard, and will remain hard for the foreseeable future.
I sit somewhere in between the two. I think we are making headway, and each step reveals new fruit within reach to pick. But understanding medical text in a comprehensive way is not happening in the near future.
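To make the LSI comparison a bit more concrete, here is a tiny illustrative sketch (not our actual pipeline) of an LSI-style baseline using scikit-learn: TF-IDF vectors reduced with truncated SVD, then cosine similarity between two clinical sentences. The corpus and sentences are made up.

```python
# Illustrative LSI-style baseline (the "old technology"), not our models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "no acute cardiopulmonary abnormality",
    "mild cardiomegaly with small bilateral pleural effusions",
    "large left pleural effusion, no pneumothorax",
    "clear lungs, normal heart size",
]
vectorizer = TfidfVectorizer()
svd = TruncatedSVD(n_components=2)                 # LSI: low-rank "topic" space
svd.fit(vectorizer.fit_transform(corpus))

pair = vectorizer.transform([
    "there is a pleural effusion on the left",
    "no pleural effusion is seen",
])
a, b = svd.transform(pair)
# These two sentences contradict each other, but a bag-of-words model has no
# notion of negation, so the similarity will typically still be high.
print(cosine_similarity([a], [b]))
```

That blindness to negation and entailment is exactly why we care whether newer models do better on contradiction tasks.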
nlpster3 karma
I hear you on the awful data! In addition to your points, it turns out that quite a few clinicians can't type or spell either (under time pressure).
My concerns about automated free-text analysis lagging behind are motivated from an epidemiological perspective. There is a growing body of evidence that clinical retrospective studies are going to be a bit 'iffy' if they don't use the clinical notes [1], inter alia.
Some highlights taken from [1] :
- Diagnostic codes are not always applied at the time of diagnosis (Tate et al 2011); 22% of patients have a free text diagnosis before a coded diagnosis
- Some diagnoses are uncoded, and the diagnosis is only recorded in free text (11% in Bogon et al 2013)
- Prescription codes underestimate the duration of long-term therapeutic use when compared with free text notes in 6/28 patients; further patients had been prescribed but there was no prescription record (Close et al 2014)
- Suicide was not recorded as the reason for death in 74% of evaluated cases (Thomas et al 2013); free text matched a further 11%, reducing this to 63%
The problem is, that manually reading these notes to extract the information is very time consuming for researchers. This task, if it could be automated, would be a real win for epidemiology. And yes, it's extremely hard...
[1] Price, Sarah Jane (2016). What Are We Missing by Ignoring Text Records in the Clinical Practice Research Datalink? Using Three Symptoms of Cancer as Examples to Estimate the Extent of Data in Text Format That Is Hidden to Research. https://ore.exeter.ac.uk/repository/handle/10871/21692
edit : list formatting
drlukeor2 karma
Yep, great information there. A great talk from GTC17 just came online by u/beamsearch, which goes into more detail about some of these issues.
In particular, I love talking about EHR data (text or not) as healthcare dynamics. It isn't actually the thoughts of the doctor or other professional on the page, it is the interaction between the patient, the doctor, and the healthcare system with all of its weird and perverse incentives. Most obviously, doctors are actually incentivised to fudge medical records for a whole variety of reasons, not the least of which being it is the only way to get your patients the care they need. There are innumerable less obvious problems.
Edit: just to add to this, I would never even think about using coding for labels. Shudder
BaksideAttak1 karma
As another clinician who has done some work previously in bioinformatics and NLP I think another serious difficulty is the assumptions that are placed in parts of free text medical records. For example, my HPI may note "patient is afebrile with WBC count of 7.2, hemodynamically stable with H/H 9/32." At the surface this is just me reciting data about the patient, but my implication is that the patient is less likely to have a superimposed infection and is not actively bleeding. I generally write these terms in the assessment/plan portion of the note, but not everyone does, as it seems obvious to a physician. Every physician reading the HPI knows exactly why I asked those questions, whereas a computer can't read between the lines (at least as well/easily), so to speak. Some of these syntactic issues don't help, and most physicians won't improve their notes for NLP because it takes even more time to document when documentation requirements inflate annually as is.
drlukeor1 karma
Yep. I actually wrote a blog post that touched on this once, where I described medical notes as lossy compression.
NLP in general is hard, because it relies on shared pre-existing knowledge. The words are vehicles for much more complex meanings.
drlukeor10 karma
I'm not sure about the exact focus of this question, but I'll take a crack at it.
This study was a proof of concept, so we had a very small dataset (only 48 cases). There was a huge time cost in collecting this data, because I segmented every organ on every slice (drew a line around the edge of each organ). It took months.
Small datasets are a major challenge with machine learning, but it isn't insurmountable. We have to take the results for what they are, an indication of whether it works, with a certain range of uncertainty.
All that said, one of the reasons we chose mortality as an outcome to study is that the data for labeling is quite easy to collect. Governments have very complete records of births and deaths, so you don't have to struggle to find accurate records.
We are currently expanding our research into tens of thousands of cases, which will be large enough to get a very good idea of performance.
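To give a sense of how the mortality labels get constructed, here is a hypothetical sketch of linking scan dates to a death registry in pandas. The column names and dates are invented, and it glosses over the record-linkage work that makes this harder in practice.

```python
# Hypothetical sketch: derive a binary five-year mortality label by linking
# scans to a death registry. All names and values are invented.
import pandas as pd

scans = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "scan_date": pd.to_datetime(["2005-03-01", "2006-07-15", "2007-01-20"]),
})
deaths = pd.DataFrame({
    "patient_id": [1, 3],
    "death_date": pd.to_datetime(["2008-05-10", "2015-02-02"]),
})

df = scans.merge(deaths, on="patient_id", how="left")
horizon = pd.Timedelta(days=round(5 * 365.25))
df["died_within_5y"] = df["death_date"].notna() & (df["death_date"] - df["scan_date"] <= horizon)
print(df[["patient_id", "died_within_5y"]])
```

One thing a sketch like this hides: you also need enough follow-up time after each scan to be confident that a "did not die" label really is a no, rather than just missing data.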
mygpuisapickaxe3 karma
I'm a first year radiology resident. All throughout medical school, I was told how the computers would be taking my job.
How can I position myself to be prepared for the inevitable arrival of machine learning in my field?
applepiefly3141 karma
(Not a radiologist, studying ML and have an idea of how to answer this though). Beef up every procedural skill with every avenue available to you. It will be much longer before robots are at the level to perform medical procedures without skilled professional input. Add skills that increase face to face patient interaction. More generally, have a mindset that is willing to pick up new skills and adapt. The older mindset of specialising deeply in one niche then finding work doing that exact thing for 35 years is a luxury new generations aren't afforded. Also, live a lifestyle and budget/save for retirement as if you plan to retire in 25 years, not 35 or 40, adjusting this number as you get a better idea of the future of your area. Finally, don't worry about it that much. Your country's medical association/regulator will not just hang you out to dry. The technology will be phased in first in an assisting capacity, freeing up your time to do other duties, and before it gets to the point that you have nothing left to do, the association will have developed training pathways for you to adjust to the new role of whatever radiologists will be doing.
drlukeor2 karma
I think this is all pretty reasonable advice.
I would also endorse: "Finally, don't worry about it that much."
I talk to a lot of med students who are really worried, particularly if they are aiming for rads, derm, path or similar. Things will change, but it would take a really incredible rate of automation to replace practicing doctors.
I can certainly imagine training numbers tapering off over time, but widespread job disruption is less likely.
That said, we will have a much better idea on how fast things are progressing in the next year or two.
drlukeor2 karma
It just kind of happened. I finished my final radiology exams, which are like 18 months of full time study (while working full time, of course), and decided I didn't want to lose my hard won study skills and discipline. I'd always been interested in computer science but had no experience at all. I just decided "what the hey, I'm just gonna do it."
Six months later I had a project I wanted to do, I met my team, and the rest is history. I ended up doing the PhD pretty much because I was doing this big project anyway (motivated by personal interest and public benefit), and I was advised I may as well get a parchment out of it.
I never expected when I started that I would end up here, but it is an awesome place to be :)
put_the_punny_down2 karma
Has the AI, with all its testing, done anything that has shocked you?
drlukeor3 karma
So far my own work hasn't shocked me. But the field is moving so fast, there are surprising and groundbreaking results being published almost every day. It is a great time to do research in the field.
CentiPetra2 karma
Do you have any concerns that the technology you are developing will be exploited/ repurposed, so that the prime initiative is not to "help people be healthier", but rather to discriminate against them in regards to ability to obtain health/life insurance, secure/ maintain employment, etc.?
drlukeor2 karma
Only as much as current medical techniques. We have had this debate around genomics for decades, and the outcome world-wide has pretty much been to make it illegal to use genomic information to discriminate against people.
It is definitely an issue worth thinking about. We have a great group of medical ethicists at UoA and we will be working with them in the future.
getterdunn2 karma
Have you looked at the study suggesting depression/suicide can be predicted 5 years out? Any relation to your work?
drlukeor2 karma
Sorry, I have seen the paper but it isn't closely related to my work so I can't remember the details.
rtomek1 karma
Congrats on your paper getting so much attention! Any specific reason for choosing generic mortality rate as opposed to looking at a single disease?
As far as the industry predictions on your blog, your understanding of the field is off in a lot of ways. Plus, when I click on your links which are used as references, it redirects right back to your own blog. Then that reference article contains misconceptions that are backed up by, you guessed it, links to your own blog again! It reads like a conspiracy theory website.
drlukeor1 karma
Good question. Mortality is a very strong marker of health, is related to many visual features of disease, and is unambiguous. We chose this as the initial task for those reasons. We will look at disease specific outcomes in future work.
Re: my blog ... the internal linking is because all of the articles are one big piece, with one narrative. Each argument is predicated on previous ones.
That said, there are tons of links to external sources. Looking at the stats, I get about 2-5% of users hitting outgoing links. That sounds pretty reasonable.
Feel free to tell me where you disagree with me though, as I say at the start of each piece I will correct anything that is in error.
rtomek1 karma
Yes, mortality is interesting but still agnostic as to cause of death and isn't necessarily positively correlated to health in small sample sizes. Normally we use the term 5-year survival rate, but since you don't have a disease you can't really discuss survival :/
The external sources are there for some facts, yes. However, your interpretation is that of a student with a very limited network. It might be beneficial to link to some expert analysis in the field. You have little to no credentials in this space, so why do I care what your opinion is?
I disagree with the regulatory environment, I think there is a clear path of steps to take with the FDA and you're looking in the wrong place. There are people who have been working on this stuff for decades. If you do a literature search for the modern buzzwords, you'll miss all the pioneers in the field but a rose is a rose by any other name.
I think that 3D printing is already doing useful things in medicine, just not on a large enough scale that it's in every hospital yet.
The so-called biotech revolution has already started to fizzle from its huge head of steam it had in the past. Not sure what you mean there.
There's more, but I'm not going to pick apart your whole blog.
Something that I would like to see you try is to write something that would actually get accepted by a machine learning journal. Most of the stuff I see about AI in a radiology department and in radiology journals is crap that would get laughed at by people in the CS department. You mention one of the reasons for that in your blog as a lack of data. Medical data is not easy to acquire, and the hospitals know they're sitting on a gold mine, so they don't give it away.
rtomek0 karma
Your collaborators aren't you, but if you quoted your collaborators' comments in your blog it would certainly help. Right now your blog looks like a lot of opinions that are based on your own opinions.
If that's meant to be your network, I meant 'network' as personally having any information on what is going on in the corporate world before predicting what the corporate world is going to do. Unless you've been trained as an industry analyst and I don't know about that yet.
drlukeor1 karma
It is definitely my opinions, being a personal blog. I've never claimed authority other than what readers are willing to grant me.
If you disagree, I can only change my opinions if you tell me why. I have definitely updated my opinions on the FDA recently, after receiving more information. What I have written in the past is still generally right though, just missing some nuance.
When I get to the end of my current series of posts (where I am writing about 3000 words of researched material per week) I intend to go back and update several posts. But I haven't discovered anything egregious enough to need an immediate update.
Feel free to link me to "expert analysis" that contradicts what I have said. I'll change my position.
rtomek0 karma
Let's just say an NDA and a PMA are two totally different beasts. Signify Research haven't been around too long, but they show a lot of promise in being able to accurately break down this field and actually have data to back it up.
drlukeor1 karma
Do you mean a de novo? I assume that was a typo and you aren't talking about a non-disclosure agreement.
All I can say is I have discussed these processes with people who know. Sometimes under NDA!
De novo is a funny beast. For a long time it has had longer approval times than PMA, which is ridiculous and counterproductive. I know historically a number of groups in the CAD space have been burnt by the process.
It seems like it has improved recently, but there is still a lot of resistance to using it, and resistance to attempting disruptive stuff in America in general. There is a reason that several major clinical implementations of "automating" deep learning are occurring in India and China.
cosmoceratops1 karma
Are you at the point where your program can adapt for differences in technique, scanner specs, and anatomical variants?
drlukeor1 karma
Our system currently manages with all the scanners from one hospital (there are about 4 or 5 CTs that have been used in the dataset), using contrast enhanced studies in any phase. I think it should generalise to a range of environments quite well.
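For anyone wondering what dealing with scanner differences involves in practice, a common style of preprocessing (illustrative only, not necessarily our exact pipeline) is to resample every scan to the same voxel spacing and normalise intensities to a fixed Hounsfield-unit window, for example with SimpleITK. The spacing and window values below are placeholders.

```python
# A common preprocessing recipe for multi-scanner CT (illustrative only):
# resample to a fixed voxel spacing, then clip and rescale a HU window.
import numpy as np
import SimpleITK as sitk

def preprocess_ct(path, spacing=(1.0, 1.0, 1.0), hu_window=(-1000, 400)):
    img = sitk.ReadImage(path)
    # Resample so voxel size is comparable regardless of scanner or protocol
    new_size = [int(round(sz * sp / ns))
                for sz, sp, ns in zip(img.GetSize(), img.GetSpacing(), spacing)]
    img = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), spacing, img.GetDirection(),
                        0.0, img.GetPixelID())
    arr = sitk.GetArrayFromImage(img).astype(np.float32)
    lo, hi = hu_window
    return (np.clip(arr, lo, hi) - lo) / (hi - lo)   # scale the HU window to [0, 1]
```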
GoTomArrow1 karma
Can you recommend a good book for an intermediate programmer who wants to get into deep learning neural stuff? I personally like books that start building knowledge from the ground up and explain exactly what is happening without using confusing simplifications (one of the worst examples I can think of is explaining object-oriented programming in terms of real-world examples, like apples and bananas with properties, absolutely terrible and confusing).
drlukeor2 karma
There are only a few deep learning books around, and I don't have experience with them. The field moves so fast it is hard to justify writing a book.
Obviously, the book "Deep Learning" is great (Goodfellow, Bengio, Courville), but it is quite advanced.
Honestly I recommend the Stanford course. Even if you prefer books, they have a great set of course notes online. It covers everything you need, in depth. You learn application, theory, fundamentals. It is good.
It does assume basic machine learning knowledge though.
drlukeor2 karma
My personal preference is introduction to statistical learning by Hastie and Tibshirani, which is a free ebook and also a free online course on the Stanford lagunita page. It is a bit more statistically focused than others so YMMV.
The classic course people recommend is Machine Learning on coursera by Andrew Ng. It is good, but it just didn't suit me when I went through it. It is also taught in MATLAB, which is no problem if you don't mind learning a language you probably won't ever use again.
Alternatively, there is a specialisation in coursera offered by Microsoft (I think it is called "certificate in data science" or something), which offers the same courses in python and R. These are both good languages for machine learning, and the content is good (although it varies a bit as the instructors change in each course).
nomad801 karma
What platforms do you leverage?
What has exceeded your expectations with regards to development & results progress?
drlukeor1 karma
Do you mean what languages/libraries etc?
I am a fence-straddling data scientist, because I like both R and python.
I train my neural nets in python, usually with Theano and Keras. But for cleaning non-image data, I love R and the tidyverse.
glasgrisen1 karma
What would you like the outcome of this research to be? What do you believe will be done, and more importantly, should be done with it?
Thank you for doing this AmA btw.
drlukeor2 karma
I'd like to see medical images widely utilised for precision medicine. I'd love to see radiomics biomarkers as surrogate endpoints in clinical trials, because I think this could possibly speed up the process by giving an indication of success or failure of treatment earlier. Especially for things like medicines to increase longevity - long follow up times make trials incredibly expensive.
The closer we get to the computer science model of research, the better. Rapid results, rapid iteration, rapid progress.
drlukeor5 karma
Let's run with that hypothetical. You have autoimmune disease, say, rheumatoid arthritis. You are on standard disease modifying treatment, which controls your symptoms reasonably well, and you have ok function but struggle to do much strenuous activity. You get a scan.
The scan reveals you have subclinical heart disease, as well as significant frailty/sarcopaenia (presumably because you can't exercise very well). Normally you wouldn't treat subclinical heart disease, but your autoimmune disease is a constant stress on your body (and probably speeds up the development of coronary disease). On top of that, your frailty is reducing your cardiovascular reserve.
So you and your doctor decide that, unlike the average patient with rheumatoid arthritis, you should start preventative treatment for your heart disease. Maybe strict cholesterol/lipid control, some aspirin, and add in a graded physiotherapy program to regain some strength (funded by insurance because you now qualify due to the new data, which shows it will save them money over time).
Next time you get a scan, your health biomarkers are improved. You are, according to the model, further away from death.
This is all hypothetical, obviously, but this is what we want to happen.
drlukeor1 karma
The University of Adelaide, in Australia. This is also where I am doing my PhD.
Darkleptomaniac1 karma
Awesome to see someone else from Adelaide! With how small this city is I bet Ive walked by you a few times now
drlukeor1 karma
Undoubtedly, especially if you are ever in the CBD around lunchtime (that's a radiology joke).
The one thing that has really surprised me in doing this research is the number of world-class researchers in Adelaide. The ACVT (the computer vision/deep learning lab at UoA) is massive and punches high above its weight on the world stage. We also have some amazing health researchers, and world-class infrastructure.
It is kind of like ... why Adelaide? Love the place to bits, but I wouldn't have expected the level of quality we attract.
desultoryquest1 karma
How does someone without a medical education go about learning enough about radiology to be able to interpret scans (CT/MRI)? Are there any books / resources you would recommend?
drlukeor1 karma
I think u/iswwitbrn is kind of right, that any resource that gives that information is built on a huge base of background knowledge.
That said, if someone understood all of www.radiopaedia.org, then they would be as good as or better than a radiologist.
Hells881 karma
Hi, I'm also a doctor and I want to know more about machine learning... as in... how hard is it to learn from scratch by using a MOOC?
drlukeor3 karma
I did it, so it can't be that hard!
My general approach was -
1) Basic programming course like this.
2) Basic data science or machine learning course (or computational biostats) - like this - here you can choose between python and R, which is nice. Both languages are useful.
3) Deep learning course. This one. No other. Only this one :)
4) At this stage, you know your strengths and weaknesses. Maths classes, algorithms, optimisation, information theory, NLP specific DL, reinforcement learning. Whatever you want.
5) At every step, keep reading the papers as they come out. r/machinelearning is great for finding what is cutting edge, as is twitter.
drlukeor1 karma
Deep learning systems learn their own features, which is why it is often called "feature learning" or "representation learning".
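As a toy illustration of what "learning its own features" means in practice (nothing to do with our specific model): once a network is trained, the activations of a late layer can be read out as a learned feature vector for each input.

```python
# Toy illustration of representation learning: the penultimate layer's
# activations are the "learned features". Model and data are made up.
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense

net = Sequential([
    Dense(32, activation='relu', input_shape=(100,)),
    Dense(16, activation='relu'),                 # learned representation
    Dense(1, activation='sigmoid'),
])

extractor = Model(inputs=net.input, outputs=net.layers[-2].output)
x = np.random.rand(5, 100).astype('float32')      # stand-in inputs
print(extractor.predict(x).shape)                  # (5, 16): one feature vector per input
```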
Mecjam1 karma
How much data exploration did you do to identify potential relationships? Why did you opt for your model to be a neural network over a regression or classification?
drlukeor1 karma
Regression and classification are broad categories of machine learning problems. Regression attempts to output continuous numbers (like 0 to 100), classification outputs categories (like yes or no).
Neural networks are capable of both sorts of tasks. In this case we did a classification task (will this patient die within five years), but we can also frame it as regression (how many days etc.).
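A toy sketch of the two framings (illustrative, not the paper's models): the same network body with either a classification head for "dies within five years?" or a regression head for "how many days?".

```python
# Classification vs regression framing of the same problem (toy example).
from keras.models import Sequential
from keras.layers import Dense

def body():
    return [Dense(32, activation='relu', input_shape=(100,))]

# Classification: probability of death within five years
clf = Sequential(body() + [Dense(1, activation='sigmoid')])
clf.compile(optimizer='adam', loss='binary_crossentropy')

# Regression: predicted survival time in days
reg = Sequential(body() + [Dense(1, activation='linear')])
reg.compile(optimizer='adam', loss='mean_squared_error')
```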
TheMimicer1 karma
How much will life expectancy rise within the next 20-30 years? Consider that we are working to 3D print living organs, and you are literally creating an AI to tell how healthy someone is and what is possibly wrong just by scanning them. Modern medicine is getting awesome.
drlukeor1 karma
The current trend is that life expectancy will rise by between 6 and 9 years in that time period, but I personally think we could do better than that. There is a lot of funding in extending human lifespans right now, with several clinical trials already underway.
One of the long-term goals of this research is to find responsive markers of longevity for use in trials like this, to reduce the amount of follow-up time needed to show efficacy.
mus0u1 karma
Oh man, I should really start lurking 'new' and 'rising' more often...
Hi, I'm a radiologist too! SO YER THE GUY TRYNA GIT A 'PUTER TO TAKE MAH JERB!
How did you approach joining/starting a project that isn't entirely within your professional field? Any obstacles you had to overcome?
I have my own interest in doing research in the field of neuroscience, particularly in regards to neurodegenerative disorders and neuroplasticity. But as it turns out, our health care system is only interested in filling jobs and getting quick returns from whatever it invests in.
drlukeor2 karma
Hi,
I know what you mean. Radiology in particular, at least in my neck of the woods, is ... disinterested ... in research. I suspect it is because of the massive increases in workloads over the last several decades; no-one really has time to spend on research.
I just reached out. I got turned away by a few people, it took time, but eventually I found a great match in the team I have now. There were certainly a few lucky coincidences, and I had good support from my early collaborators, but at the end of the day it only happened because I tried.
thebub771 karma
What do you think about AMD and Nvidia, and which companies are at the cutting edge of AI?
drlukeor2 karma
nvidia has a huge lead right now, and has essentially captured the whole market. AMD is starting to approach deep learning, but the difference in resources is massive.
They are both GPU companies though. nvidia works with partners and provides tools, but most of the cutting edge work isn't by nvidia. They definitely support research very strongly though, almost every paper from research centres has "GPUs donated by nvidia" at the bottom!
eyekwah217 karma
If we could accurately predict the percentage of death within the near future of the patient, is there anything that could be done to lower those odds after the fact?