Highest Rated Comments


drlukeor · 36 karma

Absolutely, that is the whole reason for doing the research. The obvious thing would be seeing that you have a higher risk of, for example, cardiac disease, and using the knowledge to make some lifestyle changes.

But it goes well beyond that. If we can accurately quantify ill-health in a comprehensive manner, then we can start looking for patient subgroups that need different treatment.

Maybe a certain combination of diseases means therapy should be more aggressive. Or maybe an unexpectedly high level of underlying disease would change the patient's decision to have an elective surgery, because the risk of a bad outcome is too high in that individual.

We don't know for sure how this will play out yet, but the entire goal of precision medicine is to significantly improve patient outcomes.

drlukeor · 17 karma

I'm just finishing up an 8-month sabbatical where I didn't do any clinical work, but my general balance is 2 days of clinical work per week, 2 days of research, and 1 weekday with my daughter.

My clinical work is paid, my research work is not. I made the choice to halve my income to pursue this research, definitely not a decision for everyone.

That said, I could probably get paid (well) to work in a company doing AI stuff. But I am happy with my balance right now, getting to explore things I think are valuable rather than profitable.

drlukeor · 15 karma

Well, there is a skin-check app that performs as well as a dermatologist and is already being tested in clinics. There is a Google ophthalmology system rolling out in India. There are chest x-ray screening tools on the ground in China.

In the "low hanging fruit" areas like this, we will see a pretty rapid proliferation of systems. How fast they get taken up, and how fast they pass through regulatory bodies like the FDA is another question.

I haven't tried to put exact numbers on it before, but at a rough guess I would expect AI systems to be reasonably common in medical practice in Western countries within maybe 5 years. Even then, they will probably only do a small fraction of medical work.

drlukeor · 10 karma

I'm not sure about the exact focus of this question, but I'll take a crack at it.

This study was a proof of concept, so we had a very small dataset (only 48 cases). There was a huge time cost in collecting this data, because I segmented every organ on every slice (drew a line around the edge of each organ). It took months.

Small datasets are a major challenge in machine learning, but they aren't insurmountable. We have to take the results for what they are: an indication of whether the approach works, with a certain range of uncertainty.
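
Just to give a feel for how wide that uncertainty is, here is a rough, purely illustrative sketch (made-up predictions, not our actual results) of bootstrapping a confidence interval for accuracy on a 48-case test set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labels and predictions for a 48-case test set
# (illustrative only, not the study's data).
n_cases = 48
y_true = rng.integers(0, 2, size=n_cases)
y_pred = np.where(rng.random(n_cases) < 0.7, y_true, 1 - y_true)  # roughly 70% accurate

# Resample cases with replacement to see how much the accuracy estimate moves around.
boot = []
for _ in range(10_000):
    idx = rng.integers(0, n_cases, size=n_cases)
    boot.append(np.mean(y_true[idx] == y_pred[idx]))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"accuracy = {np.mean(y_true == y_pred):.2f}, 95% CI roughly [{low:.2f}, {high:.2f}]")
```

With only 48 cases the interval spans well over ten percentage points in each direction, which is why we treat the result as an indication rather than a definitive number.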

All that said, one of the reasons we chose mortality as an outcome to study is that the data for labeling is quite easy to collect. Governments keep very complete records of births and deaths, so you don't have to struggle to find accurate labels.

We are currently expanding our research into tens of thousands of cases, which will be large enough to get a very good idea of performance.
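
As a toy illustration of why death records make such convenient labels (hypothetical tables, not our registry data), deriving a "died within 5 years of the scan" label is essentially just a join:

```python
import pandas as pd

# Hypothetical imaging cases and a hypothetical death registry extract.
cases = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "scan_date": pd.to_datetime(["2005-03-01", "2006-07-15", "2007-01-20"]),
})
deaths = pd.DataFrame({
    "patient_id": [2],
    "death_date": pd.to_datetime(["2009-11-02"]),
})

# Left join, then flag deaths within 5 years of the scan.
# Patients with no death record get NaT, which compares as False (still alive).
labelled = cases.merge(deaths, on="patient_id", how="left")
labelled["died_within_5y"] = (labelled["death_date"] - labelled["scan_date"]).dt.days < 5 * 365
print(labelled[["patient_id", "died_within_5y"]])
```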

drlukeor · 5 karma

There are a few ways to think about free text with deep learning.

The optimistic way is that natural language processing has lagged behind image analysis by a few years, but it is getting there. We are probably where image analysis was in 2013/2014. It has only been in the last year or two that deep learning has consistently outperformed decades-old (and not very 'smart') techniques when dealing with text.

But we really are starting to see breakthroughs now. In translation, text understanding and so on, results are improving at a pretty fast pace, and we will see those results in medicine soon enough.

In my own work, we have recently trained some models that vastly outperform LSI (the old technology) on an entailment/contradiction task, which is a building block of language understanding. So it is worth being hopeful.
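
For context, LSI is roughly: TF-IDF vectors projected into a low-dimensional space with an SVD, then compared by cosine similarity. Here is a toy sketch (invented sentences, not our code) of why that struggles with entailment and contradiction: surface word overlap can't tell "denies chest pain" from "reports chest pain".

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny invented corpus to fit the LSI space on.
corpus = [
    "patient denies chest pain",
    "no chest pain reported",
    "patient reports severe chest pain",
    "mild cough for three days",
]

# Latent Semantic Indexing: TF-IDF followed by a low-rank SVD projection.
tfidf = TfidfVectorizer().fit(corpus)
lsi = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(corpus))

def lsi_vector(text):
    return lsi.transform(tfidf.transform([text]))

premise = "patient denies chest pain"
for hypothesis in ["no chest pain reported", "patient reports severe chest pain"]:
    sim = cosine_similarity(lsi_vector(premise), lsi_vector(hypothesis))[0, 0]
    print(f"{hypothesis!r}: cosine similarity = {sim:.2f}")

# LSI only sees (smoothed) word overlap, so "reports severe chest pain" can look
# just as close to the premise as the paraphrase that actually entails it. Models
# that actually read the sentences are what finally improved on this.
```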

The pessimistic way to see it is that our language models are still pretty bad, and medical free text is awful data. Most of what a doctor or nurse uses to make a decision is never written down, so you only get a partial story. What is written down uses highly local vocabulary; go one town over and the same note will be written quite differently. And the data is really hard to get to, because interoperability is limited. Even just extracting and labeling data to train models with is often an impossible task.

It might be that medical data is just hard, and will remain hard for the foreseeable future.

I sit somewhere in between the two. I think we are making headway, and each step reveals new fruit within reach. But understanding medical text in a comprehensive way is not happening in the near future.