CowboyNinjaAstronaut
Highest Rated Comments
CowboyNinjaAstronaut16 karma
The thing is it's supposed to be used to collect information on foreign targets, and there are no fourth amendment protections for that. In reality, they're also feeding data on all Americans into the system and using flimsy excuses to spy on them, too.
But yes, you can (and must) have procedures for military/national security organizations to spy on foreign targets. You can't expect the fourth amendment to apply to the CIA when bugging the Soviet embassy during the Cold War.
Using those same systems arbitrarily against Americans, though, is a completely different story.
CowboyNinjaAstronaut9 karma
I read your whitepaper, your report and looked at the surgeon scorecard tool and I'm a little concerned.
First, my potential bias and background: I am an analyst at a hospital with an engineering degree. I know little about medicine, however. Little of what I do is related to clinical practice, and when it is I am guided by a medical professional.
However,
1) as an analyst and engineer my concern is truth, and as a human being and Catholic my concern is the patients. I have no finances or reputation at stake with regards to your report, so I assert that my criticism is well-intentioned.
2) even if I were self-interested, the surgeons at the hospital for which I work performed well on your scorecard.
This all looks very, very close to providing people with medical advice, which should be scrutinized and vetted to the very highest standard. People may well make life or death decisions based on the tool you’ve published.
The reporting tells a compelling story, but at first blush I do not think the analysis supports the conclusions presented in the scorecard tool. You’ve identified a difference in the results of different surgeons and have eliminated some confounding variables (hospital, age, sex, and to some extent “health”) but you haven’t actually explained the differences. And then you present a scorecard for the surgeon, as if the surgeon is the explanation. The surgeon may be! But I don’t think you’ve established that.
It looks to me as though you’ve identified a great area to research (“Why are this surgeon’s patients doing better than this surgeon’s patients?”) but then you’ve presented a conclusion (“this surgeon is better than this surgeon!”). That does not seem to logically follow, and is a dangerous thing to say. This looks like an alpha test tool, and you’re announcing it to the public as a finished product. I don't know this, as I've only been aware of your project for a few hours, so I will not give your report a scorecard. Instead of coming to a conclusion about your report, I will ask questions.
1) What were the exact dimensions you had attached to each surgery in your analysis? And I mean every single one. Not a general “health index,” but what specific measures did you have about each surgery?
2) This appears to be based off of billing data, and not clinical data. I know clinical data is very difficult to work with, given ethical concerns, HIPAA, modeling, differences in documentation practices, etc. But not being able to get clinical data is not an excuse for drawing conclusions without clinical data that should require clinical data. Why are your conclusions acceptable without clinical data?
3) I don’t know anything about the Elixhauser comorbidity measure. What are the reasons this is or is not an acceptable dimension for this study, and how was it calculated in this study?
4) Are you asserting that a surgeon with a poor score via your method is a poor surgeon? Or simply that he may be a poor surgeon?
5) To what lengths have you gone to eliminate confounding variables? Essentially, where would you go from here?
If you’re going to score professionals, you must have a good model, or you present people with bad data, which is worse than no data at all. Worse still, you influence people to conform to the poor model. Scoring teachers based on the performance of their students on standardized tests may not be the best idea. Conforming to an imperfect but “conclusive” model can result in worse performance.
I’m skeptical that you have created an adequate model for the performance of a surgeon. You may have. I’m not saying you haven’t.
But it would be a mistake for me to score your tool one way or the other with insufficient data and analysis.
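To make the case-mix concern concrete, here is a toy observed-vs-expected (O/E) comparison, a standard risk-adjustment idea — not necessarily the method the scorecard actually uses. Both surgeons, all case counts, and every probability below are hypothetical, and the "expected" probabilities are assumed to come from some baseline model trained elsewhere.

```python
# Illustrative sketch only (hypothetical data, not the scorecard's method):
# each case carries a predicted complication probability from a baseline
# risk model; a surgeon's O/E ratio compares observed complications to the
# sum of those expectations, so case mix is accounted for.

def oe_ratio(cases):
    """cases: list of (had_complication: bool, expected_prob: float)."""
    observed = sum(1 for had, _ in cases if had)
    expected = sum(p for _, p in cases)
    return observed / expected if expected else float("nan")

# Two hypothetical surgeons with very different case mixes.
surgeon_a = [(True, 0.10), (False, 0.10), (False, 0.05), (False, 0.05)]  # low-risk patients
surgeon_b = [(True, 0.30), (False, 0.25), (True, 0.30), (False, 0.15)]   # high-risk patients

# Raw complication rates: A = 1/4 = 0.25, B = 2/4 = 0.50 -- B looks worse.
# O/E ratios: A = 1 / 0.30 = 3.33, B = 2 / 1.00 = 2.0 -- after adjusting
# for case mix, A's patients actually did worse relative to expectation.
```

The point of the sketch is that a raw complication rate and a risk-adjusted one can rank the same two surgeons in opposite orders, which is why the exact covariates behind any "expected" figure matter so much.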
ETA: Transparency is valuable. Part of my concern is based on the fact that others cannot have access to the data sets you used to create these scorecards, for obvious reasons. However, your methods can be completely transparent, but do not seem to be. In the whitepaper you mentioned you used R. Can you release the R code (and any other source code and data definition language) used to generate the scorecard numbers? While the data can't be made public, I see no reason the algorithm can't be made public for review and critique.
CowboyNinjaAstronaut28 karma
From the legal definition of entrapment, nothing TCAP does is anywhere close to it. It's a trap(!), sure, but it's not entrapment.
If you're free to walk away, it's not entrapment. Entrapment requires coercion. Threats.
So even if the decoy was begging for sex...not entrapment. You can still say no and not show up at the house. Even if they offered to pay a million dollars, still not entrapment. You don't have to take it.
But if they (credibly) threaten to kill you or something if you don't do it, that's entrapment.
There's a difference between a trap and entrapment.
ETA: oh and even then that's only if you're talking about the state doing it. I think TCAP works with law enforcement, so that would count. If a private individual coerced you into committing a crime you'd have a duress defense depending on the severity of the crimes and the nature of the threats. Assuming you didn't kill anybody. There's no duress defense for murder.