Quote:
Originally Posted by AGDee
The way I read that, they didn't manipulate data to fit their needs; they adjusted it to account for different methods of data collection, which is not at all unusual in research.
Say you have 6 sites running the same protocol for a new cancer drug, and they rely on blood test lab data. Each of the 6 sites will have slightly different instruments which may be calibrated slightly differently, so comparing their raw data directly is NOT accurate. You take a control sample, figure out the variation at each site, and adjust the data according to that variation. The same thing is done when you get a new lab instrument: you can't compare the data from the old instrument to the data from the new one because there will be variation between them. Statisticians calculate the variance between the instruments and compare the adjusted numbers instead.

This is not sloppy; this is standard operating procedure. It *is* sloppy to dump the raw data. It's hard to believe it doesn't exist on backup tapes somewhere. THAT is sloppy, and it could just as easily be the fault of the IT department as of the scientists. Good IT people would never let that happen.
My two cents as an IT network administrator for a biostatistics department.
|
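The calibration procedure described in the quote above (measure a shared control at every site, estimate each site's offset from the reference value, then adjust that site's readings) can be sketched roughly as follows. All site names, readings, and the reference value here are made up for illustration, not taken from any actual study:

```python
# Hypothetical sketch of cross-site calibration against a shared control sample.
# Assumption: each site's instrument has a roughly constant additive bias, which
# its reading of the control (with known reference value) lets us estimate.

reference_value = 100.0  # known value of the control sample (made-up units)

# Each site's raw reading of the same control sample (hypothetical numbers)
control_readings = {"site_a": 102.5, "site_b": 98.7, "site_c": 100.9}

# Each site's raw subject measurements (hypothetical numbers)
raw_data = {
    "site_a": [55.1, 61.3, 48.9],
    "site_b": [52.0, 59.4, 47.2],
    "site_c": [54.3, 60.8, 48.5],
}

def calibrate(raw_data, control_readings, reference_value):
    """Subtract each site's estimated offset so values are comparable across sites."""
    adjusted = {}
    for site, values in raw_data.items():
        offset = control_readings[site] - reference_value  # site-specific bias
        adjusted[site] = [v - offset for v in values]
    return adjusted

adjusted = calibrate(raw_data, control_readings, reference_value)
```

The point of the sketch is that the adjustment is mechanical and auditable; anyone holding the raw data and the control readings can reproduce it, which is exactly why discarding the raw data is the real problem.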
It depends on which variables were "cherry picked". There is a statistical method that allows for machine variance, yes. But without the original data, all of the adjusted data is called into question. This was worse than sloppy, and when you put it in the context of the emails where they discuss using statistical "tricks", we have a situation that undermines the whole study.