Mobile-phone health research
Terms

RR or Risk Ratio [aka Relative Risk]: If the lifetime probability of developing lung cancer among smokers was 20% and among non-smokers 1%, then the relative risk of cancer from smoking would be 20 (20% ÷ 1%). If there is no increase in risk, the RR is 1. Some lobbyists will try to tell you that findings with an RR less than 3 can be ignored, but there are other factors to be considered. A robust finding (i.e. one supported by many studies) with an RR of 2 still means that users double their chance, and in large populations this might place millions of people at unnecessary risk.
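The smoking example above can be worked through in a few lines of Python. The counts below are invented purely to reproduce the article's hypothetical 20% and 1% figures:

```python
# Relative risk (RR): incidence in the exposed group divided by
# incidence in the unexposed group. Hypothetical counts matching
# the article's example of 20% vs 1% lifetime risk.
exposed_cases, exposed_total = 20, 100      # 20% among smokers
unexposed_cases, unexposed_total = 1, 100   # 1% among non-smokers

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed

print(rr)  # 20.0 -- smokers' risk is 20 times that of non-smokers
```

An RR of 1.0 would mean no difference between the groups; an RR of 2.0 would mean the exposed group's risk is doubled.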
Confidence Level [or Interval]: Statisticians generally apply two different confidence standards to their study findings to express the likelihood of error. If the possibility of the result arising by chance is one in twenty similar studies [as determined statistically], then the finding is said to be at the 5% level [a 95% confidence level]. At this level it is considered to be established, but requires further investigation and confirmation. If a higher standard of 'proof' is achieved, and the possibility of the result being a chance finding is only one in a hundred, then it is said to be at the 1% level of confidence [99%], and is normally claimed as 'proved'.
Level of significance: [as above] The medical research community generally holds that if the results of a study are unlikely to arise by chance in 95 studies out of 100, then it has a level of significance of 5%. At 99 out of 100, the significance level is 1% [i.e. it is MORE significant]. The more significant results can arise from very dramatic differences between the test and control groups with only relatively few randomly selected subjects in the trial, or from very subtle-but-consistent differences in large numbers of subjects.
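That last point can be illustrated with a standard two-proportion z-test. The numbers below are invented: one comparison shows a dramatic difference in a small trial, the other a subtle difference in a large one, and both clear the z ≈ 2.58 threshold that corresponds to the 1% significance level:

```python
import math

# Two-proportion z-test: the same high z-score (hence the same small
# chance of a fluke) can come from a big effect in a small trial or a
# small-but-consistent effect in a large one. All counts are invented.
def z_two_proportions(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z_small = z_two_proportions(18, 25, 6, 25)          # 72% vs 24%, n = 50
z_large = z_two_proportions(5400, 10000, 5060, 10000)  # 54% vs 50.6%, n = 20000

print(z_small > 2.58, z_large > 2.58)  # True True -- both significant at 1%
```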
Odds ratio: The ratio of the odds of an event occurring in one group to the odds of it occurring in another group (men to women, for example).
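A quick sketch of the calculation, using an invented 2×2 table (the counts are illustrative only, not from any study):

```python
# Odds ratio (OR): odds of the event in one group divided by the odds
# in another. Hypothetical 2x2 table:
#                event   no event
#   group A:       30        70
#   group B:       10        90
odds_a = 30 / 70              # odds of the event in group A
odds_b = 10 / 90              # odds of the event in group B
odds_ratio = odds_a / odds_b  # equivalently (30 * 90) / (70 * 10)

print(round(odds_ratio, 2))  # 3.86
```

Note that odds (events ÷ non-events) are not the same as risk (events ÷ total), so the odds ratio and the risk ratio differ unless the event is rare.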
Meta-analysis: A statistical technique used mainly by regulators to combine a number of different (but closely related) studies to demonstrate the robustness of the findings. See the fuller discussion below.
Cohort: This is just a group or panel of subjects selected on the basis of shared characteristics.
Retrospective: A study that looks back on past experiences using existing statistical data.
Prospective: A study that recruits unaffected people with the aim of following them for a number of years to see what differences in their lifestyle or environment result in health problems. It is a long-term, or 'longitudinal', study.
Layman's guide to terms, problems and research types.
However, if the statistical links are weak, then the best we can say is that "X may be a cause of Y". In these cases other factors enter into contention: if the potential health risks are serious, or the potential for harm is widespread (or both), then it is still reasonable to advocate that the "precautionary principle" should apply: we should limit public exposure until the science is more robust. However, "limit" does not necessarily mean "ban", and the positive benefits to the community of retaining the product must also be weighed in the balance.
Epidemiological studies can be of a number of different types:
Q. What does it take to be an epidemiologist?
A. A pocket calculator.
The fact is that some epidemiologists have the best university qualifications in advanced statistical techniques and an extensive background in biological research, while others have nothing more than a high-school diploma with basic maths, and a registered business name which suggests higher qualifications and a large skilled staff of dedicated professionals. They use names like "Health and Environmental Research Associates" or "Consolidated Research Services". It also helps to have a web site and memberships in health-related committees, groups or societies (the more entrepreneurial of them start their own associations).
Q. What is the difference between an epidemiologist and a biostatistician?
A. Not much. The epidemiologist is mainly concerned with population studies, while the biostatistician looks more at the statistical work involved in laboratory research. But there is a strong cross-over, and there are shonks on both sides.
Q. How reliable are epidemiological studies?
A. As reliable as the person who designs and performs them.
Good epidemiology is an invaluable tool, and over the centuries, this kind of statistical research has probably saved as many lives through identifying public health risks as all the other forms of laboratory research combined. The first great public-health epidemiological study was done by Dr John Snow, who showed that a London cholera epidemic was caused by contaminated water. Before that, cholera killed hundreds of thousands of people every year around the world. Now it is confined only to the slum areas of underdeveloped countries. An enormous number of health problems and epidemic diseases have been identified by epidemiological studies.
Bad epidemiology is also rife, and often very difficult to identify. The worst area is probably that of modern processed food standards and nutrition, where vested interests in the form of trade associations protect the business of everything from dairy products to broccoli. They will all have their tame epidemiologists/nutritionists ready to run out a few quick studies whenever sales are threatened. These games are played by the seemingly good guys as well as the bad (e.g. newspaper headlines: "Broccoli Reduces Colon Cancer, say Experts"). The evidence changes by the week, depending on which sensational story the local newspaper has culled from the latest trade press release.
Q. What happens when one epidemiological study produces results that directly contradict another's? How do we balance the weight between studies pointing to opposite conclusions?
A. This is actually a PR fiction. Epidemiological studies rarely (if ever) contradict each other, for the simple reason that, while it is possible to show a statistical link between X and Y, it is impossible to show NO statistical link. At best, you can only say "the design and conduct of this study failed to reveal any ... etc.", which does not disprove the first finding, but simply fails to support it.
A 'no evidence' result could be due to the failure in the design, or inadequate numbers of subjects, or sloppiness in the way the study was conducted, or arise just by chance.
Q. How important is the need for replication? When a replication fails to duplicate the result, which study should be considered the most important?
A. No research finding should ever be considered as established without either a) a full replication that duplicates all of the original study conditions (but perhaps with a greater number of subjects), or b) closely related, robust findings (perhaps in other biomedical disciplines) that have produced parallel results. Exact replication isn't always needed.
There will always be a political cry for studies to be replicated whenever they indicate harmful potential in a product, and there can be little doubt that this is important. However often the only organisations able to fund such replications are governments or trade-associations with vested interests. Independent research organisations like universities (notoriously short of funds) need to make new discoveries for their survival, not confirm old ones. So their efforts tend to be directed toward leading-edge research, not replication.
This means that studies which reveal a potential public health risk often go unreplicated -- and the industry involved will then cry loudly at every available opportunity that "It should not be treated seriously, because it hasn't been replicated." This has proved to be a highly effective way of stalling political action.
It is also important to recognise that governments often have vested interests in not getting involved. And, in an unfettered free-enterprise system, governments expect industries to look after themselves and their own problems -- including funding research into adverse effects of their products [which is extraordinarily naive]. As a result, studies that turn up potential dangers to the public are rarely replicated unless they are so obvious and important that political pressure is brought to bear on the regulatory authorities. Even then, the replication work is often done by supposedly 'independent' research groups, who are funded, directed, and therefore controlled by those with vested interests. The independence of a research scientist doesn't just extend to the duration of the one study: it can also be compromised by the long-term expectations of future funding, by the prospects of travel to international conferences, and by general political ideology. This applies also to research done in support of health- and environmental-activism.
Q. What value should we put on meta-analysis of epidemiological studies?
A. Meta-analysis is the name applied to a range of techniques used by epidemiologists to combine numerous other studies, and thereby create what appears to be a more robust finding. It is most often used by regulatory agencies in areas of health and environmental research, where they are attempting to arbitrate between various claims.
For instance, they might have on record a half-dozen small studies done at various universities around the world which show, through similar epidemiological research, that the consumption of X is related to health condition Y. However, perhaps only one or two of these studies have been done with enough subjects to be statistically significant at the 1% level [only likely to have arisen by chance once in every hundred similar studies]. The others may all be at the 5% or lesser levels of confidence, and some may have found no linkage whatsoever.
Clearly, those studies at the 5% level of confidence [only a one-in-twenty likelihood of having arisen by chance] should carry some weight in regulatory determinations even though none of them is considered 'proven'. So special statistical techniques are used to provide "weighting" to each of the many studies according to the quality of its findings, taking into account the differences in methodology. These are then consolidated to produce a single-figure outcome: so in effect, meta-analysis attempts to treat a multiplicity of research studies as if they were all one.
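One common form of this weighting is fixed-effect, inverse-variance pooling: each study's result (here, a log risk ratio) is weighted by 1/variance, so larger and more precise studies count for more. This is a simplified sketch with invented numbers, not a full meta-analysis (it ignores between-study heterogeneity, quality scoring and publication bias):

```python
import math

# Fixed-effect inverse-variance pooling (sketch). Each study supplies a
# log risk ratio and a standard error; weights are 1 / SE^2, so the
# large, precise study dominates the pooled estimate. Numbers invented.
studies = [
    # (log RR, standard error)
    (math.log(1.8), 0.40),   # small study, elevated risk
    (math.log(2.2), 0.35),   # small study, elevated risk
    (math.log(1.1), 0.15),   # large study, near-null result
]

weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)

pooled_rr = math.exp(pooled_log_rr)
print(round(pooled_rr, 2))  # 1.28 -- pulled toward the large study's 1.1
```

The design choice to weight by precision is exactly what lets small university studies contribute without swamping (or being swamped by) one large trial.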
The problem with meta-analysis is that it depends on many judgements and assumptions, and on the choice of statistical techniques applied. However it is obviously a useful tool when used by regulators (provided they aren't crusading). It is especially important because it brings into the regulatory picture the more trustworthy research done by independent research groups at universities. Because of the chronic lack of funding, these groups are more likely to produce marginal findings -- while industry research is generally well-funded and therefore more likely to produce 'robust' (but perhaps less-trustworthy) findings.
Without meta-analysis, many independent research findings would simply be ignored, and only the larger industry-funded projects would be available to the regulators in making their determination about rules.