I'd also like to inject a note of caution. If you look up information on the internet, you'll get a variety of suggestions and ideas from different websites, and some of them might indeed be quite valuable. As a cautious social scientist, however, I encourage you all to go beyond the testimonials and individual anecdotes of success, which can be very compelling.
In biomedical research, the gold standard is a double-blind study, conducted with sufficient rigor to achieve publication in a peer-reviewed journal, that shows a measurable positive effect. I will explain what that means for anyone who's unfamiliar with how positivist scientific research works.
For a double-blind study, you have two groups -- one that gets the treatment you're testing, and one that gets something that looks like the treatment but should have no effect. Research participants are assigned to one group or the other randomly, and they're selected from a comparable range so that other factors are held roughly constant (e.g., if you're testing a supplement for injury recovery, you probably don't want 10 participants ranging from 70-year-olds with degenerative arthritis to 8-year-olds with acute gymnastics injuries). When participants are assigned to the treatment or placebo arm of the study, the researchers who evaluate the effects of the treatment don't know who's in which group. The study is run, the researchers collect the data, and they analyze it to see what difference, if any, the treatment made.
A funny thing about fairly developed consciousnesses (including some animals) is that providing any kind of treatment at all, even one with no predicted clinical effect, will make some individuals feel better (the placebo effect). So the question is whether you can find a measurable difference between the people who got the tested treatment and the ones who just felt better because their brains told them they should. Here's why double-blinding is so important -- neither the participants being treated nor the researchers know who's getting the actual treatment, so the researchers can figure out whether the treatment itself is having any effect without unconsciously cooking the books in favor of the finding that everyone wants to see.
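To make that concrete, here's a toy simulation in Python. Every number in it (the placebo boost, the treatment effect, the sample size) is made up purely for illustration -- this isn't data from any real trial. The point is just that both arms "improve," and the real question is whether the treatment arm improves measurably more than the placebo arm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_per_arm = 100        # hypothetical trial size
placebo_boost = 1.0    # everyone improves a little just from getting *something*
true_effect = 0.5      # extra improvement from the real treatment (made up)
noise = 2.0            # person-to-person variability

# Simulated improvement scores for each arm
placebo_arm = rng.normal(placebo_boost, noise, n_per_arm)
treatment_arm = rng.normal(placebo_boost + true_effect, noise, n_per_arm)

# The question isn't "did treated people improve?" (they will, thanks to the
# placebo effect) but "did they improve *more than* the placebo group?"
t_stat, p_value = stats.ttest_ind(treatment_arm, placebo_arm)
print(f"placebo arm mean improvement:   {placebo_arm.mean():.2f}")
print(f"treatment arm mean improvement: {treatment_arm.mean():.2f}")
print(f"p-value for the difference:     {p_value:.3f}")
```

Notice that the placebo arm's average improvement is well above zero even though it got nothing -- which is exactly why comparing the treated group against "no treatment at all" would mislead you.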
Once the researchers have done their study, they write it up, explaining exactly how they conducted it, and send it out to journals in their field. The (good) journals will then do some blinding of their own! They'll strip away the researchers' identities and send the article out to other scientists for evaluation. Those scientists will read the article without knowing who did the research and then provide their evaluation back to the journal. The original authors are then able to respond to the evaluators' suggestions and improve the piece without knowing the evaluators' identities. Sometimes -- at the best journals, most of the time -- evaluators will look at a study, determine that it's not good enough, and the article will be rejected.
However, even once something has made it through this crazy process, you can still engage in independent critical evaluation yourself! Here are some suggestions to make you a better consumer of published research:
- What's the gap between the breathless news report and the actual research itself? A lot of the time we hear about a study through some news outlet. The news outlet, seeking more clicks and attention, has puffed things up a bit to make them more dramatic. The news outlet itself probably got the story from a university press release, which had already puffed things up so that the university's name would get into the news. If you patiently follow the links backward, you'll eventually get to the actual study itself, which may or may not live up to the hype.
- What kind of study was it? If you see the word "pilot" in the abstract (the brief summary at the top of an article), this indicates that the research is just beginning and the researchers are using this publication to help them launch a much more comprehensive investigation.
- How many participants were involved? The more, the better is a good general rule of thumb, and the more participants a study has, the more weight you can put on its statistical analysis. A study involving 25 people, no matter how strong the findings, is just not something you should bet the house on (see the little simulation after this list). It may be highly suggestive, but the main thing it suggests is that more research should be done. Every one of us can easily identify 25 people in our lives who are weird in some way, right?
- Is it a study or a metastudy? A study is based on primary research that the researchers have done themselves. A metastudy is a review that pools a number of different studies that have already been conducted, and it's only as good as the individual studies that go into it.
- Who funded the research? NSF or NEH? Terrific. An industry group? Well, perhaps a reason to take the findings with a grain of salt.
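And here's the promised sketch of the sample-size point -- again a toy simulation with made-up numbers, not data from any real study. It runs the same hypothetical trial many times with 25 participants per arm and then with 250, and compares how much the estimated treatment effect bounces around in each case.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.5   # hypothetical real difference between treatment and placebo
noise = 2.0         # person-to-person variability

def simulated_trial(n_per_arm):
    """Run one fake trial and return the estimated treatment effect."""
    placebo = rng.normal(0.0, noise, n_per_arm)
    treatment = rng.normal(true_effect, noise, n_per_arm)
    return treatment.mean() - placebo.mean()

for n in (25, 250):
    estimates = [simulated_trial(n) for _ in range(1000)]
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n={n:3d} per arm: 95% of estimated effects fall between {lo:+.2f} and {hi:+.2f}")
```

With these made-up numbers, the 25-per-arm trials swing between roughly no effect (or even a harmful-looking one) and several times the true value, while the 250-per-arm trials cluster near the truth. That's the sense in which a small study is suggestive rather than conclusive.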
One final thing to remember is that science has a peculiar bias against recognizing null results as a scientific advance. What I mean is that a finding that something doesn't play a causal role, or that some phenomenon isn't actually happening, is harder to get published than a finding that something is having an effect.
[climbs down off soapbox, goes back to own research project]