
How to analyze a research article

Beth Bradford

October 25, 2022

Is this research any good? How can you tell? Check out this guide on how to analyze research before you share it on social media.

In this era of misinformation, we’re called to question what we read and hear. This means doing a little more homework: challenging sources, questioning their motivations, and recognizing marketing efforts disguised as truth.


Being a responsible media consumer can be difficult, particularly if we don’t have a background in science or research. To be honest, much research writing is dizzying because it’s written in language understandable only to those in that discipline. Even those in the discipline might say, “Whaaaat?”


However, there are a few things you can spot-check in a research study to discern whether or not it is credible. I’ll use a research study that was sent to me as part of a marketing email for a mobile app on acupoint tapping therapy.


Authors/experimenter bias

Even if you can’t judge the legitimacy of the research publication itself, you can learn a lot about the credibility of a study from its authors. You might have learned about an author from a recent book he published, but that doesn’t indicate that the research is credible. After all, anyone can publish a book if they have enough money to self-publish. What’s most important is the education and affiliation of the authors.


The education of the author can provide information about their research experience. In this article, the first author has a Ph.D., but you can’t stop there. This might sound snooty, but where the author got his Ph.D. can be telling. This author’s Th.D. comes from a graduate seminary that isn’t accredited by a theological or educational governing body. The author’s Ph.D. is even more alarming because a Google search led me to a university in Russia that doesn’t offer his particular specialty. After a little more searching, I found that this “university” isn’t a university at all. It’s a self-study “religious-exempt” program with no information about its faculty. The only person listed on its website is the founder and dean, who died in 2019. Because this is a non-accredited religious program, it doesn’t provide the tools for research. Even if this isn’t the institution that granted him his Ph.D., this shows a lack of transparency on the author’s part.


The affiliation of the author can give insight into whether or not there is a conflict of interest. Most researchers work at a hospital or university, where research is part of their job. Researchers will sometimes receive grant money from nonprofits or corporations and will disclose this in the research document. For example, a study about the health benefits of milk might be somewhat skewed if the research was funded by a dairy association. Although the researchers in this study claimed no affiliation with the mobile app, the first two researchers receive compensation for the technique described. In other words, these authors get paid to speak and write about the therapy described in the research. This means the results could be skewed in order to advocate for the technique. It could also mean that the researchers might not report results that didn’t support their hypotheses.


Participants/expectancy

The number of participants in a study can influence whether or not its results are “statistically significant,” which means the effect is less likely to be a result of random error. However, that can also be misleading, because a large sample might produce significant results even if the study’s effect is relatively minor. It’s also important to look at who the researchers chose as participants. Were they randomly assigned to conditions, or did they self-select their conditions? If participants know what the study is about, it’s possible they can bias the results because they are expecting a specific change to occur. Although this study collected almost 400,000 sessions, it didn’t consider the possibility that the same participants could have logged several sessions. Even so, this number of data points is substantial. However, consider the participants themselves. If they are familiar enough with this tapping technique to download an app, they likely already believe the technique is effective. They also might feel a little more invested in the technique since they are paying a fee for the app.
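To see why a huge sample alone doesn’t mean much, here is a minimal sketch in Python. It’s my own illustration with made-up numbers, not data from this study: with hundreds of thousands of observations, even a trivially small difference comes out “statistically significant.”

```python
# Minimal sketch (made-up numbers, not data from this study): a huge sample can
# make a negligible difference "statistically significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 200_000                                        # one very large group per condition
control = rng.normal(loc=5.00, scale=2.0, size=n)  # e.g., stress ratings averaging 5.00
treated = rng.normal(loc=4.95, scale=2.0, size=n)  # true difference of only 0.05 points

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference: {treated.mean() - control.mean():+.3f}")
print(f"p-value: {p_value:.1e}")                   # far below 0.05 despite a trivial effect
```

Statistical significance only tells you the difference probably isn’t pure chance; it says nothing about whether the difference is big enough to matter.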


Control condition

A good research experiment has a control condition, which is the condition that is not subjected to the treatment or manipulation. A “true” experiment will randomly assign participants to the treatment or control condition so that differences between conditions can’t be attributed to initial differences among the participants themselves. The control condition helps determine whether or not the outcomes can be attributed to the treatment. Many times, experimenters will include a placebo condition so that participants in the control group believe they are receiving the treatment. This particular study lacked a control condition or placebo, so it’s difficult to know if the effects are due to the treatment or to participants’ belief in the treatment.
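If it helps to picture it, random assignment is simple in principle. Here is a purely illustrative sketch, my own and not anything from the study, of splitting hypothetical participants into treatment and control groups at random.

```python
# Purely illustrative sketch (my own, not from the study) of random assignment.
import random

random.seed(42)
participants = [f"P{i:03d}" for i in range(1, 41)]   # 40 hypothetical participant IDs

random.shuffle(participants)                         # randomize the order
half = len(participants) // 2
treatment_group = participants[:half]                # would receive the tapping procedure
control_group = participants[half:]                  # would receive a placebo or no treatment

print(len(treatment_group), "in treatment,", len(control_group), "in control")
```

Because chance decides who ends up in which group, any pre-existing differences among participants get spread roughly evenly across the two conditions.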


Procedure and measures

How researchers execute a study is also worth investigating. Sometimes running one condition at a certain time of day can produce varied results compared to running the control condition at a different time of day. Usually, researchers will aim to keep the procedure consistent across all conditions so that any effects can’t be attributed to procedural differences. Along the same lines, how the researchers measure the variables can influence the outcomes. If participants are given a pretest before a manipulation, researchers will often add other tests so that participants don’t try to guess the nature of the study. The measures themselves usually involve several statements rated along a 5- or 7-point agree/disagree continuum, which reflects the complexity of the concept being measured.


For example, if you are experiencing stress, you might feel it to varying degrees in your body and mind. It might include a sense of lacking control, a momentary burst of anger, or an inability to focus. Therefore, researchers might administer the Perceived Stress Scale to cover the dimensions of stress people might encounter. This study asked participants to rate their degree of stress or anxiety on a scale of 1 to 11 before they engaged in the tapping procedure. Immediately after the procedure, participants rated their levels again. It’s possible that participants remembered the number from before the tapping procedure and felt compelled to mark it lower afterwards. What is not disclosed in the study (I found this out by downloading the app) is that users are encouraged to repeat the procedure if the number doesn’t decline by two. This could explain the significant 2- and 3-point change from pre-session to post-session.
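To show how much that rule alone could matter, here is a rough simulation of my own, with assumed numbers rather than the study’s data: ratings simply bounce around at random, the tapping does nothing at all, yet repeating the procedure until the rating falls by two points still produces a sizable average drop.

```python
# Rough simulation (assumed numbers, not the study's data): tapping has no real
# effect, ratings just fluctuate randomly, but users repeat the procedure until
# their rating falls by at least 2 points before recording the "post" score.
import random

random.seed(1)

def one_session():
    pre = random.randint(4, 9)                               # hypothetical starting rating (1-11 scale)
    post = pre
    for _ in range(10):                                      # give up after 10 repetitions
        post = max(1, min(11, pre + random.randint(-3, 3)))  # random fluctuation, no true effect
        if pre - post >= 2:                                  # stop once the rating has dropped by 2
            break
    return pre - post

drops = [one_session() for _ in range(10_000)]
print(f"average recorded drop: {sum(drops) / len(drops):.2f} points")
```

Even though the simulated tapping does nothing, the recorded drop averages over two points, simply because sessions that don’t show a drop get repeated until they do.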


Results and discussion: what didn’t they find?

Most of the time, a study is published because researchers found something of note. This can be problematic, because a study that finds no effects can be equally informative. When researchers conduct a study, they often won’t find support for all of their questions. The researchers will specify which hypotheses found support and which did not. The discussion section usually explains these results in light of other research findings. You’ll also find the limitations of the study, which discuss problems with procedures, measures, and bias. This study acknowledged that because participants were self-selected, the effects were biased towards the technique. The researchers also pointed out the problem of using simple intensity ratings rather than validated measures of stress and anxiety.


Check your motivation and bias

Finally, it’s important to look at your own biases when reading research. Is your motivation to disprove a particular research agenda or to satisfy your own curiosity? For myself, I became curious about this tapping technique because I am always on the lookout for new alternatives to supplement traditional practices. What started as simple curiosity became a more complex concern about the proliferation of pseudoscience reaching the mainstream.

It’s also exciting to see how research evolves. The scientific process is always challenging its currently held assumptions, and sometimes old ways of understanding the world gradually shift as more research is conducted. Remember that at one point in history we thought the world was flat and the sun revolved around Earth.


Reading original research articles can be a challenge, but it’s better than accepting what you read at face value. Not only can you learn something new, but you can also share your informed perspective with others.

