NUR 699 GC Week 3 Discussion 2: Critically Appraising Quantitative Studies

 

What factors must be assessed when critically appraising quantitative studies (e.g., validity, reliability, and applicability)? Which is the most important? Why?

You must proofread your paper, and you should not rely solely on your computer’s spell-checker and grammar-checker; doing so indicates a lack of effort on your part, and you can expect your grade to suffer accordingly. Papers with numerous misspelled words and grammatical mistakes will be penalized. Read over your paper – in silence and then aloud – before handing it in, and make corrections as necessary. Often it is advantageous to have a friend proofread your paper for obvious errors. Handwritten corrections are preferable to uncorrected mistakes.

Use a standard 10 to 12 point (10 to 12 characters per inch) typeface. Smaller or compressed type and papers with small margins or single-spacing are hard to read. It is better to let your essay run over the recommended number of pages than to try to compress it into fewer pages.

Likewise, large type, large margins, large indentations, triple-spacing, increased leading (space between lines), increased kerning (space between letters), and any other such attempts at “padding” to increase the length of a paper are unacceptable, wasteful of trees, and will not fool your professor.

The paper must be neatly formatted, double-spaced with a one-inch margin on the top, bottom, and sides of each page. When submitting hard copy, be sure to use white paper and print out using dark ink. If it is hard to read your essay, it will also be hard to follow your argument.

 

MORE INFO 

Critically Appraising Quantitative Studies

Introduction

If you had to write a research paper that looked at the effects of an intervention on breast cancer, would you use a qualitative or a quantitative study? The answer depends on what your aims are, and probably also on who your audience is. So let’s take a look at the questions to ask when appraising a quantitative study, and how the answers can feed into an informed decision.

What is the research question?

  • What is the research question?

  • What is the study design? Is it appropriate to answer the question?

  • Is there a hypothesis and what is it?

  • How were participants recruited and does this satisfy inclusion/exclusion criteria (Are participants representative of target population)?

What is the study design? Is it appropriate to answer the question?

The design of your study is important. It will affect how you answer your research question, and if it doesn’t address the question in an appropriate way, then you might not get the answers you need to make an informed decision.

A good design should have, at a minimum, the following characteristics:

  • The sample must be large enough to give the study adequate statistical power; a power calculation should justify the chosen sample size.

  • There must be at least two groups of participants (one experimental and one control) so that differences between them can be detected by statistical analysis.
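Rather than a fixed minimum n, statistical power should drive the sample size. As a rough, hypothetical illustration (the function name and default values here are my own, not from the text), the normal-approximation formula for comparing two group means can be sketched in Python:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect, sd, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a mean
    difference `effect` when scores have standard deviation `sd`,
    using the normal approximation n = 2 * ((z_a + z_b) * sd / effect)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value (~1.96)
    z_b = NormalDist().inv_cdf(power)          # power quantile (~0.84 for 80%)
    return math.ceil(2 * ((z_a + z_b) * sd / effect) ** 2)

# Detecting a 5-point difference when SD = 10 needs roughly 63 per group
n = sample_size_per_group(effect=5, sd=10)
```

Note how the required n grows quickly as the expected effect shrinks; this is why "at least two groups" alone is not enough without a justified sample size.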

Is there a hypothesis and what is it?

The hypothesis is a statement of what the researcher expects to find, and should be specific, measurable and testable. It should also be stated before the study is conducted. If a precise directional prediction is not feasible, a non-directional hypothesis may be stated instead, but it must still be testable rather than vague (e.g., “We expect our participants’ scores on this task will improve after using our training program” is specific enough to be tested).


How were participants recruited and does this satisfy inclusion/exclusion criteria? (Are participants representative of target population?)

The second critical step is to determine whether the participants are representative of the target population. In other words, do they match the population your research question is actually about?

Here’s how you can tell:

  • Make sure that your study sample includes people whose characteristics (e.g., gender, age) mirror those of the target population. This helps the results generalize across the whole population rather than just one particular subgroup within it.

  • Be clear about which population your inclusion and exclusion criteria actually define, especially when your topic overlaps with similar studies. If your sample mixes distinct subgroups (e.g., women with eating disorders in general and those recently diagnosed but not yet treated), it can be difficult to draw conclusions from the data, because groups that should be analyzed separately get lumped together: one representing a broader population and another representing a subset within it.

Who collected the data, and was it objective or subjective?

  • Objective data are collected through instruments, records, or standardized measurements that do not depend on anyone’s judgment (e.g., laboratory values or government registries).

  • Subjective data depend on personal report or judgment (e.g., self-rated pain, or observations scored by the researcher).

Both types of data can be useful in certain situations, but they have different strengths and weaknesses. Objective data tend to be more reliable because they are less influenced by the researcher’s own beliefs or biases; however, an objective measure may not exist for the construct you need (for example, a participant’s experience of pain). Subjective data capture what people think and feel, which no outside source can verify directly, but they are more vulnerable to recall bias and social-desirability bias.

Was there any selection bias?

Selection bias is a systematic error that occurs when the sample of study participants is not representative of the target population. This can happen when researchers choose their own participants, or simply base their research on convenience samples.

In any type of survey or interview, there are two broad sources of error: sampling error and non-sampling error. Sampling error arises because you collect data from only a subset of the population rather than every member; it can be reduced by random sampling methods (discussed later) and by larger samples. Non-sampling error arises from how the study itself is run, for example a sampling method that does not suit the study’s purpose, a flawed sampling frame, or questions that push respondents toward particular answers. Unlike sampling error, non-sampling error is not reduced simply by enlarging the sample.
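One concrete safeguard against selection bias is simple random sampling from a defined sampling frame, so every member has an equal chance of selection. A minimal sketch (the participant IDs here are hypothetical):

```python
import random

def draw_random_sample(population, n, seed=None):
    """Simple random sample without replacement: every member of the
    sampling frame has an equal chance of selection, which guards
    against the researcher hand-picking convenient participants."""
    rng = random.Random(seed)  # seeding makes the draw reproducible
    return rng.sample(list(population), n)

# Hypothetical sampling frame of 500 eligible participant IDs
frame = [f"P{i:03d}" for i in range(1, 501)]
sample = draw_random_sample(frame, 50, seed=42)
```

A fixed seed makes the selection auditable, which is useful when documenting recruitment in a methods section.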

Is there potential for investigator bias, e.g., Hawthorne effect?

The Hawthorne effect is a change in behavior due to the knowledge that you are being observed. For example, people may work harder if they know they are being watched, or improve their performance when working for an organization with high expectations.

The Hawthorne effect can be reduced, though rarely eliminated entirely, by a double-blind study design in which neither the participants nor the investigator knows which group each participant belongs to.

Were any screening instruments or other data collection tools used and were they validated?

Validation is the process of establishing that a measure is reliable (it produces consistent results) and valid (it measures what it claims to measure). It is commonly done by comparing scores against an instrument that has already been validated or against a gold-standard criterion, and by checking the internal consistency of a tool’s items (e.g., Cronbach’s alpha for a multi-item questionnaire). If a study relies on an unvalidated, home-grown instrument, treat its results with caution.
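Internal consistency is one reliability check that can be computed directly from item-level scores. A minimal sketch of Cronbach’s alpha, assuming complete data (this helper is illustrative, not from any particular library):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    `items` is a list of columns, one per questionnaire item,
    each holding one score per respondent (all the same length)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var = sum(pvariance(col) for col in items)   # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Two perfectly correlated items give alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, though the threshold depends on the use of the instrument.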

Why did participants leave the study – what is the dropout rate, if there was one?

The dropout (attrition) rate is the proportion of participants who stopped participating before the study ended. It is usually reported as a percentage of the number who began the study (e.g., 3%), though some papers report the absolute count of dropouts instead.

The dropout rate is calculated by dividing the number of dropouts by the number of participants who started the study, then multiplying by 100%. For example, if 120 participants enrolled and 102 completed the study, the dropout rate is (120 − 102) / 120 × 100% = 15%.
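The same calculation as a small helper function (names are illustrative):

```python
def dropout_rate(n_started, n_completed):
    """Percentage of participants who left before the study ended."""
    dropouts = n_started - n_completed
    return 100.0 * dropouts / n_started

# 120 enrolled, 102 completed: 18 dropouts, i.e. a 15% dropout rate
rate = dropout_rate(120, 102)  # 15.0
```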

Are any data missing – how will they be handled in analysis?

In a quantitative study, it is important to consider the possibility of missing data. Missing data can be handled by imputation or deletion.

Imputation replaces each missing value with a plausible estimate. The simplest form substitutes the mean (or another summary) of the observed values for that variable; a better approach is multiple imputation, which repeatedly draws plausible values from a model of the other variables (e.g., predicting missing ages from gender and other covariates) and combines the resulting analyses, so that the uncertainty introduced by the missingness is carried through to the final estimates.

Deletion simply means removing the affected observations from your dataset altogether. This may seem like an extreme option, but it can be reasonable when only a few values are missing; it becomes problematic when missingness is heavy (for example, if more than 25% of students did not complete their surveys), because discarding that many records shrinks the sample and can bias the results.
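A minimal sketch of both strategies, using `None` to mark missing values (mean imputation here is the simplest form; real studies often prefer multiple imputation, as noted above):

```python
from statistics import mean

def mean_impute(values):
    """Replace missing values (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

def listwise_delete(rows):
    """Drop any record (row) that contains a missing value."""
    return [row for row in rows if None not in row]

ages = [21, None, 25, 30, None]
imputed = mean_impute(ages)  # the two missing ages become mean(21, 25, 30)
complete = listwise_delete([(21, 1), (None, 0), (25, 1)])  # middle row dropped
```

Note the trade-off the text describes: imputation keeps all five records but understates variability, while deletion keeps only fully observed records.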

Conclusion

Both qualitative and quantitative approaches can be valuable, depending on the nature of your research question: qualitative methods often bring new insights into complex questions, while quantitative methods allow findings to be measured and statistically tested. Whichever approach a study takes, its findings are only as trustworthy as its methods, so appraise each study’s validity, reliability, and applicability using the questions above before relying on its results.
