NURS 6431: Week 7: Discussion: A Critique of Evaluation Methodology Plans
Imagine you have recently started a new job as the first nurse informaticist to be employed in a large community clinic. To prepare you for your duties, you receive the following job description:
Your primary responsibility will be the implementation of health informatics technology systems and all necessary support processes. Your supplemental responsibilities will include developing systems to meet stakeholder desires, planning personnel training, and maintaining current systems. In addition, you will be responsible for implementing and maintaining quality initiatives. All of these duties should be completed in a timely fashion and within budget.
On the first day of work, you are shown to your cubicle, and no further instructions are given. When you attempt to obtain further information, management states that you are in a new position and that they are still unclear about your role. How would you feel? Where would you start? Think of the details not included, such as what systems are currently in use, who the key stakeholders are, the budget, and current plans, and consider how their absence influences your ability to be effective as a nurse informaticist.
The situation above indicates a lack of planning on the health care organization’s part. The organization’s leadership decided that hiring a nurse informaticist would be useful but lacked a clearly defined methodology for integrating the field of informatics into the organization. This lack of methodology led to a rushed, expensive, and poorly understood hiring and onboarding process. Had the leadership developed a clear methodology, they could have minimized waste and improved understanding.
The same clarity is essential when designing the actual methodology for implementing an evaluation plan. All details of the evaluation procedure should be carefully identified, and the evaluation methodology should be written in unambiguous language. Someone unfamiliar with the project or process should be able to gain a clear picture of what the evaluation addresses and how it will be conducted simply from reading the plan. This week, you consider evaluation methodology planning and what is involved in creating a plan that is thorough and focused.
Learning Objectives
Students will:
- Analyze the characteristics of strong and weak evaluation methodologies
- Analyze the process of developing an evaluation methodology plan from a PICO question
- Create an evaluation methodology plan for a PICO question*
* The Assignment related to this Learning Objective is introduced this week and submitted in Week 8.
Learning Resources
Required Readings
Friedman, C. P., & Wyatt, J. C. (2010). Evaluation methods in biomedical informatics (2nd ed.). New York, NY: Springer Science+Business Media, Inc.
- Chapter 4, “The Structure of Objectivist Studies” (pp. 85–112)
This chapter examines the key concepts in relation to the design of studies and the measurement of results. It includes definitions for fundamental terms, a discussion on the levels of measurement, and a description of the different types of study designs.
- Chapter 9, “Subjectivist Approaches to Evaluation” (pp. 248–266)
This chapter introduces the subjectivist approach to evaluation and highlights the key ways it differs from an objectivist approach. The chapter also examines the premises upon which this type of study is based, and how qualitative data are recorded and analyzed.
Centers for Disease Control and Prevention. (n.d.). Evaluation planning: What is it and how do you do it? Retrieved from http://www.cdc.gov/healthcommunication/research/evaluationplanning.pdf
This document provides a brief overview of planning an evaluation, including the different types of evaluations and the components needed in developing the evaluation methodology.
Stroud, S., & Gansauer, L. (n.d.). Nursing evidence-based nursing practice tool kit: Practice, evidence, and translation process. Spartanburg Regional Health Care System.
This paper provides guidelines for conducting an evaluation. It highlights the different phases of conducting an evaluation and the steps included in each phase.
Discussion: A Critique of Evaluation Methodology Plans
Developing a relevant PICO question that accurately addresses the goal of an evaluation and then locating the most current information on the topic are both key steps in the evaluation process; however, of equal or greater importance is the development of the methodology to gather the data that will answer the PICO question. This is where the evaluator must determine the “who,” “what,” “when,” “where,” and “how” of the evaluation. The evaluation methodology outlines the specific steps that will be taken to complete the evaluation. Who will be involved? What sort of research design should be used? Where is the evaluation taking place? How much time will the evaluation require, and how many participants are needed? How will the evaluation be conducted? It is imperative that the evaluator take the time to make sure the methodology plan is clear, specific, and thorough.
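As a concrete illustration of pinning down the “who,” “what,” “when,” “where,” and “how,” the sketch below captures a methodology plan as a simple checklist structure. This is a minimal sketch in Python; the class name, fields, and example values are hypothetical and are not drawn from the course materials.

```python
from dataclasses import dataclass

@dataclass
class EvaluationMethodologyPlan:
    """Hypothetical checklist for the who/what/when/where/how of an evaluation."""
    pico_question: str       # the question the evaluation must answer
    participants: list[str]  # who: roles and target sample sizes
    design: str              # what: e.g., pre/post survey, time-motion study
    setting: str             # where: the unit, clinic, or system under study
    timeline_weeks: int      # when and how long: duration of data collection
    procedures: list[str]    # how: the concrete data-collection steps

    def missing_elements(self) -> list[str]:
        """Flag elements left empty so gaps surface before the evaluation starts."""
        return [name for name, value in vars(self).items() if not value]

# Example: a plan for evaluating training on a new documentation system.
plan = EvaluationMethodologyPlan(
    pico_question="Does structured training reduce nursing documentation time?",
    participants=["20 staff nurses", "2 nurse educators"],
    design="pre/post time-motion comparison",
    setting="medical-surgical unit",
    timeline_weeks=8,
    procedures=["baseline timing", "training sessions", "follow-up timing at 4 weeks"],
)
print(plan.missing_elements())  # [] -> no element left blank
```

Writing the plan down in a structure like this makes omissions visible: a reader unfamiliar with the project can see at a glance what is covered and what is still vague.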
In this Discussion, you critique a series of poorly constructed evaluation methodology plans, identify areas of weakness, and recommend how they can be improved.
The following scenarios will be used for this Discussion:
Scenario #1: Agnes, the nurse informaticist at a small rural hospital, has been asked to develop an evaluation plan to determine the success of an upcoming training program for the launch of a new computerized nursing documentation system. Agnes has developed the following methodology plan:
“I will speak to participants immediately after the training program to determine the success of the training. They will be asked about the instructor, if the training was a good time length, if there were enough breaks, and if the training location was comfortable. After the implementation, I will ask the physicians and nurses if they like using the new nursing documentation system and how much time it saves them weekly.”
Scenario #2: Maria, a nurse informaticist in a large surgical center, has been asked to develop an evaluation of the implementation of a new Operating Room Management System (ORMS) that includes scheduling, case cart management, and surgical case documentation. Maria has developed the following evaluation methodology plan:
“I will conduct a 30-minute interview with each nurse in the surgical ward to determine his or her impressions of the new ORMS. I will ask them to specify how they log into the system, to detail how often they use it each day, to describe what types of information they utilize, and to provide a detailed list of issues they encounter. I will have the nurses rank 50 different characteristics of the ORMS on a 1 to 100 scale. In addition, I will ask each surgeon to document his or her impressions of the case documentation functions.”
Scenario #3: The CEO of the hospital system in a major metropolitan area is a brusque, hard-to-please individual. Carl, a newly hired nurse informaticist, has been tasked with developing an evaluation to correspond with the implementation of a health analytic system that the CEO has hand-picked. Carl has developed the following evaluation methodology plan:
“I will arrange one morning where groups of three nurses at a time will have a 15-minute, face-to-face meeting with the CEO to both answer his questions and discuss their experiences using the new health analytic system tool. By having this candid dialogue, but without structured questions or parameters, a good overall understanding of the value of the analytic system should be obtained.”
To prepare:
- Review the three evaluation methodology plans outlined within the scenarios above.
- Critique each plan. Is it concrete? Is it specific? What are the strengths? Weaknesses?
- Based on this week’s Learning Resources, recommend at least two changes that would strengthen each plan.
- Search the Walden Library for an example in the literature of an evaluation study with a strong evaluation methodology plan, and assess why you believe it to be strong.
- Consider your own PICO question and the elements that would need to be included in the methodology plan to adequately answer this question.
By Day 3
Post a brief critique of each of the evaluation methodology plans. Describe how each could be strengthened. Briefly summarize the evaluation study you identified in the Walden Library (include the reference in proper APA format), and explain the elements that made you conclude it has a strong methodology component. Describe how you can utilize what you have observed in both the poor and the strong methodology evaluation plans to ensure that you develop an appropriate methodology to answer your PICO question. Outline specific elements that would need to be clearly identified in your evaluation methodology, and explain why they are important to include.
By Day 6
Respond to at least two of your colleagues on two different days using one or more of the following approaches:
- Share an insight from having read your colleagues’ postings, synthesizing the information to provide new perspectives.
- Validate an idea with your own experience and additional research.
- Expand on your colleagues’ postings by providing additional insights or contrasting perspectives based on readings and evidence.
Analyze the characteristics of strong and weak evaluation methodologies
Introduction
If you are an evaluator, it is important to understand the characteristics of strong and weak evaluation methodologies. This understanding will help you choose the best way to conduct your next study and ensure that it meets the needs of your program or intervention.
The evaluation questions should be directly connected to the goals of the program or intervention.
When you design an evaluation, the questions you ask should connect directly to the goals of your program or intervention. Define each goal clearly so that you know which outcomes you want to measure. This requires a clear definition of what each outcome means and how it will be measured, including an understanding of what constitutes an appropriate level of evidence for deciding whether something has been accomplished (e.g., “does this really help students learn?”).
The evaluation design should also specify how it will answer these questions, for example, by asking which students gained more knowledge or skills through activities built around specific themes, such as diversity-awareness or conflict-resolution training sessions offered during lunch periods or after school at schools within your district.
The evaluation should measure outcomes that are related to the goals of the program or intervention.
Outcomes need to be measurable, relevant, specific, and objective. They should also be realistic given what you know about your target population and the resources available for the project.
- Measurable: If you are measuring something like blood pressure, you can make objective determinations of whether someone’s blood pressure is high or low from readings taken with a blood pressure cuff (sphygmomanometer); see the sketch after this list.
- Relevant: An outcome that is not relevant provides no information about how well something is working; it only tells us how well people did in general, without considering why they did well or why they might not do as well again after completing the treatment plans set out by nurses during clinic visits with physicians who specialize in diseases such as type 2 diabetes mellitus.
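To show how a measurable outcome reduces to an explicit, objective rule, here is a minimal sketch using blood pressure. The ≥140/90 mmHg cutoff is one widely used convention for hypertension; a real plan should cite the specific guideline it follows, and the readings below are fabricated.

```python
def is_hypertensive(systolic_mmhg: int, diastolic_mmhg: int) -> bool:
    """Classify a reading against the common >= 140/90 mmHg hypertension cutoff."""
    return systolic_mmhg >= 140 or diastolic_mmhg >= 90

# Fabricated readings: (systolic, diastolic) pairs for three participants.
readings = [(152, 96), (118, 76), (138, 88)]
rate = sum(is_hypertensive(s, d) for s, d in readings) / len(readings)
print(f"Proportion hypertensive: {rate:.0%}")  # 1 of 3 readings -> 33%
```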
The evaluation design should be able to answer the evaluation questions.
The evaluation design should be able to answer the following questions:
- What do you want to measure?
- How will you measure it?
- Who will be involved in this process, and what are their roles and responsibilities?
- What are your goals? What do you hope to accomplish by measuring this particular metric? How will you know if it is working?
- How will you know when to stop measuring? What are the limitations of this metric?
The resources and staff members required for an effective study should be available.
Having the necessary resources and staff means having the right people in place, with the right skills, to complete the evaluation in a timely manner and at a reasonable cost. A good way to ensure this is to have enough of both available, so that if one person leaves or gets sick, it does not cause delays or problems with completing the project on time.
You also need to ensure that the evaluation team is well-trained and knowledgeable about how to conduct an effective study. This means that they should have experience with similar projects and understand what elements are important to focus on, such as data collection and analysis.
Decisions about how to interpret the results of a study should not be made in advance through statistical hypothesis testing.
It is important to understand the limitations of statistical hypothesis testing. These include:
- The precision of your predictions depends on how much information you have about your population and on the assumptions made about that information. A single sample cannot support predictions about every member of the population; additional samples are needed for more precise results. For example, if we want to know how many people will die in a given year but do not know exactly how many people there are or when they will die, hypothesis tests will not work well, because they require precise counts, such as census data, that a small sample cannot supply.
- Statistical hypothesis testing assumes independence between observations, that is, that two observations cannot influence each other, and it may not be valid if this assumption does not hold (for instance, because the observations are correlated).
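To make these limitations concrete, the sketch below runs an independent-samples t-test on fabricated scores. It assumes the standard scipy.stats API; the data and scenario are invented for illustration.

```python
from scipy import stats

# Fabricated knowledge-test scores for trained vs. untrained nurses.
trained = [84, 78, 91, 88, 75, 82, 90, 79]
untrained = [72, 80, 68, 74, 77, 70, 73, 69]

# ttest_ind assumes the two samples are independent. If the same nurses
# were tested before and after training, the observations would be paired
# (correlated), and stats.ttest_rel would be the appropriate test instead.
result = stats.ttest_ind(trained, untrained)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

With only eight scores per group, the estimate is also imprecise, which echoes the first limitation above: small samples support only rough conclusions about the larger population.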
Use this methodical process when deciding on your next evaluation methodology.
For example, if you are evaluating a program for poor families, it is important to use an evaluation methodology that accounts for resources. If the goal of the treatment is to increase participants’ income levels by 10%, an outcome-oriented evaluation can help determine whether that goal has been achieved, as the worked check below shows.
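As a worked check of that 10% criterion, the arithmetic below uses fabricated incomes; the numbers are invented for illustration only.

```python
# Fabricated baseline and follow-up annual incomes for five participants.
baseline = [21000, 18500, 24000, 19800, 22500]
followup = [23500, 20900, 25900, 21400, 24600]

# Percent change in mean income across participants.
mean_change = (sum(followup) / len(followup)) / (sum(baseline) / len(baseline)) - 1
print(f"Mean income change: {mean_change:.1%}")   # ~9.9%
print(f"10% goal met: {mean_change >= 0.10}")     # False: just short of target
```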
Another way of thinking about this is that we often judge our interventions only by whether they work or not; however, we should also look at how well they work and what kind of effect they have on people or communities (i.e., how much impact they have on improving health care outcomes).
Conclusion
The evaluation methodology should reflect the goals of the program or intervention. It should measure outcomes related to those goals and be able to answer the questions being asked. The necessary resources and staff members should be available so the study can be conducted efficiently. Finally, decisions about how to interpret results cannot be made in advance through statistical hypothesis testing alone, because that method is prone to bias and misinterpretation; instead, evaluators must ground their conclusions in theory or research findings before deciding what the results truly represent.