Types of Program Evaluation
by Darryl Upfold and Nigel Turner

"Program evaluation" is a general term. Programs can conduct different types of evaluations, each dependent on the stage of development the program is in: whether the program is new and just being planned, newly operational, or well established. Ideally, a program would conduct the appropriate kind of evaluation at each stage of the program's development before proceeding to the next evaluation.

All types of program evaluation require data (information) to answer the evaluation questions that are relevant to the evaluation. There are a variety of data collection methods. Selecting a particular data collection method depends on several factors, such as the type of evaluation being conducted, the evaluation questions, and the amount of human and fiscal resources available. Data collection methods are presented later in this section.

Needs Assessment Evaluation

Needs Assessment is a type of evaluation that is conducted before a program is designed (or re-designed). A Needs Assessment evaluation is conducted to gather information to determine whether the proposed service is actually necessary, and how the service should be designed.
There is no one standard list of questions that is considered in doing a Needs Assessment evaluation, but the following questions are typically considered.

Types of Questions When Planning a New Service

Is there a need for this kind of service? Are there other services that are the same or similar? What are the characteristics of the clientele for whom the service is being designed? What is the estimated size of the "target" population? What is the estimated demand for and capacity of the service being designed? Are there particular barriers that the "target" population is likely to encounter in using the service? What staff and facilities are required? Is there a "best practice" literature in this area?

Needs Assessment evaluation can also be conducted by an existing service to answer the following questions.

Types of Questions When Reviewing an Existing Service

Is the service still relevant to the "target" group? Has the demand for service increased? Have the characteristics of the "target" group changed? Have other "target" groups been identified?

The issues and questions considered in Needs Assessment evaluation do not address what happens during a program, nor does the evaluation attempt to measure outcomes. Little statistical analysis is required for a Needs Assessment evaluation.

Process Evaluation

This type of evaluation is conducted during the early stages of a program's development to ensure that the program is operating as planned, i.e., meeting its implementation objectives. This includes looking at who is using the service, the type and amount of service activities being provided, and the demand for and capacity of the service.

Types of Questions for Process Evaluation

The following are examples of the types of evaluation questions that could be derived from implementation objectives. How similar are the participants to those anticipated when the service was designed?
How many direct service hours on average did each participant receive? How many clients received each component of the program? Are the services being delivered similar to those that were planned?

Process Evaluation focuses on describing the clientele and the services being provided. The information collected is reported using statistics such as percentages and averages. Some of the statistical analysis typically done in Process Evaluation can be used to modify the program if necessary. For example, if the program was designed for early-stage problem gamblers, scores on screening instruments administered at assessment can be used to determine whether the program is attracting this clientele. If scores indicate that a large percentage of the clientele are later-stage problem gamblers, then program staff need to consider options, for example: modify the program so that it is more consistent with the clientele it is attracting; work with other agencies to help them improve their "case finding" and referral practices for early-stage problem gamblers; or advertise to the general public to attract early-stage gamblers.

Outcome Evaluation

Outcome Evaluation is conducted when the program has been in operation long enough to ensure that the program is being delivered as planned; that is, following Process Evaluation. The purpose is to determine whether participants have changed as a result of participating in the program. To determine this, outcome indicators and an evaluation design are identified.

Outcome Indicators

Outcome indicators, the information (data) used to measure change, are also referred to as "outcome measures" or "outcome variables." There is no one standard list of outcome indicators; indicators are directly related to the specific goals and objectives of the program, and are implied in the outcome objectives on the logic model. The choice of indicators depends on the specific goals of a program.
If the primary goal is concerned with the gambling behaviour itself, then outcome indicators could be the amount of money lost gambling, the number of gambling occasions in the past month, or scores on the SOGS (Lesieur & Blume, 1987). If the goal is to improve the person's quality of life, then measures of the person's stress level or self-esteem could be outcome indicators. Generally, multiple measures are better than single indicators. As can be seen in the logic model (Figure 1), different components of the program might have different goals and therefore different outcome indicators. These indicators may change over time as the service changes.

Design of the Outcome Evaluation

There are several methods, or designs, that can be used to assess the effect of a treatment program. All designs involve the same procedures of identifying outcome indicators, collecting outcome indicator data and analyzing the data.

Controlled Designs: In many clinical research studies, treatment participants are compared to a control group that does not receive the treatment. This design offers the best way of measuring the true effectiveness of a new treatment. The main difficulty is determining the most appropriate control sample. Some treatment studies use a wait-list group as a control, while other studies use a second form of treatment as the control group. The main problem with the wait-list method is that it may be unethical to keep a "waiting list" control sample from treatment. Furthermore, those on the wait list may seek other forms of treatment while waiting. The problem with using another treatment group as a control is that if no difference is found between the two treatment methods, does it mean that both treatment methods are ineffective or that both are equally effective?
Another problem with these designs is that the treatment received during a research study is often different from the normal treatment service, and is supervised by a doctor or researcher, so generalizing the results to the normal treatment service is sometimes questionable. Another means of conducting a controlled study is to add a new treatment option on top of an existing program. For example, the design might compare the following two groups: (1) general treatment versus (2) general treatment plus the new option. This is still a controlled design, but it avoids the ethical and interpretation problems of a wait-list control. Nonetheless, if a controlled design is possible, it is the most powerful means of testing the effectiveness of a particular service.

Controlled designs are very frequently used in the evaluation of prevention programs. School prevention studies, for example, often use two groups of classes that are randomly assigned to control (no treatment) or experimental (treatment) conditions. Public awareness campaigns can also be evaluated using this powerful methodological technique. In one recent study, households in some postal districts were sent a pamphlet on responsible drinking, while in other, control, districts no pamphlet was sent. A follow-up telephone interview measured differences between the two groups.

Following Pre-determined Clients: In this design a predetermined number of clients are followed through the treatment process. One of the benefits of this design is that there is a good chance of recontacting the client at follow-up, because his or her agreement to participate in the evaluation would have been sought during the intake process. These studies often involve contacting clients several times over the course of treatment and follow-up. They can produce a large volume of information on the clients, including both quantitative and qualitative outcome measures.
A problem with this type of design is that some clients drop out of the study over time, so the sample becomes less and less representative, because specific types of clients may drop out more often than others. In addition, the follow-up surveys themselves might produce a change in the person's behaviour, so that being in the study may affect the outcome, making the results ungeneralizable to the standard treatment.

Selecting Clients Randomly: In another type of design, the researcher randomly selects a certain proportion of clients for follow-up. In this design the clients are usually contacted for only one follow-up. A benefit of randomly selecting clients at follow-up is that the follow-up sample is more likely to include all personality types, gambling types, levels of problem severity and levels of treatment success. A difficulty of this type of outcome study is that it takes a tenacious follow-up worker to reconnect with the clients and achieve an adequate follow-up rate (Christner). In either case, a follow-up that includes both completers and non-completers will help answer questions about the reasons for client dropout, the quality of services being offered and the extent to which the treatment is meeting the goals of the clients.
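The "selecting clients randomly" design described above reduces to two small computations: drawing a random follow-up sample from the client roster, and summarizing change on an outcome indicator for those recontacted. The following is a minimal Python sketch; the client IDs, sampling proportion, and screening scores are entirely hypothetical, and the function names are our own illustration, not taken from the article.

```python
import random

def select_followup_sample(client_ids, proportion, seed=0):
    """Randomly select a proportion of clients for a single follow-up contact."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible for audit
    k = max(1, round(len(client_ids) * proportion))
    return sorted(rng.sample(client_ids, k))

def mean_change(pre_scores, post_scores):
    """Average pre-to-post change on an outcome indicator (e.g. a screening score)."""
    diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical roster of 20 clients; a 25% sample is drawn for follow-up.
clients = list(range(1, 21))
sample = select_followup_sample(clients, 0.25)

# Hypothetical pre/post screening scores for the five clients recontacted.
pre = [9, 11, 8, 12, 10]
post = [4, 6, 5, 7, 5]
print(sample)
print(mean_change(pre, post))  # a negative average means scores dropped (improvement)
```

In practice the same summary would be computed separately for completers and non-completers, since comparing the two groups is what answers the dropout and service-quality questions noted above.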