Quasi-Experimental Design

Let's take a look at several different possible outcomes from a NEGD (nonequivalent-groups design) to see how they might be interpreted. The important point here is that each of these outcomes has a different storyline, and some are more susceptible to threats to internal validity than others. Associations identified in quasi-experiments meet some requirements of causality because the intervention precedes the measurement of the outcome, but recall that with the NEGD we are usually most concerned about selection threats. For each outcome, take a good look at the graph and try to figure out how you would explain the results. Each figure shows the group means, with the pre-post means of the program group joined with a blue line and the pre-post means of the comparison group joined with a green one.

The first outcome shows the situation in the two bivariate plots above. The fact that the two groups differed to begin with suggests that they may already be maturing at different rates, and the posttest scores don't do anything to help rule that possibility out. This outcome might also arise from a selection-history threat: perhaps the two groups reacted differently to some historical event; that is, some event occurred (other than the program) that the program group reacted to and the comparison group didn't.

When the comparison group didn't mature (i.e., change) at all, by contrast, it's hard to argue that differential maturation produced the outcome. Maybe a local event occurred for the program group but not for the comparison group; notice how much more likely it is that such an outcome pattern is caused by a history threat than by a maturation difference. Depending on the nature of the measures used, a pattern like this could also indicate a selection-mortality problem if there are more low-scoring program cases that drop out between testings.

The regression scenario is that the program group was selected so that it was extremely high (relative to the population) on the pretest. We might observe an outcome like this when we study the effects of giving a scholarship or an award for academic performance: we give the award because students did well (in this case, on the pretest), so when we observe their posttest performance relative to an "average" group of students, their scores tend to slide back toward the population mean. This is an extreme and overly simplistic example, but regression can just as easily work in the other direction. When the program group is disadvantaged to begin with, the fact that it appears to pull closer to the comparison group on the posttest may be due to regression rather than to the program; for regression to explain the gain, the program group would have to be below the overall population pretest average. This outcome pattern may be suspected in studies of compensatory programs -- programs designed to help address some problem or deficiency. Compensatory education programs, for instance, are designed to help children who are doing poorly in some subject, and those children are likely to have lower pretest performance than more average comparison children. In both cases, you wouldn't want to subject your program to that kind of expectation.

The clearest pattern of evidence for the effectiveness of the program, of all five hypothetical outcomes, is the crossover: the program group starts out lower than the comparison group and ends up above it.
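To make these outcome graphs concrete, here is a minimal plotting sketch, assuming Python with matplotlib and entirely made-up group means (the numbers are illustrative, not data from any study). It draws one hypothetical pattern, the crossover in which the program group starts below the comparison group and ends up above it, using the same color convention as the figures described above.

```python
import matplotlib.pyplot as plt

# Hypothetical (invented) pre-post means for a NEGD outcome graph.
# Crossover pattern: program group starts below the comparison group
# and ends up above it.
timepoints = ["Pretest", "Posttest"]
program_means = [42.0, 61.0]      # program group means (illustrative only)
comparison_means = [50.0, 52.0]   # comparison group means (illustrative only)

# Program group joined with a blue line, comparison group with a green one.
plt.plot(timepoints, program_means, color="blue", marker="o", label="Program group")
plt.plot(timepoints, comparison_means, color="green", marker="o", label="Comparison group")
plt.ylabel("Mean outcome score")
plt.title("Hypothetical NEGD outcome: crossover pattern")
plt.legend()
plt.show()
```

Swapping in different pre and post means reproduces the other outcome patterns discussed above, which makes it easy to see how each storyline would look on the same axes.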
Here, I'll briefly present a number of the more interesting or important quasi-experimental designs. These designs are frequently used when it is not logistically feasible or not ethical to conduct a randomized, controlled trial; when a variable in a research design is not controlled, a true experimental design cannot be employed. The randomized, controlled trial remains the design of choice for determining efficacy, while quasi-experimental designs are often used to assess effectiveness. Even with a comparison group, the typical issues surface. Indeed, the biggest weakness of quasi-experimental designs may also indicate their greatest strength: a broader scope for the research design. Quasi-experimental research can provide the educational community with a variety of models that have been shown to be effective, a goal emphasized by legislation such as the No Child Left Behind Act, and it builds on early investigations of validity dating back to the turn of the century.

One of the simpler options is the proxy-pretest design. Simply stated, imagine that you are studying the effects of an educational program on the math performance of eighth graders, and that you were brought in to do the study after the program had already been started (a too-frequent case, I'm afraid). You can check in with the students and see how they solve math problems now, so you are able to construct a posttest that shows math ability after the training, but you have no pretest. The situation cries out for a pretest, or at least it would if it could cry out. One workaround is a recollection proxy pretest, in which you ask participants to estimate where they stood before the program began. This type of proxy pretest is not very good for estimating actual pre-post changes, because people may forget where they were at some prior time or may distort their pretest estimates to make themselves look better.

Another option uses separate pre and post samples. Take a close look at the design notation for the first variation of this design. Suppose the program you are looking at is an agency-wide one and you expect that the outcomes will be most noticeable at the agency level; you want to implement your study in one agency and use the other as a control. You measure customer satisfaction in each agency at one point in time, implement your program in the study agency, and then measure customer satisfaction in each agency again afterward. Notice that the customers will be different within each agency for the pretest and the posttest, so when the results are analyzed you always run the risk that you have nonequivalence not only between the agencies but also, within each agency, between the pre and post groups.

For typical community-based research, and in fields such as education, program costs preclude implementing the program in more than one community. In that situation we would compare the pre-post results for the intervention community with a large set of other communities. The advantage of doing this is that we don't rely on a single nonequivalent community.

Finally, the Nonequivalent Dependent Variables (NEDV) design opens the way to an entirely different approach to causal assessment, one that is closely linked to detailed prior explication of the program and to detailed mapping of constructs. The graph we'll use is called a "ladder graph" because, if there is a correspondence between expectations and observed results, we'll get horizontal lines and a figure that looks a bit like a ladder.
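As a rough sketch of the ladder-graph idea, again assuming Python with matplotlib and invented outcome names and rankings (not the author's actual data or figure), the code below connects each outcome variable's expected-impact rank on the left to its observed-gain rank on the right. Where expectations and observations correspond, the rungs come out horizontal and the figure looks like a ladder.

```python
import matplotlib.pyplot as plt

# Hypothetical outcome variables with invented expected and observed ranks.
# Expected rank comes from prior theory about which outcomes the program
# should affect most (1 = largest expected effect); observed rank orders
# the measured pre-post gains.
outcomes = ["Algebra", "Geometry", "Arithmetic", "Reading", "Spelling"]
expected_rank = [1, 2, 3, 4, 5]
observed_rank = [1, 2, 4, 3, 5]

fig, ax = plt.subplots()
for name, exp_r, obs_r in zip(outcomes, expected_rank, observed_rank):
    # One "rung" per outcome: left end = expectation, right end = observation.
    ax.plot([0, 1], [exp_r, obs_r], marker="o", color="gray")
    ax.annotate(name, (0, exp_r), xytext=(-8, 0),
                textcoords="offset points", ha="right", va="center")

ax.set_xlim(-0.6, 1.2)
ax.set_xticks([0, 1])
ax.set_xticklabels(["Expected rank", "Observed rank"])
ax.invert_yaxis()  # put rank 1 (largest expected effect) at the top
ax.set_title("Ladder graph sketch: expectations vs. observations (hypothetical)")
plt.show()
```

The closer the observed ranking tracks the expected one (for example, a high rank-order correlation), the flatter the rungs, and that correspondence between theory-based expectations and observed results is what the pattern-matching logic of the NEDV design treats as evidence for the program.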