Argument: Technology improves standardized test scores

Supporting quotes

Studies indicating a negative correlation between computer use and learning may be limited by the particular circumstances and processes under which they were conducted

  • Robert B. Kozma, Ph.D. The Opposition’s opening statement. The Economist Debate Series: Education. October 14, 2007 – “Often single studies—even those that are well-designed—are constrained by the particular context or situation in which they were conducted and this limits the generalizability of their conclusions. Let us take as an example a study conducted in Israeli schools by Angrist and Lavy (2001), which was featured in The Economist several years ago. This study examined the relationship between the use of ‘computer-assisted instruction’ (or CAI, i.e. tutorial software) and test scores in 4th and 8th grade mathematics and Hebrew classes in a random sample of schools that successfully applied to participate in a national program to increase the use of computers in Israeli schools. Scores of students in these schools were compared to those in schools that elected not to participate in the program or were not chosen to do so. Typically, self-selection is a fatal design flaw in research studies but the researchers went to great lengths to statistically equate the two types of schools by including a variety of school, student, and teacher variables in their analyses. They found no evidence that the increased use of tutorials raised pupil test scores; indeed, they found a negative and marginally significant relationship between program participation and 4th grade math scores. However, as in many similar studies, there are important features of this study that limit the results. First, this study is limited to a particular use of computers (tutorials), within specific grades (4th and 8th) and subject areas (math and Hebrew) and within a particular timeframe (after one year of implementation) and a particular country (Israel) with a particular national curriculum. Furthermore, in an analysis of teacher surveys, the researchers found no evidence of differences between participating and non-participating classrooms in inputs, instructional methods, or teacher training. More significant is the fact that even the most active participants (4th grade math teachers) indicated that they used computers somewhere between ‘never’ and ‘sometimes’. Consequently, the study is particularly limited by the marginal nature of the intervention. All of these factors constrain the generalizability of the findings and certainly do not allow the authors to make the general claim, as they do, that ‘CAI is no better and may even be less effective than other teaching methods.’”
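
The “statistical equating” Kozma describes is standard covariate adjustment: the participation indicator is entered into a regression alongside school, student, and teacher variables so that the estimated program effect is not driven by which schools chose to apply. The sketch below illustrates the idea on synthetic data only; it is not the Angrist and Lavy (2001) data or their specification, and every variable name is hypothetical.

```python
# Illustrative sketch of covariate adjustment for self-selection.
# Data are synthetic and purely for illustration; they are NOT the
# Angrist and Lavy (2001) data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Hypothetical covariates: a school socioeconomic index and prior achievement.
ses = rng.normal(0, 1, n)
prior_score = rng.normal(0, 1, n)

# Self-selection: higher-SES schools are more likely to join the program.
participate = (rng.normal(0, 1, n) + 0.8 * ses > 0).astype(float)

# Outcome depends on the covariates; the true program effect is set to zero,
# mirroring the "no evidence" finding described in the quote.
score = 0.6 * ses + 0.5 * prior_score + rng.normal(0, 1, n)

# Naive comparison of participants vs. non-participants is biased upward.
naive_diff = score[participate == 1].mean() - score[participate == 0].mean()

# Regression adjustment: participation dummy plus the covariates.
X = sm.add_constant(np.column_stack([participate, ses, prior_score]))
fit = sm.OLS(score, X).fit()

print(f"naive difference:     {naive_diff: .3f}")
print(f"adjusted effect (b1): {fit.params[1]: .3f}  (p = {fit.pvalues[1]:.3f})")
```

Adjustment of this kind only equates schools on the variables actually measured, which is why Kozma notes that the researchers “went to great lengths” and why self-selection is ordinarily treated as a serious design flaw.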

A broader analysis of the education literature indicates a positive relationship between technology and learning

  • Robert B. Kozma, Ph.D. The Opposition’s opening statement. The Economist Debate Series: Education. October 14, 2007 – “In order to make a general statement about the impact of technology on education, a large number of studies that cover a variety of situations must be included in the analysis. For this, I turn to a meta-analysis (or an analysis of analyses) done in 2003 by James Kulik of the University of Michigan. Kulik included in his statistical analysis the results of 75 carefully designed studies collected from a broad search of the research literature. As a group, these studies looked at several types of educational technology applications (such as tutorials, simulations, and word processors), in a variety of subjects (such as mathematics, natural science, social science, reading and writing), and a range of grade levels (from very young to high school). His findings across studies can be summarized as follows:

    • Students who used computer tutorials in mathematics, natural science, or social science scored significantly higher in these subjects compared to traditional approaches, equivalent to an increase from 50th to 72nd percentile in test scores. Students who used simulation software in science also scored higher, equivalent to a jump from 50th to 66th percentile.
    • Very young students who used computers to write their own stories scored significantly higher on measures of reading skill, equivalent to a boost from 50th to 80th percentile for kindergarteners and from 50th to 66th percentile for first graders. However, the use of tutorials in reading did not make a difference.
    • Students who used word processors or otherwise used the computer for writing scored higher on measures of writing skill, equivalent to a rise from 50th to 62nd percentile.
By including a large and diverse set of studies in the analysis, it is clear that technology can make contributions to the quality of education that are both statistically significant and educationally meaningful. Nonetheless, the classrooms included in this meta-analysis were, by and large, conducted within the traditional educational paradigm and the uses of technology were fairly ordinary. What if advanced technologies were used to ignite a major transformation of the educational system? How much more of a contribution could it make under these circumstances? These are questions to which I will return later in the debate.”
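
The percentile figures Kulik reports are the conventional way of expressing standardized effect sizes: under a normal approximation, an effect size d places the average technology-using student at the Φ(d) percentile of the comparison-group distribution. The short sketch below is an editorial illustration of that conversion, not part of Kulik’s analysis; it simply recovers the effect sizes implied by the percentiles quoted above.

```python
# Convert the quoted percentile gains back into implied effect sizes (Cohen's d),
# assuming the usual normal-approximation mapping: percentile = Phi(d).
from scipy.stats import norm

gains = {
    "tutorials (math/science/social science)": 0.72,
    "science simulations": 0.66,
    "kindergarten story writing": 0.80,
    "first-grade story writing": 0.66,
    "word processing / writing": 0.62,
}

for label, percentile in gains.items():
    d = norm.ppf(percentile)  # implied effect size relative to the 50th percentile
    print(f"{label:42s} 50th -> {int(percentile * 100)}th pct  ~ d = {d:.2f}")
```

By this reading, the quoted gains range from roughly d = 0.3 (writing) to d = 0.8 (kindergarten story writing), which is why Kozma characterizes them as both statistically significant and educationally meaningful.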
  • AZEcon (online debater), on the National Assessment of Educational Progress. The Economist Debate Series: Education. October 19, 2007 – “The meta-analysis summary by the opposition is compelling. I would like to add an anecdotal illustration in support of the opposition. After 20 years as an economics teacher at the high school level, a technological innovation, not possible in the early days, is now available to those economics educators around the world with access to computers. The innovation is called “Virtual Economics” distributed by the National Council on Economic Education in the United States. (Disclaimer: I was not involved in producing this item; merely enthusiastic about its educational value). It is a triumph in terms of assembling in one place on a CD fifty-one cross-referenced core concepts in basic, micro-, macro-, financial, and international economics, with definitions, further research links, and suggested model lessons. The first economics subject tests in the 2006 NAEP program, National Assessment of Educational Progress, suggest that these and other technology-based efforts by the National Council on Economic Education are having positive impacts on student AND teacher learning in this crucial subject area.”