
Selecting the Right Curriculum Brief


One of the most critical decisions educational leaders make is the selection of a mathematics curriculum. Most mathematics teachers rely on curriculum materials as their primary tool for teaching mathematics (Grouws, Smith, and Sztajn 2004). If a topic is not included in the curriculum materials they use, there is a good chance that teachers will not cover it and students will not learn it. Others argue that how material is presented in curricula—that is, the pedagogical approach through which students are expected to learn the content—is of equal importance to what topics are covered (Project 2061 n.d.). Finally, in today’s policy environment, leaders cannot ignore either the amount of professional development that teachers will need in order to implement the curriculum well or the research evidence on the effectiveness of curricula in producing student learning.

Content Coverage 

Content analysts typically compare selected curriculum materials against a set of external criteria to determine whether important topics, concepts, and skills are covered and whether the sequencing is sensible. Analyses in the United States typically use standards, frameworks, or other countries’ curricula as their external criteria (National Research Council [NRC] 2004). Over the past decade, the external criteria used by researchers to analyze curriculum materials have varied widely, reflecting the range of priorities and values held by the individuals who have conducted them. Not surprisingly, these variations have produced different results. In its review of 36 content analyses, the NRC (2004) found that the ratings of many curricular programs vacillated from strong to weak depending on the criteria that were used. For example, standards-based curricula fared well under the American Association for the Advancement of Science’s criteria, which stress the importance of developing deep conceptual understanding of a relatively small number of “big ideas” (Project 2061 n.d.). Conventional curricula, on the other hand, did better under Mathematically Correct’s rating system, which focuses on mastery of the mathematical core, the essential elements of knowledge and skills deemed necessary for students to succeed in advanced mathematics (Mathematically Correct n.d.). These discrepancies make clear the need for transparency regarding the criteria used for content reviews of curriculum materials. They also point to the need for decision makers themselves to be clear about what they value in the way of student outcomes and then to select a review that reflects their values as closely as possible.

How Content Is Presented 

Most curricula (standards-based and conventional) intend for students to learn concepts, skills, applications, problem solving, and efficient procedures. They differ, however, with regard to the order and manner in which these elements are presented, the balance that is struck among them, and their overall organizational style. For example, conventional curricula tend to rely on direct explication of the to-be-learned material as well as careful sequencing and the accumulation of lower-level skills before presenting students with the opportunity to engage in higher-order thinking, reasoning, and problem solving with those skills. In contrast, standards-based materials rarely explicate concepts for students; rather, they rely on students’ engagement with well-designed tasks to expose them to the concepts. After a concept has been introduced and its features explored by students, the curriculum and the teacher step in to apply definitions, standard labels, and standard procedural techniques.

Similar to the analyses of content coverage, reviews of how content is presented employ widely varying criteria.  In this instance, these criteria reflect reviewers’ (often implicit) judgments regarding the nature of mathematics and how students learn it. If the reviewer believes that mathematics is best learned through student construction based on active exploration, the standards-based curricula will be rated more highly; however, if the reviewer believes that mathematics is best learned through direct instruction and skills practice, then the conventional curricula will fare better.

Research Evidence on Implementation 

Implementing any new curriculum poses learning challenges for teachers. However, research suggests that curricular tasks that focus on specified skills and procedures (as do most conventional curricula) are less challenging for teachers to learn to implement well than are curricular tasks that demand that students think, reason, and solve problems (tasks often found in standards-based curricula) (Stein, Grover, and Henningsen 1996). Using standards-based curricula in ways that unlock the potential their developers envisioned requires considerable knowledge and time on the part of the teacher, as well as support through professional development. Leaders will want to factor this increased learning demand into their decision making regarding curricular adoption.

Research Evidence on Effectiveness 

Typically, conventional curricula have not been subjected to effectiveness studies. Developers of most standards-based curricula (those supported by the National Science Foundation [NSF]), however, have been required to conduct both formative and summative evaluations of their materials (NSF 1989). The summative evaluations, many of which are reported in a volume edited by Senk and Thompson (2003) entitled Standards-Based School Mathematics Curricula: What Are They? What Do Students Learn?, were, on the whole, promising. Students who used standards-based curricula generally performed as well as other students on traditional measures of mathematics achievement and did better on assessments of conceptual understanding and the ability to use mathematics to solve problems.

The studies reported in the Senk and Thompson volume might be best understood as “proof of concept” studies, indicating that the standards-based curricula “are working in classrooms in ways their designers intended for them to work” (Kilpatrick 2003, p. 472). It should also be noted, however, that these studies did not satisfy skeptics, who demanded that independent researchers examine students’ learning using more rigorous methodologies. With this controversy swirling, the NRC was charged with examining the “state of the evidence” with respect to standards-based versus conventional curricula.

The panel’s conclusion was that “the corpus of evaluation studies . . . does not permit one to determine the effectiveness of individual programs with a high degree of certainty, due to the restricted number of studies for any particular curriculum, limitations in the array of methods used, and the uneven quality of the studies” (NRC 2004, p. 3). The report goes on to caution that this inconclusive finding should not be interpreted to mean that these curricula are ineffective; rather, problems with the data or study designs prevented the panel from making confident judgments about their effectiveness.
 
According to many scientists, incontrovertible evidence of curricular effectiveness can be provided only by studies that randomly assign students to standards-based versus conventional curricula, the so-called “gold standard” of the randomized controlled trial (Mosteller and Boruch 2002; NRC 2002). Using this standard, the What Works Clearinghouse (WWC) collects, screens, identifies, and evaluates studies of educational interventions, including mathematics curricula (reports are currently posted on elementary and middle school mathematics curricula). Readers are referred to http://www.whatworks.ed.gov for updated information from this government group. Although the rigor demanded for WWC endorsement is an important consideration in judging effectiveness, other models for curricular evaluation—models that include attention to program theory and implementation as well as student outcomes—have been proposed by the National Research Council (Confrey 2006). If and when such evaluations become available, leaders will have a more comprehensive set of information to guide their decision making.

Turning to studies that do not meet the gold standard of the randomized controlled trial, one finds less-certain evidence for any one specific curriculum but nevertheless interesting patterns across findings from several large-scale studies that compared achievement in classrooms using a variety of standards-based curricula with achievement in classrooms using conventional curricula. Despite the differences in grades and curricula studied and the methodologies employed, many of these studies have produced fairly consistent findings. The first is that students taught using standards-based curricula, compared with those taught using more conventional curricula, generally exhibited greater conceptual understanding and performed at higher levels with respect to problem solving (e.g., Boaler 1997; Huntley et al. 2000; Thompson and Senk 2001). Second, these gains did not appear to come at the expense of those aspects of mathematics measured on more traditional standardized tests. Compared with students taught using conventional curricula, students who were taught using standards-based curricula generally performed at approximately the same level on standardized tests that assess mathematical skills and procedures (e.g., Riordan and Noyce 2001; Thompson and Senk 2001). The differences that did occur were usually not significant; some showed the “standards-based curriculum” students doing slightly better, whereas others showed the “conventional curriculum” students doing slightly better. For example, students in the Core-Plus Mathematics Project (CPMP) (a standards-based curriculum) outperformed others on tests of algebraic concepts set in real-world contexts, but students taught using more traditional texts outperformed those in CPMP on tests of algebraic skills posed without context and without the use of calculators (Huntley et al. 2000). Unsurprisingly, students tend to do well on tests that match the instructional approaches with which they have been taught.
 
Substantively, it is striking to note the similarity between the pattern of findings reported in the Senk and Thompson (2003) volume and the findings of the larger-scale comparative studies conducted by external reviewers. In both instances, students taught using standards-based curricula tended to hold their own on tests of computational skills and to outperform students taught with conventional curricula on tests of thinking, reasoning, and conceptual understanding. This pattern of findings—not the findings of any one study—has prompted some to point to the overall efficacy of standards-based curricula (e.g., Schoenfeld 2002).
 
But, efficacy for what? It is important to note that students tended to perform best on tests that aligned with the approaches by which they had been taught, repeating the well-worn finding that students learn what they are taught.  Combined with the findings from the analyses of curriculum materials cited earlier, the research examined here suggests that students taught using conventional curricula can be expected to master computational and symbolic manipulation better, whereas students taught using standards-based curricula can be expected to perform better on problems that demand problem solving, thinking, and reasoning.

By Mary Kay Stein
Judith Reed, Series Editor 

References 

Boaler, Jo. Experiencing School Mathematics: Teaching Styles, Sex, and Setting.  Buckingham, U.K.: Open University Press, 1997.

Confrey, Jere. “Comparing and Contrasting the NRC Report on Evaluating Curricular Effectiveness with the What Works Clearinghouse Approach.”  Educational Evaluation and Policy Analysis 28, no. 3  (2006): 195–213.

Grouws, Douglas A., Margaret S. Smith, and Paola Sztajn. “The Preparation and Teaching Practices of United States Mathematics Teachers: Grades 4 and 8.” In Results and Interpretations of the 1990 through 2000 Mathematics Assessments of the National Assessment of Educational Progress, edited by Peter Kloosterman and Frank K. Lester, Jr., pp. 221–67. Reston, Va.: National Council of Teachers of Mathematics, 2004.

Huntley, Mary A., Chris L. Rasmussen, Roberto S. Villarubi, Jaruwan Sangtong, and James T. Fey. “Effects of Standards-Based Mathematics Education: A Study of the Core-Plus Mathematics Project Algebra and Functions Strand.” Journal for Research in Mathematics Education 31 (May 2000): 328–61.

Kilpatrick, Jeremy.  “What Works?” In Standards-Based School Mathematics Curricula: What Are They? What Do Students Learn?, edited by Sharon L. Senk and Denisse R. Thompson. Mahwah, N.J.: Lawrence Erlbaum Associates, 2003.

Mathematically Correct. “Mathematics Program Reviews for Grades 2, 5, and 7: Summary of Overall Ratings by Publisher.” N.d.  Retrieved January 13, 2006, from http://mathematicallycorrect.com/booksy.htm.

Mosteller, Frederick, and Robert Boruch. Evidence Matters: Randomized Trials in Education Research. Washington, D.C.:  Brookings Institution Press, 2002.

National Research Council. Scientific Research in Education. Washington, D.C.: National Academies Press, 2002.

———. On Evaluating Curricular Effectiveness: Judging the Quality of K–12 Mathematics Evaluations. Washington, D.C.:  National Academies Press, 2004.

National Science Foundation (NSF). Materials for Mathematics Instruction: Program Solicitation. Arlington, Va.: NSF, Division of Materials Development, Research, and Informal Science Education, 1989.

Project 2061. “Middle Grades Mathematics Textbooks: A Benchmarks-Based Evaluation.” N.d.  Retrieved October 18, 2005, from http://www.project2061.org/publications/textbook/mgmth/report.htm.

Riordan, Julie E., and Pendred E. Noyce. “The Impact of Two Standards-Based Mathematics Curricula on Student Achievement in Massachusetts.”  Journal for Research in Mathematics Education 32 (July 2001): 368–98.

Schoenfeld, Alan. “Making Mathematics Work for All Children: Issues of Standards, Testing, and Equity.” Educational Researcher 31, no. 1 (2002): 13–25.

Senk, Sharon  L., and Denisse R. Thompson, eds.  Standards-Based School Mathematics Curricula: What Are They? What Do Students Learn?  Mahwah, N.J.: Lawrence Erlbaum Associates, 2003.

Stein, Mary Kay, Barbara W. Grover, and Marjorie Henningsen. “Building Student Capacity for Mathematical Thinking and Reasoning: An Analysis of Mathematical Tasks Used in Reform Classrooms.” American Educational Research Journal 33, no. 2 (1996): 455–88. 

Thompson, Denisse R., and Sharon L. Senk. “The Effects of Curriculum on Achievement in Second-Year Algebra: The Example of the University of Chicago School Mathematics Project.”  Journal for Research in Mathematics Education 32 (January 2001): 58–84.
 
