Over the past decade, educational policymakers have
consistently called for data use. The No Child Left Behind (NCLB) Act of 2001,
with its emphasis on annual progress in students’ achievement scores and
quantitative evidence for school decisions, included a mandate for so-called
“data-driven decision making.” More recently, the Obama administration
designated building data systems that guide instruction as one of the four core
requirements of the Race to the Top funding competition. Across the country,
schools use data as part of Response to Intervention (RtI). Conversations about
data dominate the educational landscape, and these discussions only seem poised to grow.
Unfortunately, the rhetoric surrounding educational data
vastly outweighs the research, high-quality assessments, and structural
supports that would allow educators to use data well (Hamilton et al., 2009).
The result has been widespread misuse of data, particularly in lower-achieving
and high-poverty schools. Many schools place a shortsighted focus on improving
the achievement of students whose scores lie just below a proficiency “cut
score” (Bracey, 2008; Diamond & Cooper, 2007) at the expense of students
performing either above or well below proficiency.
Some schools may replace teaching that emphasizes conceptual
understanding with test-driven drill (Newcombe et al., 2009; Shepard, 2000) and
devote additional instructional time to mathematics and reading at the expense
of other subjects (Diamond & Cooper, 2007). They may use single, external
test scores from groups of students to evaluate teachers, a dangerous practice
(National Research Council, 2001). As the Government Accountability Office
(GAO, 2009, p. 17) neatly summarized, the current culture has resulted in
“teachers … narrowing the curriculum being taught—sometimes referred to as
‘teaching to the test’—either by spending more classroom time on tested
subjects at the expense of other non-tested subjects, restricting the breadth
of content covered to focus only on the content covered by the test, or
focusing more time on test-taking strategies than on subject content.”
All these abuses risk giving “data” a bad name. However,
these problems are the natural result, not of using data, but of the poor
responses that inevitably occur in an environment that couples high stakes for
data use with limited guidance for how to use data appropriately. This brief
seeks to provide guidance for schools and teachers interested in
using data to improve students’ mathematics achievement.
Types of Data and Their Uses
Schools and teachers encounter many types of data in the
typical school year. One prominent type is achievement
data. In the NCLB age, virtually all schools attend to students’ scores on
annual state tests. Many schools also use interim tests of achievement,
including interim “benchmark” tests (see Datnow, Park, & Wohlstetter, 2007;
Goertz, Olah, & Riggan, 2009) and curriculum-based measurements (see Fuchs,
2004), that they can administer more frequently to monitor students’ progress
and screen for difficulties. Other measures, like quarterly grades, schoolwide
assessments and performance tasks, and shared, classroom-level activities and
assessments, may also serve as sources of interim data. Finally, teachers
collect achievement data every day from students’ work, quizzes, and
performance during informal tasks.
Other common types of data include demographic and behavioral metrics that monitor students’
background, attendance, social and behavioral issues, mobility, retention, and
dropout rates (Learning Points Associates, 2004). Less common types of data are
collected on school processes,
including financial, program, and human capital data (Hess & Fullerton,
2009); on teaching, including
instructional logs, lesson plans, and classroom video; and on various perceptions, including surveys of
parents, communities, students, and teachers about school performance and
programs (Ikemoto & Marsh, 2007; Learning Points Associates, 2004).
All these types of data can and should be used together to
create rich analyses of students’ learning and school experience in general.
These data may be used in several ways.
First, schools may use process data, coupled with other
kinds, to evaluate school programs and monitor the effects of expenditures
(Hess & Fullerton, 2009). Second, schools may use data instrumentally (Ikemoto & Marsh, 2007; Murnane, Sharkey, &
Boudett, 2005), to make decisions such as where to target resources, how to
track students, or how to assign students to RtI tiers (see Gersten et al., 2009; National
Center on Response to Intervention, 2010; VanDerHeyden, 2010). Third,
teachers may use data in the classroom,
to make formative changes in instruction, give feedback to students, and
measure progress (see NCTM, 1995, for ways in which teachers use assessments; see
Black & Wiliam, 1998a, 1998b, 2009, for more on formative assessment).
Finally, schools and teachers may use data for inquiry into trends in students’ achievement. In this instance,
schools and teachers ask questions about why
trends occur and make plans for changing instruction and school processes to
improve students’ learning (see Ikemoto & Marsh, 2007; Murnane et al.,
2005; Supovitz & Klein, 2003).
This list of data uses is not exhaustive; the ways schools
use data go by many names, frequently cross boundaries, and are too broad for
this brief to explore deeply. Instead, the remainder of this brief focuses on
the last use of data introduced above: the schoolwide use of data for inquiry into students’ mathematical
learning. The use of data for inquiry is complex (see Ikemoto & Marsh,
2007) and involves school personnel joining together to uncover problems and
plan needed interventions. As Supovitz & Klein (2003, p. 33) summarized,
“Full-fledged inquiry involves a cyclical process whereby organizations focus
on an important problem, devise a strategy to collect data to identify the
particular source of the problem, analyze the data, take action based upon what
is learned, and collect data to see if the action taken has influenced the
identified problem.” Although other types of data use—particularly the
formative use of data in the classroom—may become crucial parts of this
process, the inquiry process may form a basis for all schoolwide data use.
Structural Supports for Data Use
Before teachers and schools can engage in inquiry into data,
three primary supports need to be in place.
Support 1: Goals for
Mathematical Cognition. Assessment data are only as useful as the content
they measure. The crux of a recent National Research Council (2001) report on
assessment was the explication of three elements that compose any educational
assessment: a model of students’ cognition (i.e., knowledge or understanding)
in a domain, tasks that allow observation of that cognition, and guidelines for
interpreting those observations. All too often, teachers and schools seek to
interpret assessment data without a strong understanding of the students’
cognition being measured.
Using data well requires that schools put cognition first,
determining what aspects of cognition are worth assessing and focusing
assessment data collection on those aspects. Schools need clearly delineated
goals for the mathematical content and processes that students should know at
each grade level and across grades (Hamilton et al., 2009; National Research
Council, 2001). Ideally, these goals should align with state and national
standards (e.g., Common Core State Standards; NCTM, 2000) and with the goals
outlined in the textbooks teachers use (see Pellegrino & Goldman, 2008, for
an in-depth discussion of textbook assessments).
Some schools and districts find standards documents too limiting
(Datnow et al., 2007), especially those that wish to innovate with more
challenging goals or to provide targeted interventions. Schools may wish to set
goals above and beyond the given standards for higher-level mathematical
thinking and content. Some schools may even wish to develop higher-level
assessments to match these cognitive goals (see Shafer & Romberg, 1999;
Shepard, 2000, for examples). Given the importance of early mathematical skills
to students’ later success in mathematics, schools may also wish to target more
data-based interventions to early number sense and other skills (see Baroody,
Bajwa, & Eiland, 2009; Geary, 2010; Geary et al., 2009; Jordan & Levine,
2009, for more on early number sense).
In general, districts that use data for inquiry have shared
instructional content linked to standards, external tests, and internal
assessments, and they set specific, measurable goals for students, classrooms,
schools, and the district based on that content (Datnow et al., 2007). They use
data to monitor progress toward those goals, and conversations about data focus
on the content being measured, strategies for teaching that content well, and
students’ cognition related to that content.
Support 2: Data Teams.
Schools may establish teams devoted to setting and reviewing learning goals and
to organizing the collection, analysis, and interpretation of data (Boudett,
City, & Murnane, 2005; Ikemoto & Marsh, 2007; Learning Points
Associates, 2004; Murnane et al., 2005). Many high-performing districts
establish such teams as central to the schools’ improvement process and provide
teams with rubrics, protocols, and other tools for making sense of data meaningfully
(Datnow et al., 2007; Ikemoto & Marsh, 2007).
At the district level, many stakeholders—including parents,
curriculum specialists, and community members—can serve on such teams. At the
school level, teachers can form the core of teams that examine learning goals,
students’ progress, and instructional interventions (Hamilton et al., 2009).
Through their participation in these teams, teachers can learn more about the
content they teach, consider interventions that might improve students’
progress, and support one another in adopting new teaching strategies or school
initiatives. Not all the teams must be devoted explicitly to data examination;
different teams can be established for different tasks related to data (see
Boudett et al., 2005, p. 21, for examples).
Support 3: Strong
Leadership. Establishing a culture of using data well requires strong
leadership. Leaders can create this culture by dedicating time for teachers to
meet about data (Datnow et al., 2007; Hamilton et al., 2009; Ikemoto &
Marsh, 2007) and, more important, time and specialist support for implementing
data-based interventions with students (Goertz et al., 2009). Leaders should
also recognize that the most crucial data consumers are teachers, whose
interpretations of data can greatly affect how data are used to improve
instruction (Goertz et al., 2009). If teachers are to draw useful conclusions
from data, they will need professional development in pedagogical content
knowledge, data analysis, and formative assessment (Datnow et al., 2007;
Diamond & Cooper, 2007; Firestone, 2009; Heritage et al., 2009; Wiliam
& Thompson, 2008). Leaders can adopt technology-based systems for timely
data capture, management, and analysis (National Research Council, 2001; Office
of Educational Technology, 2010). Highly sophisticated systems can even suggest
ideas for instructional interventions for particular students (see Goertz et
al., 2009). Finally, leaders can promote data use by avoiding using achievement
data to punish or embarrass teachers or students (Firestone, 2009). Where data
become synonymous with blame, teachers will no longer view data as tools for improvement.
Steps in Data Use
Using data for schoolwide inquiry generally begins with
annual test data, but should not end there. Standardized test data are quite
limited in what information they give about instruction (Black & Wiliam,
1998b; National Research Council, 2001), in that they are usually not timely,
they cannot give students immediate feedback, they frequently focus on
lower-level skills rather than concepts (Shepard, 2000), and they paint an
incomplete portrait of students’ understanding (Supovitz & Klein, 2003).
However, teachers and schools can use annual test data as a starting point for
inquiry, provided they draw on other, more fine-grained data, including interim
and classroom-level achievement measures, to confirm and expand their analyses.
Several groups of researchers have outlined cycles or
systems of data use for the reams of test data schools receive every year (see
Boudett, City, & Murnane, 2005; Halverson et al., 2007; Hamilton et al.,
2009; Ikemoto & Marsh, 2007; Learning Points Associates, 2004; Murnane et
al., 2005). Each system is unique, but all attend at some level to three broad
steps: data collection, data analysis, and intervention. We discuss each of
these three major steps in turn.
Step 1: Data
Collection. Collecting annual test data may seem simple. Schools usually
acquire these data several months after students take the test. However, these
data are often presented in a format that precludes analyzing students’
learning and forming possible instructional interventions. For instance,
teachers may receive summaries of students who are exceeding, meeting, or below
expectations, without attention to numerical scores or growth over the year.
These data must be organized in a manner more conducive to analyzing the
problems students have with mathematics.
A technology-based system is one way to organize data
usefully (National Research Council, 2001; Office of Educational Technology,
2010). Such a system could include numerical scale scores for each student over
time, allowing schools to compare a given student’s growth from year to year
(see Goertz et al., 2009; Murnane et al., 2005, for examples of well-conceived systems).
Internal or external data experts, such as university researchers, can help
collect and organize data for teachers to analyze. Other school staff can
inventory all the school’s data, including and beyond achievement data, and
keep a data “wish list” for teachers and other school staff (Boudett et al., 2005).
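As an illustration only, the year-to-year growth comparison such a system enables can be sketched in a few lines of code. The student IDs, years, and scale scores below are invented for the example, not drawn from any real system.

```python
# Minimal sketch: organizing annual scale scores by student so that a
# school can examine each student's growth from year to year.
# Student IDs, years, and scores are hypothetical.
from collections import defaultdict

records = [
    # (student_id, year, scale_score) -- illustrative data only
    ("S001", 2009, 412), ("S001", 2010, 431),
    ("S002", 2009, 455), ("S002", 2010, 449),
]

def growth_by_student(records):
    """Return each student's score change between consecutive tested years."""
    history = defaultdict(dict)
    for sid, year, score in records:
        history[sid][year] = score
    growth = {}
    for sid, scores in history.items():
        years = sorted(scores)
        growth[sid] = [(later, scores[later] - scores[earlier])
                       for earlier, later in zip(years, years[1:])]
    return growth

print(growth_by_student(records))
# {'S001': [(2010, 19)], 'S002': [(2010, -6)]}
```

A report organized this way surfaces growth (S001 gained 19 points; S002 lost 6) rather than only proficiency categories.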
Additional achievement data, along with other kinds of data,
must be collected to supplement annual test data and support high-quality
inquiry (Halverson et al., 2007; Hamilton et al., 2009; Ikemoto & Marsh,
2007; Learning Points Associates, 2004). Examining annual test data along with results
from interim (see Goertz et al., 2009) and classroom-level assessments (see
NCTM, 1995) permits deeper analysis of students’ progress on specific mathematical
skills, especially because classroom-level assessments may allow diagnosis of why students miss particular problems.
Other types of data can also enrich the analysis of achievement data. For
instance, teachers can collect data on students’ demographics and behavior,
schools’ and parents’ perceptions of mathematics teaching and curricula, school
expenditures, and attendance at professional development activities. These
types of data allow more sophisticated analyses of relationships, such as that
between students’ achievement on fraction items and their teachers’ attendance
at professional development focusing on fraction instruction.
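A relationship of the kind just described can be examined with a simple correlation. The sketch below uses invented numbers for a handful of classrooms; a real analysis would require many more classrooms and more care about confounds.

```python
# Hedged sketch: relating class-average performance on fraction items to
# teachers' hours of fraction-focused professional development.
# All values are hypothetical and purely illustrative.
import statistics

pd_hours  = [0, 2, 4, 6, 8]                   # per-teacher PD hours
avg_score = [0.48, 0.55, 0.61, 0.60, 0.72]    # class mean, fraction items

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(pd_hours, avg_score), 2))  # strong positive association
```

A strong correlation here would not prove the professional development caused the gains, but it would mark the relationship as worth pursuing in the inquiry cycle.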
Step 2: Data Analysis.
Some districts hire statistical experts to help them analyze data. Although
these experts undoubtedly help, no analysis helps school improvement more than
teachers working together to identify and examine patterns in data (Learning
Points Associates, 2004). Teachers may tackle schoolwide issues or work in
grade-level or grade-band teams on more specific issues. To determine what the
issues are, teachers might first examine topics that were deemed schoolwide
strengths or weaknesses in previous years, and then look for other patterns.
Teachers can also examine the performance of subgroups of students, such as
those from lower socioeconomic backgrounds (Lubienski, 2007), and performance
on skills that are particularly important for later school performance. Areas
of strength overall might be areas of weakness for particular groups of
students, and this should be noted. In looking for patterns in topics or
subgroups, various types of graphic organizers or “data overviews” may be
particularly helpful (see Boudett et al., 2005 for illustrations).
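The subgroup comparison described above can also be sketched programmatically. The topic names, percent-correct rates, and gap threshold below are hypothetical, chosen only to show the shape of the check.

```python
# Illustrative sketch: comparing overall vs. subgroup percent-correct by
# topic to flag areas that look strong schoolwide but weak for a group.
# Topics, rates, and the 15-point gap threshold are hypothetical.

overall  = {"area": 0.78, "fractions": 0.55, "place value": 0.81}
subgroup = {"area": 0.52, "fractions": 0.50, "place value": 0.79}

def flag_gaps(overall, subgroup, gap=0.15):
    """Topics where the subgroup trails the overall rate by more than `gap`."""
    return sorted(topic for topic in overall
                  if overall[topic] - subgroup.get(topic, 0.0) > gap)

print(flag_gaps(overall, subgroup))  # ['area']
```

Here, area is a schoolwide strength (78% correct) but a marked weakness for the subgroup (52%), exactly the pattern that an overall summary would hide.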
After, or sometimes before, finding and documenting patterns
in the data, teachers should ask questions about the data. For instance, a team
might notice that, on a schoolwide level, students are missing questions about
area. A natural question to ask would be why students perform poorly on these
questions. The team would then dig into the data further. They might examine,
for instance, whether some students (in certain grades or classrooms) are
performing better on these questions than others and whether those
high-performing students share any common characteristics.
The team can then begin generating hypotheses to answer the
question (Learning Points Associates, 2004). At this point, teachers can put
all hypotheses on the table for consideration. The process should, however,
encourage teachers to suggest hypotheses that focus on instruction rather than
on factors outside their control, such as parental support (see Boudett et al.,
2005). For instance, one teacher might suggest that students do not have enough
practice on the procedures for finding area. Another teacher might suggest that
the textbook does not support understanding the concept of area, and thus
students do not know when to apply the learned area formula.
The team can then evaluate each hypothesis by examining
other data. For instance, the team might examine the school’s textbook only to
find that the text emphasizes concepts heavily, thereby weakening the argument
that the text is the source of the problem. Another teacher might hypothesize
that teachers do not actually teach area as the textbook suggests, and that
they focus instead on having students memorize and practice applying the area
formula. Given a lack of data to refute this claim, such as classroom videos or
logs, the group might decide to pursue this idea further. Indeed, the team may designate
several hypotheses as needing further exploration.
Step 3: Intervention.
Building on the list of tentative hypotheses, the team can brainstorm
strategies for intervening to improve students’ achievement (Boudett et al.,
2005; Learning Points Associates, 2004). They should consider relevant research
on successful interventions (e.g., Baker, Gersten, & Lee, 2002; Gersten et
al., 2009; Gersten, Jordan, & Flojo, 2005; NMAP, 2008) and how students
learn mathematics (see Newcombe et al., 2009). In the example presented above,
the team might decide to read research on teaching area concepts, to share
specific strategies for teaching these concepts well, and to encourage teachers
to use those strategies. The number of strategies should be manageable for
teachers. If teachers cannot actually enact the intervention, it will certainly
fail. The team might also consider using the data in other ways, such as making
an instrumental decision to focus a
professional development session on teaching area concepts.
After agreeing on strategies to address a problem, the team
should set specific, measurable goals—long-term, medium-term, and short-term—to
determine whether the intervention is working (Boudett et al., 2005). For the
short term, teachers can set goals for students’ performance on
curriculum-embedded assessments and in classroom activities. For the medium
term, teachers can resolve to examine interim assessments for progress on items
related to area or even construct interim assessments on area concepts to give
to all their students. These constructed tests would allow the teachers to
diagnose students’ problems with area more thoroughly using a shared
assessment. For the long term, teachers can set a goal for students to answer a
certain percent of area problems correctly on the next external test. Whatever
the goal, the team must articulate a specific problem to address, two to three
strategies designed to address it, and detailed indicators of progress toward
solving it. The team must also have a plan for collecting further data on the
problem (Boudett et al., 2005).
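A team's short-, medium-, and long-term targets can be tracked with a simple progress check like the sketch below. The goal horizons, target rates, and observed rates are invented for illustration.

```python
# Hedged sketch: checking observed performance against the team's
# measurable goals at several time horizons. All values are hypothetical.

goals = {            # target percent correct on area items
    "short":  0.60,  # classroom activities this month
    "medium": 0.70,  # interim assessment this semester
    "long":   0.80,  # next annual state test
}
observed = {"short": 0.65, "medium": 0.66}  # no annual data yet

def progress_report(goals, observed):
    """Mark each goal as met, unmet, or not yet measured."""
    report = {}
    for horizon, target in goals.items():
        if horizon not in observed:
            report[horizon] = "no data yet"
        else:
            report[horizon] = "met" if observed[horizon] >= target else "unmet"
    return report

print(progress_report(goals, observed))
# {'short': 'met', 'medium': 'unmet', 'long': 'no data yet'}
```

Keeping the indicators this explicit makes it easy for the team to see, at each meeting, which parts of the intervention plan are on track.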
At this point, the team has made an intervention plan for a
single problem. The team can document and share this plan with the teachers in
the school who will face this problem (Learning Points Associates, 2004), and
those teachers may voice opinions on any needed revisions to the plan.
Distributing the plan, however, does not ensure its implementation. To
implement the plan, teachers can form learning communities (Murnane et al.,
2005; Wiliam, 2007/2008) around each problem. These communities can meet to
share problems and successes in implementing the plan, to encourage and support
adjustments along the way, and to collect data on whether the teachers are
implementing the plan appropriately and faithfully. These communities also
allow teachers to continue conversations about successful instructional
strategies for teaching specific mathematical skills. As much as possible, the
plan should involve strategies for formative assessment and intervention on the
problem at the classroom level, such as encouraging students to monitor their
own progress and to feel a sense of ownership and accountability about their
own data (Black & Wiliam, 2009).
Most researchers have framed data use as a cycle for a very
important reason: making and implementing a data-based plan is only the
beginning. Data teams must begin the cycle anew to see if their interventions
are working and to spot new problems. Teams are responsible for making plans to
collect data on interventions and making recommendations for other kinds of
data that they need to collect for analysis and add to the school’s “data wish
list” (Boudett et al., 2005). In this way, data become part of a continuous
cycle of school improvement.
By Meg Schleppenbach
Center for Elementary
Mathematics and Science Education
University of Chicago
Sarah DeLeeuw, Series Editor

References
Baker, S., Gersten, R., & Lee, D.
(2002). A synthesis of empirical research on teaching mathematics to
low-achieving students. Elementary School Journal, 103(1), 51–73.
Baroody, A., Bajwa, N. P., &
Eiland, M. (2009). Why can't Johnny remember the basic facts? Developmental
Disabilities Research Reviews, 15, 69–79.
Black, P., & Wiliam, D. (1998a).
Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Black, P., & Wiliam, D. (1998b).
Inside the black box: Raising standards through classroom assessment. Phi
Delta Kappan, 80, 139–48.
Black, P., & Wiliam, D. (2009).
Developing the theory of formative assessment. Educational Assessment,
Evaluation and Accountability, 21(1), 5–31.
Boudett, K. P., City, E. A., &
Murnane, R. J. (Eds.) (2005). Data wise:
A step-by-step guide to using assessment results to improve teaching and
learning. Cambridge, Mass.: Harvard Education Press.
Bracey, G. W. (2008). Cut scores,
NAEP achievement levels and their discontent. School Administrator, 65(6).
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving
with data: How high-performing school systems use data to improve instruction
for elementary students. Los Angeles, Calif.: University of Southern
California, Center on Educational Governance.
Diamond, J. B., & Cooper, K.
(2007). The uses of testing data in urban elementary schools: Some lessons from
Chicago. Yearbook of the National Society for the Study of Education, 106(1).
Firestone, W. A. (2009).
Accountability nudges districts into changes in culture. Phi Delta Kappan.
Fuchs, L. S. (2004). The past,
present, and future of curriculum-based measurement research. School
Psychology Review, 33, 188–92.
Geary, D. C. (2010). Mathematical
disabilities: Reflections on cognitive, neuropsychological, and genetic
components. Learning and Individual Differences, 20, 130–33.
Geary, D. C., Bailey, D. H.,
Littlefield, A., Wood, P., Hoard, M. K., & Nugent, L. (2009). First-grade
predictors of mathematical learning disability: A latent class trajectory
analysis. Cognitive Development, 24(4), 411–29.
Gersten, R., Beckmann, S., Clarke, B., Foegen, A., Marsh, L., Star,
J. R., & Witzel, B. (2009). Assisting students struggling with
mathematics: Response to Intervention (RtI) for elementary and middle schools (NCEE
2009-4060). Washington, DC: National Center for Education Evaluation and
Regional Assistance, Institute of Education Sciences, U.S. Department of
Education. Retrieved from http://ies.ed.gov/ncee/wwc/publications/practiceguides/.
Gersten, R., Jordan, N. C., &
Flojo, J. R. (2005). Early identification and interventions for students with
mathematics difficulties. Journal of Learning Disabilities, 38, 293–304.
Goertz, M. E., Olah, L. N.,
& Riggan, M. (2009). Can interim
assessments be used for instructional change? CPRE Policy Brief. University
of Pennsylvania, Philadelphia, Pa.: Consortium for Policy Research in
Education. Retrieved from http://www.cpre.org/images/stories/cpre_pdfs/
Government Accountability Office
(GAO). (2009). Student achievement: Schools use multiple strategies to help
students meet academic standards, especially schools with higher proportions of
low-income and minority students (GAO Report No. 10-18). Washington, DC:
Government Accountability Office.
Halverson, R., Grigg, J., Prichett,
R., & Thomas, C. (2007). The new instructional leadership: Creating
data-driven instructional systems in school. Journal of School Leadership.
Hamilton, L., Halverson, R., Jackson,
S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student
achievement data to support instructional decision making (NCEE Report No.
2009-4067). Washington, D.C.: National Center for Education Evaluation and
Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Heritage, M., Kim, J., Vendlinksi,
T., & Herman, J. (2009). From evidence to action: A seamless process in
formative assessment? Educational Measurement: Issues and Practice, 28(3).
Hess, F. M., & Fullerton, J.
(2009). The numbers we need: Bringing balanced scorecards to education data. Phi Delta Kappan, 90(9), 665–69.
Ikemoto, G. S., & Marsh, J. A.
(2007). Cutting through the "data-driven" mantra: Different
conceptions of data-driven decision making. Yearbook of the National Society
for the Study of Education, 106(1), 105–31.
Jordan, N. C., & Levine, S. C.
(2009). Socioeconomic variation, number competence, and mathematics learning
difficulties in young children. Developmental Disabilities Research Reviews.
Learning Points Associates (2004). Guide to using data in school improvement
efforts: A compilation of knowledge from data retreats and data use at Learning
Points Associates. Naperville, IL: Author. Retrieved from
Lubienski, S. T. (2007). What we can
do about achievement disparities. Educational Leadership, 65(3), 54–59.
Murnane, R. J., Sharkey, N. S., &
Boudett, K. P. (2005). Using student-assessment results to improve instruction:
Lessons from a workshop. Journal of
Education for Students Placed at Risk, 10(3), 269–80.
National Center on
Response to Intervention (March 2010). Essential components of RtI – A
closer look at Response to Intervention. Washington, D.C.: U.S. Department
of Education, Office of Special Education Programs, National Center on Response to Intervention.
National Council of Teachers of Mathematics [NCTM].
(1995). Assessment standards for school
mathematics. Reston, VA: NCTM.
National Council of Teachers of Mathematics [NCTM].
(2000). Principles and standards for school mathematics. Reston, VA:
National Council of Teachers of Mathematics.
National Mathematics Advisory Panel [NMAP]. (2008). Foundations
for success: The final report of the National Mathematics Advisory Panel. Washington,
DC: U.S. Department of Education.
National Research Council. (2001). Knowing what students know: The science and
design of educational assessment (J. W. Pellegrino, N. Chudowsky, & R.
Glaser, Eds.). Washington, DC: National Academy Press.
Newcombe, N. S., Ambady, N., Eccles,
J., Gomez, L., Klahr, D., Linn, M., Miller, K., & Mix, K. (2009).
Psychology's role in mathematics and science education. American
Psychologist, 64(6), 538–50.
Office of Educational Technology.
(2010). Transforming American education: Learning powered by technology.
Washington, DC: U.S. Department of Education.
Pellegrino, J. W., & Goldman, S.
R. (2008). Beyond rhetoric: Realities and complexities of integrating
assessment into classroom teaching and learning. In C. Dwyer (Ed.), The
future of assessment: Shaping teaching and learning (pp. 7–52). New York:
Lawrence Erlbaum Associates.
Shafer, M. C., & Romberg, T. A.
(1999). Assessment in classrooms that promote understanding. In E. Fennema
& T. A. Romberg (Eds.), Mathematics
classrooms that promote understanding. Mahwah, NJ: Lawrence Erlbaum.
Shepard, L. A. (2000). The role of
assessment in a learning culture. Educational Researcher, 29(7), 4–14.
Supovitz, J. A., & Klein, V. (2003). Mapping a course for improved student
learning: How innovative schools systematically use student performance data to
guide improvement. Philadelphia: Consortium for Policy Research in
Education, University of Pennsylvania Graduate School of Education.
VanDerHeyden, A. (2010). RTI and
math instruction. Retrieved from
Wiliam, D. (2007/2008). Changing
classroom practice. Educational Leadership, 65(4), 36–42.
Wiliam, D., & Thompson, M. (2008). Integrating
assessment with learning: What will it take to make it work? In C. Dwyer (Ed.),
The future of assessment: Shaping teaching and learning (pp. 53–82). New
York: Lawrence Erlbaum Associates.
The development of this brief was supported by the National Science Foundation under Grant No. 0946875. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).