Framing the ACT Report Recommendations as an Opportunity for Critical Conversations
On October 17, ACT reported that the average mathematics score on its annual college entrance exam for the graduating class of 2018 was the lowest in 20 years. The Condition of College and Career Readiness, 2018 reported that the 2018 average of 20.5 (on a 1 to 36 scale) followed scores of 20.7 in 2017, 20.6 in 2016, 20.8 in 2015, and 20.9 for the class of 2014.
The decline in recent years in the mathematics score on the ACT exam has many educators and policymakers concerned. There is apprehension about whether these scores suggest a negative impact on science, technology, engineering, and mathematics (STEM) readiness and the potential impact on America's economic, social, and political security. While I understand why these are causes for concern for many, I see the discourse about the ACT mathematics scores as an opportunity to broaden the discussion to include issues of equity, curriculum, and assessment. Nowhere to be found in the reporting by various agencies on the decline in the ACT scores is there any mention of the gross inequities within society that continue to be reflected in students' educational outcomes. Critical conversations are necessary for knowing and understanding not only the indicators for mathematics and STEM readiness but also the inequities that contribute to the factors that offer advantages to some learners while disadvantaging others. These conversations might also include whether students are prepared mathematically to succeed in post-secondary education, participate in a democratic society, and make wise decisions in their personal lives.
While there are conversations and concerns about the ACT mathematics scores, many reporting agencies have omitted the five recommendations found at the end of the ACT report (page 17). The recommendations provide a framework for starting these critical conversations about the distribution of resources, the impact of disparities on outcomes, and learners' social and emotional needs. The ACT report made five recommendations for districts, states, and policymakers:
The recommendations in the ACT report connect with NCTM's Catalyzing Change in High School Mathematics: Initiating Critical Conversations (2018) for a broader dialogue on high school mathematics. Catalyzing Change seeks to initiate critical conversations around the following challenges:
As you engage in critical conversations, consider these questions as starters for framing your discussions:
I encourage you to read Catalyzing Change and to use the questions above to start these critical conversations with your colleagues. I am interested in finding out how your critical conversations progress. Please share your successes and challenges on MyNCTM.org.
Robert Q. Berry, III
ACT. The Condition of College and Career Readiness, 2018. Iowa City, IA: ACT, 2018.
National Council of Teachers of Mathematics (NCTM). Catalyzing Change in High School Mathematics: Initiating Critical Conversations. Reston, VA: NCTM, 2018.
Dear President Berry,
Thank you for bringing these scores to our attention. The five recommendations for improving the scores are plausible, but they are significantly inadequate.
1. Where is the numerical cost analysis to predict how many dollars are needed to improve a score of 20.5 to 20.9?
2. Recommendation #4 sounds good, but how many dollars will it take for professional development to transform current conventional teachers into holistic teachers whose mathematics classes will reach a level of culturally responsive pedagogy?
3. Where is the research to determine the educational validity of the ACT mathematics test for applicants to a college or university who will be studying in the humanities?
In short, besides the qualitative recommendations, I want to see the quantitative costs associated with each of them. Only then does the responsibility for improving scores properly shift toward state departments of education and proportionately away from teachers.
Upon examining the sample ACT mathematics test questions, it appears that they do not reflect the Standards for Mathematical Practice and are largely skills-only questions.
Additionally, this is a timed, multiple-choice test, which research indicates is one of the least effective measures of student understanding.
So maybe the report should have included an examination of the tool and its correlation with the current mathematics standards.
Dr. Terry Souhrada
Secondary Mathematics Education
Salish Kootenai College