Engaging Students in Survey Design and Data Collection

    Marla A. Sole
    By piloting their own survey on texting while driving, students learn that only unambiguous questions can produce credible results.

    Every day, people use data to make decisions that affect their personal and professional lives, trusting that the data are correct. Often, however, the data are inaccurate because of a flaw in the design or methodology of the survey used to collect them. Researchers agree that only questions that are clearly worded, unambiguous, free of bias, and framed so that respondents are motivated to answer truthfully produce credible results (Salant and Dillman 1994; Dillman 2007; Nardi 2003). However, many students seem not to fully understand or appreciate this fact. For example, when given survey questions accompanied by graphs and descriptive statistics to critique, students in my Quantitative Reasoning class focused almost all their attention on the results. They erroneously believed that a survey’s questions and methodology mattered less than its results, and they failed to realize that vague questions, or questions open to interpretation, produce inaccurate, unusable information.

    These misunderstandings may arise because question construction and data collection are often not part of the content covered in introductory mathematics classes. Even beginning mathematics classes that focus on the development of statistical literacy and quantitative reasoning skills often omit these essential elements. One problem may be the textbooks used. Typically, textbook exercises relate questions to accompanying data sets (Hogg 1992); as researchers have noted, it is both difficult and time-consuming to have students develop their own questions and collect their own data (Aliaga et al. 2005). However, students need to have the opportunity to form research questions and collect data that will be used to answer those questions (Aliaga et al. 2005; Franklin et al. 2005; Hogg 1991; NCTM 2000; Snee 1993). Affording students the opportunity to design and pilot survey questions and generate data helps dispel the misconception that these steps are of little consequence. Going through this process also sheds light on difficulties that may arise. Students may be unaware of potential problems if they work only with prewritten questions that come with data sets.

    To help clear up these misunderstandings, this lesson focused on developing students’ abilities to pose, pilot, and critique survey questions and to understand how data are collected and used. These skills are essential to a wide range of mathematics courses at all levels (NCTM 2000). In particular, the ability to construct clearly worded questions is an important starting point for research investigations in all fields. The Guidelines for Assessment and Instruction in Statistics Education (GAISE) reports for both Pre-K–12 and college state that students need to understand how to formulate questions and that they benefit from having the opportunity to collect data (Aliaga et al. 2005; Franklin et al. 2005). Formulating questions and collecting data are also key aspects of the data analysis and probability standard for grades 9–12 (NCTM 2000), and collecting and interpreting data is an important part of the Common Core State Standards for Mathematics in the area of Statistics and Probability (CCSSI 2010). Accordingly, these skills should be developed in both high school and college courses, especially those focused on statistical literacy, modeling, quantitative reasoning, and introductory research investigations.

    CONSTRUCTION OF A SURVEY

    A recent study found that 41.4 percent of students who had driven during the previous thirty days had texted or emailed at least once while driving (CDC 2013). Although all distractions endanger the lives of drivers, passengers, and bystanders, texting is of particular concern because it simultaneously requires the use of one’s visual, manual, and cognitive skills (NHTSA n.d.). In addition, those most likely to text or email while driving are young, new drivers (NHTSA 2012). These alarming statistics made this an ideal topic for my students, many of whom had recently started driving.

    Students were instructed to design a survey question to measure how prevalent texting while driving is among students. The first question that they constructed was “Do you text while driving?” Students were asked to comment on the question’s wording and clarity. The class was of the opinion that the question was relatively simple and straightforward. Some students stated that since the question required only a yes or no response, it was not open to interpretation. Next, students were asked to raise their hands if they text while driving; 63.6 percent raised their hands.

    I assumed that the first question constructed would need to be revised. I wanted students to discover that constructing a well-designed, unambiguous survey question is an art.

    To investigate whether the question had been interpreted the same way by everyone, students were grouped according to their responses. They were asked to describe in detail their behavior when texting behind the wheel and to investigate whether the survey had captured any differences in their descriptions. They discovered that one of the question’s biggest flaws was that the majority of students who had responded no to the survey question did not drive at all. These students had selected no because it was the best available choice, although it did not accurately represent their behavior.

    Students were instructed to revise the survey. The new question had to be easy to read and interpret and had to be unambiguous. Having just seen how the wording of a question that at first seemed clear influenced the results, students were more attentive to the language used. Some of the questions that students proposed but ruled out were these:

    • For drivers, do you text and drive?

    • If you drive, how frequently do you text and drive?

    • If you drive, do you text when stopped?

    Potential survey questions were evaluated using an assessment rubric; an evaluation of the first question above is shown in figure 1.

    Teachers can help students critically examine their initial survey questions by asking key questions such as those listed in figure 2.

    The students’ final version of the survey included two questions:

    • Do you drive?

    • If you drive, have you ever read, sent, or both read and sent a text while the car was in motion?

    Of the students who drove, 93.8 percent answered yes to the second question.
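
    Because the second question is answered only by drivers, its 93.8 percent result is a conditional percentage, not a percentage of the whole class. A minimal Python sketch makes this explicit; the responses below are hypothetical placeholders, since the article reports only the class percentages, not the raw data:

    ```python
    # Hypothetical response records for the two-question survey; the
    # article reports class percentages but not individual responses.
    responses = [
        {"drives": True, "texted": True},
        {"drives": True, "texted": True},
        {"drives": True, "texted": False},
        {"drives": False, "texted": None},  # non-drivers skip question 2
    ]

    # The denominator is drivers only, which is what makes the
    # reported 93.8 percent a conditional percentage.
    drivers = [r for r in responses if r["drives"]]
    pct = 100 * sum(r["texted"] for r in drivers) / len(drivers)
    print(f"Drivers who have texted while driving: {pct:.1f}%")
    ```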

    This activity illustrated to students that the wording of a question affects the results. Using the results of a survey given to a small sample, students attempted to estimate the prevalence of texting while driving among all young, novice drivers. They commented that without seeing firsthand responses to the two questions, they would not have thought that piloting was an important step. Students trusted the results of the first survey question until they saw the results of the revised question. They came to appreciate that language and mathematics are connected. A couple of students volunteered that they had previously thought statistics was all about “working with numbers” but now saw that it was a bit more “creative” than they had imagined.

    This activity helped students understand why researchers might ask for feedback or pilot survey questions before administering questionnaires to the larger desired target audience. It also dispelled a common misconception: that piloting a survey is unnecessary and of little benefit.

    DATA COLLECTION

    Simulating Driving and Texting

    To assess the risk of texting while driving, the students needed to approximate the distance traveled and gauge how attentive drivers were while texting. Because no relevant statistics were readily available, students collected their own data by simulating the act of texting while driving. Researchers have suggested that there are benefits to having students generate their own data (Cobb 2013; Hogg 1991). The main benefit is that classroom-generated data afford students the opportunity to experience the unique challenges of this process. Another benefit is that being involved in the creation of data increases students’ level of engagement and motivation to analyze the data (Cobb 2013).

    To approximate the distance traveled while texting, students would need to know the time spent texting and the speed at which the driver was traveling while texting. To estimate the time, students were grouped in threes—a driver and two observers. The driver received, read, and responded to a text message. One observer sent the text message to the driver, and the second observer recorded the amount of time that the driver spent reading and responding to the text message. In this first simulation, drivers spent between 4 and 19 seconds texting.
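
    Pooling and summarizing the observers’ measurements takes only a few lines. A minimal Python sketch, with hypothetical times standing in for the class’s measurements (which ranged from 4 to 19 seconds):

    ```python
    # Hypothetical texting times (seconds) recorded by the observers;
    # the class's actual measurements are not published in the article.
    times = [4.0, 6.5, 9.2, 11.8, 14.3, 19.0]

    # Range and mean give a quick summary of the pooled data.
    print(f"min:  {min(times):.1f} s")
    print(f"max:  {max(times):.1f} s")
    print(f"mean: {sum(times) / len(times):.1f} s")
    ```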

    The class discussed the data collection process. One student pointed out that a number of drivers had been holding their phones before the text arrived. I noticed that almost all drivers were texting with two hands and asked, “Is this how you drive?” It was not. These issues helped students understand the types of unforeseen problems that can arise when simulating an activity to generate data.

    To better simulate the situation, students collected data a second time. Cellphones were kept out of sight until the messages were sent. Water bottles substituted for steering wheels, and at all times drivers had to keep one hand on the “steering wheel.” This time, drivers spent between 7 and 22 seconds texting. Reflecting on the data collection process again, one student pointed out that the time spent texting may have been underestimated because the drivers had their cellphones unlocked at the start of the activity.

    The observers were also asked to assess whether the drivers had been paying attention to their surroundings and had intermittently looked up while texting. The drivers thought that they had been more aware of their surroundings than the observers did. To test whether drivers had been paying attention to their surroundings, the exercise was repeated. Students were randomly placed into either a treatment group, which texted while simulating driving, or a control group, which did not text (see fig. 3).
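
    The random placement can be done with slips of paper or with a few lines of code. A minimal Python sketch, with placeholder names standing in for the actual roster:

    ```python
    import random

    # Shuffle the roster, then split it in half: the first half texts
    # while simulating driving (treatment), the second half does not
    # (control). Names are placeholders, not the actual class roster.
    students = ["Ana", "Ben", "Cruz", "Dia", "Eli", "Fay", "Gus", "Hoa"]
    random.shuffle(students)

    half = len(students) // 2
    treatment, control = students[:half], students[half:]
    print("treatment:", treatment)
    print("control:  ", control)
    ```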

    This time, the observers were instructed to briefly hold in the drivers’ line of sight between one and five road signs or images that might be encountered when driving (see fig. 4). After the activity concluded, only one driver in the treatment group had been able to correctly recall all the signs and images shown, whereas all drivers in the control group were able to recall all the signs. Although the sample was too small to test the claim that there were significant differences between the groups, this result certainly suggests that distracted drivers are not as aware of their surroundings as they may believe.
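
    The comparison amounts to computing, for each group, the fraction of drivers who recalled every sign. A Python sketch with hypothetical group sizes that match the reported pattern (one treatment driver succeeded; every control driver did):

    ```python
    # Whether each driver recalled every sign shown. Counts are
    # hypothetical; the article reports the pattern, not exact sizes.
    treatment_recall = [True, False, False, False]  # texted while "driving"
    control_recall = [True, True, True, True]       # did not text

    # Per-group success rate: fraction who recalled every sign.
    for label, group in [("treatment", treatment_recall),
                         ("control", control_recall)]:
        rate = 100 * sum(group) / len(group)
        print(f"{label}: {rate:.0f}% recalled every sign")
    ```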

    Calculating the Distance Traveled

    To approximate the distance traveled, students were asked how fast they had ever driven when texting. Speeds ranged from 30 to 70 miles per hour. Almost all students admitted to having driven at speeds in excess of 50 miles per hour while texting, so students calculated the actual distance that a distracted driver traveling at 65 miles per hour would cover in 15 seconds. Students who suspected that they might have traveled a few feet were surprised to find that in 15 seconds they would actually have traveled approximately 1,430 feet, more than a quarter of a mile, or roughly 5 city blocks.
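
    The computation itself is a unit conversion followed by a multiplication, as in the sketch below:

    ```python
    # Convert speed to feet per second, then multiply by the time
    # spent looking at the phone.
    FEET_PER_MILE = 5280
    SECONDS_PER_HOUR = 3600

    speed_mph = 65
    texting_seconds = 15

    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR  # ~95.3 ft/s
    distance_feet = feet_per_second * texting_seconds               # ~1,430 ft
    print(f"{distance_feet:.0f} feet ({distance_feet / FEET_PER_MILE:.2f} miles)")
    ```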

    Extending the Analysis

    The goal of this lesson was to demonstrate that refining survey questions and the data collection process yields more accurate responses. These data could be used to investigate more advanced statistical questions:

    • Do the data collected from the survey on texting while driving provide sufficient evidence to conclude that for novice drivers an association exists between distracted driving habits and gender?

    • Find a 95 percent confidence interval for µ, the mean approximate distance that all young novice drivers travel while texting. Interpret your answer in words. (A computational sketch follows this list.)

    • Do the data provide sufficient evidence to conclude that the mean distance that young novice drivers estimate they would travel while texting is lower than the mean distance found by simulating texting while driving?

    • Do the data from the student survey on texting while driving provide sufficient evidence to conclude that the percentage of young, novice male drivers who have ever texted and driven differs from the percentage of young, novice female drivers who have ever texted while driving?

    • Do the data from the student simulation on texting while driving provide sufficient evidence to conclude that the percentage of signs noticed by people who text while driving is smaller than the percentage of signs noticed by people who do not text while driving?
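
    As a starting point for the confidence-interval question above, a t-interval can be computed once each student’s distance has been found. A sketch using SciPy, with hypothetical distances standing in for the class data, which are not published here:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-student distances (feet) traveled while texting,
    # each derived from a measured time and a reported speed.
    distances = np.array([670, 810, 1140, 1290, 1430, 1520, 1780, 2100])

    n = distances.size
    mean = distances.mean()
    sem = stats.sem(distances)  # standard error of the mean

    # 95% t-interval for the population mean distance
    low, high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
    print(f"mean = {mean:.0f} ft, 95% CI = ({low:.0f} ft, {high:.0f} ft)")
    ```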

    THE LESSON’S GOALS AND IMPACT

    Reflecting on the lesson, some students commented on mathematical gains they had made and on their understanding that the language used affects the validity of the resulting statistics. One student said, “I didn’t realize how much the wording of the questions mattered. It seemed simple . . . really easy. But, until you collect data, you don’t see that there are problems.” Other students commented on the lesson’s personal significance. One student wrote, “Now, armed with real estimates on how dangerous answering text messages [is] while driving is to my life and others, I will not be doing it . . . your texting and driving lesson did really inspire me to think more about how dangerous it is. I think it is an important lesson for everyone to learn.”

    This lesson gave students the opportunity to see that surveys can be powerful tools when they are well designed and implemented. Students were enthusiastic and highly motivated to examine their own behavior behind the wheel by analyzing data they had generated. They came to appreciate that the results from surveys can be unintentionally misleading or purposefully deceptive. They also realized that the strength of the evidence used to answer a question depends on the quality of the data collected. This exploration strengthened their ability to write and implement well-designed surveys and also led to a welcome bit of skepticism and critical thinking when reading the results of surveys. For students, this was a valuable mathematics lesson and a potentially life-saving experience.

    REFERENCES

    Aliaga, Martha, George Cobb, Carolyn Cuff, Joan Garfield, Rob Gould, Robin Lock, Tom Moore, Allan Rossman, Bob Stephenson, Jessica Utts, Paul Velleman, and Jeff Witmer. 2005. Guidelines for Assessment and Instruction in Statistics Education: College Report. Alexandria, VA: American Statistical Association.

    Centers for Disease Control and Prevention (CDC). 2013. Youth Risk Behavior Surveillance—United States, 2013. Atlanta, GA: CDC, Department of Health and Human Services. http://www.cdc.gov/mmwr/pdf/ss/ss6304.pdf

    Cobb, George W. 2013. “What Might a Twenty-Year-Old Conference Tell Us about the Future of Our Profession?” Journal of Statistics Education 21 (2). http://www.amstat.org/publications/jse/v21n2/cobb.pdf

    Common Core State Standards Initiative (CCSSI). 2010. Common Core State Standards for Mathematics. Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers. http://www.corestandards.org/wp-content/uploads/Math_Standards.pdf

    Dillman, Don A. 2007. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. New York: John Wiley and Sons.

    Franklin, Christine, Gary Kader, Denise Mewborn, Jerry Moreno, Roxy Peck, Mike Perry, and Richard Scheaffer. 2005. Guidelines for Assessment and Instruction in Statistics Education (GAISE) Report: A Pre-K–12 Curriculum Framework. Alexandria, VA: American Statistical Association.

    Hogg, Robert V. 1991. “Statistics Education: Improvements Are Badly Needed.” The American Statistician 45 (4): 342–43.

    ———. 1992. “Workshop on Statistical Education.” In Heeding the Call for Change: Suggestions for Curricular Action, edited by Lynn Arthur Steen, pp. 34–46. MAA Notes and Reports Series. Washington, DC: Mathematical Association of America.

    Nardi, Peter M. 2003. Doing Survey Research: A Guide to Quantitative Methods. Boston: Pearson Education.

    National Council of Teachers of Mathematics (NCTM). 2000. Principles and Standards for School Mathematics. Reston, VA: NCTM.

    National Highway Traffic Safety Administration (NHTSA). n.d. “Facts and Statistics.” Washington, DC: NHTSA. http://www.distraction.gov/stats-research-laws/facts-and-statistics.html

    ———. 2012. Traffic Safety Facts: Young Drivers Report the Highest Level of Phone Involvement in Crashes or Near-Crash Incidents. Technical Report No. DOT HS 811 611. Washington, DC: NHTSA, April. http://www.distraction.gov/download/811611.pdf

    Salant, Priscilla, and Don A. Dillman. 1994. How to Conduct Your Own Survey. New York: John Wiley and Sons.

    Snee, Ronald D. 1993. “What’s Missing in Statistical Education?” The American Statistician 47 (2): 149–54.

    MARLA A. SOLE, marla.sole@guttman.cuny.edu, is an assistant professor of mathematics at CUNY Guttman Community College in New York City. Her interests include STEM persistence, financial literacy, math modeling, and survey design and data analysis.