Contextualizing for Motivation

What we know thus far is that beginning computing students learn far less about design and programming than we might predict, and they fail their first course at a pretty high rate. We find that changing from textual to visual modalities does not consistently improve outcomes, which suggests that another variable may be at play. The previous section explored the possibility that differences in how visualizations are used may be one such variable. What other variables might we manipulate in order to improve student success and learning?

In 1999, Georgia Tech decided to require introductory computing of all undergraduate students. For the first few years, only one course satisfied the requirement. Overall, the pass rate in this course was 78%, which is quite good by Bennedsen and Caspersen’s analysis [Bennedsen and Caspersen 2007]. However, that number wasn’t so good once we started disaggregating it. The pass rate for students from the Liberal Arts, Architecture, and Management colleges was less than 50% [Tew et al. 2005a]. Women failed at nearly twice the rate of men. A course aimed at all students, but passed mostly by males in technical majors, highlights general problems in teaching computing.

In 2003, we started an experiment to teach students in those classes a different kind of introductory course, one contextualized around manipulating media [Forte and Guzdial 2004]. Students worked on essentially the same kinds of programming as in any other introductory computer science course. In fact, we worked hard to cover all the topics [Guzdial 2003] that were recommended by the then-current ACM and IEEE Computing Standards [IEEE-CS/ACM 2001]. However, all the textbook examples, sample code in lectures, and homework assignments had students manipulating digital media as the data in these programs. Students learned how to iterate through the elements of an array, for instance, by converting all the pixels in a picture to grayscale. Instead of concatenating strings, they concatenated sound buffers to do digital splicing. They learned to iterate through a subrange of an array by removing red-eye from a picture without disturbing any other red in the picture.
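To make the flavor of these exercises concrete, here is a minimal sketch of the grayscale loop in Python. It assumes a picture is simply a list of (red, green, blue) tuples, not the Picture and pixel objects provided by the actual Media Computation libraries; it illustrates the kind of iteration students practiced, not the course’s exact code.

    # A sketch of the grayscale exercise: visit every pixel and replace its
    # color with a simple average of the three channels. Here a "picture" is
    # assumed to be a plain list of (red, green, blue) tuples.
    def to_grayscale(pixels):
        gray = []
        for (red, green, blue) in pixels:        # iterate over each element
            level = (red + green + blue) // 3    # average as the gray level
            gray.append((level, level, level))
        return gray

    # A tiny two-pixel "picture":
    print(to_grayscale([(255, 0, 0), (10, 20, 30)]))   # [(85, 85, 85), (20, 20, 20)]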

The response was positive and dramatic. Students found the new course more relevant and compelling, particularly women, whose pass rate rose above men’s (though the difference was not statistically significant) [Rich et al. 2004]. The average pass rate over the next three years rose to 85%, and that holds even for students in the majors that had been passing at less than 50% per semester before [Tew et al. 2005a].

Sounds like a big success, but what do these papers really say? Do we know that the new approach is the only thing that changed? Maybe those colleges suddenly started accepting much smarter students. Maybe Georgia Tech hired a new, charismatic instructor who charmed the students into caring about the content. Social science researchers refer to these factors, which keep us from claiming what we might want to claim from a study, as threats to validity.[13] We can state, in defense of the value of our change, that the [Tew et al. 2005a] paper included semesters with different instructors and covered three years’ worth of results, so it’s unlikely that the students were suddenly much different.

Even if we felt confident concluding that Georgia Tech’s success was the result of introducing Media Computation, and that all other factors with respect to students and teaching remained the same, we should wonder what we can claim. Georgia Tech is a pretty good school. Smart students go there. They hire and retain good teachers. Could your school get better success in introductory computing by introducing Media Computation?

The first trial of Media Computation at a different kind of school was by Charles Fowler at Gainesville State College in Georgia, a two-year public college (not a four-year undergraduate and graduate institution). The results were reported in the same [Tew et al. 2005a] paper. Fowler also found dramatically improved success rates among his students, who ranged from computer science majors to nursing students. However, both the Georgia Tech and Gainesville students were predominantly white. Would this approach work with minority students?

At the University of Illinois-Chicago (UIC), Pat Troy and Bob Sloan introduced Media Computation into their “CS 0.5” class [Sloan and Troy 2008]. Their class was for students who wanted to major in computer science but had no background in programming; a CS 0.5 class is meant to get them ready for the first (“CS1”) course. Over multiple semesters, these students’ pass rate also rose. UIC has a much more ethnically diverse student population: a majority of its students belong to minority ethnic groups.

Are you now convinced that you should use Media Computation with your students? You might argue that these are still unusual cases. The studies at Georgia Tech and Gainesville involved mostly nonmajors (though Gainesville’s class included majors as well). Although Troy and Sloan are dealing with students who want to major in computer science, their class is not the normal introductory computer science course for undergraduate majors.

Beth Simon and her colleagues at the University of California at San Diego (UCSD) started using Media Computation two years ago as the main introductory course (“CS1”) for CS majors [Simon et al. 2010]. More students pass with the new course. What’s more, the Media Computation students are doing better in the second course, the one after Media Computation, than the students who had the traditional CS1.

Is that it? Is Media Computation a slam dunk? Should everyone use Media Computation? Although my publisher would like you to believe so [Guzdial and Ericson 2009a; Guzdial and Ericson 2009b], the research does not unambiguously bear it out.

First, I haven’t claimed that students learn the same amount with Media Computation. If anyone tells you that students learn the same amount with their approach compared to another, be very skeptical, because we currently do not have reliable and valid measures of introductory computing learning to make that claim. Allison Tew (also in [Tew et al. 2005a]) first tried to answer the question of whether students learn the same in different CS1s in 2005 [Tew et al. 2005b]. She developed two multiple-choice question tests in each of the CS1 languages she wanted to compare. She meant them to be isomorphic: a problem meant to evaluate a particular concept (say, iteration over an array) would be essentially the same across both tests and all languages. She used these tests before and after a second CS course (CS2) in order to measure how much difference there was in student learning between the different CS1s. Tew found that students did learn different things in their CS1 class (as measured by the start-of-CS2 test), but that those differences disappeared by the end of CS2. That’s a great finding, suggesting that the differences in CS1 weren’t all that critical to future success. But in later trials, she was never able to reproduce that result.

How could that be? One real possibility is that her tests weren’t exactly equivalent. Students might interpret them differently, or the tests might not measure exactly the kind of learning she aimed to test. For instance, maybe the answer to some of the multiple-choice questions could be guessed because the distractors (wrong answers) were so implausible that students could dismiss them without really knowing the right answer. A good test for measuring learning should be valid and reliable: it measures the right thing, and it’s interpreted the same way by all students all the time.

As of this writing we have no measure of CS1 learning that is language-independent, reliable, and valid. Tew is testing one now. But until one exists, it is not possible to determine for sure that students are learning the same things in different approaches. It’s great that students succeed more in Media Computation, and it’s great that UCSD students do better even in the second course, but we really can’t say for sure that students learn the same things.

Second, even if Georgia Tech, Gainesville, UIC, and UCSD were all able to show that students learned the same amount in the introductory course, what would all that prove? That the course will work for everyone? That it will be better than the “traditional” course, no matter how marvelous or successful the “traditional” course is? For every kind of student, no matter how ill-prepared or uninterested? No matter how bad the teacher is? That’s ridiculous, of course. We can always imagine something that could go wrong.

In general, curricular approaches offer us prescriptive models, not predictive theories. The Media Computation studies show us evidence that, for a variety of students and teachers, the success rate in the introductory course can be improved, but not that it inevitably will improve. That is the prescription: the approach can help, so it is worth trying. It is not a predictive theory, because it cannot predict improvement without accounting for many more variables that have not yet been tested in studies, and it cannot claim that failing to use Media Computation guarantees failure.
