Making the Tools Better by Shifting to Visual Programming

How do we make the tools better? One obvious possible answer is to move to a more visual notation. Ever since David Smith's icon-based programming language Pygmalion [Smith 1975], the theory has been that visual reasoning might be easier for students. There certainly have been a lot of studies showing that visualizations in general helped students in computing [Naps et al. 2003], but relatively few careful studies of visual programming itself.

Then Thomas Green and Marian Petre did a head-to-head comparison of comprehension in visual (dataflow-like) and textual programming languages [Green and Petre 1992]. They created programs in two visual languages that had been shown to work well in previous studies, and in a textual language that had also tested well. Subjects were shown a visual or textual program for a short time and then asked a question about it (e.g., shown input data and asked to determine the output). Understanding the graphical programs always took more time. It didn't matter how much experience the subjects had with visual or textual languages, or which kind of visual language was used: subjects comprehended the visual programs more slowly than the textual ones.

Green and Petre published several papers on variations of this study [Green et al. 1991; Green and Petre 1996], but the real test came when Tom Moher and his colleagues [Moher et al. 1993] stacked the deck in favor of visual languages. Tom and his graduate students were using a visual notation, Petri nets, to teach programming to high school students. He got a copy of Green and Petre's materials and created a version in which the only visual language was Petri nets. Then he reran the study with himself and his students as subjects. The surprising result: the textual programs were again more easily comprehended, under every condition.

Is our intuition about visual languages simply wrong? Does visualization actually reduce one's understanding of software? What about those studies that Naps et al. were talking about [Naps et al. 2003]? Were they wrong?

There is a standard method for comparing multiple studies, called a meta-study. Barbara Kitchenham describes this kind of procedure in Chapter 3 of this book. Chris Hundhausen, Sarah Douglas, and John Stasko did this type of analysis on studies of algorithm visualizations [Hundhausen et al. 2002]. They found that yes, there are a lot of studies showing significant benefits of algorithm visualizations for students. There are also a lot of studies with nonsignificant results. And some studies had significant results but didn't make it obvious how algorithm visualizations might be helping (Figure 7-1). Hundhausen and colleagues found that how the visualizations were used matters a lot. For example, using visualizations in lecture demonstrations had little impact on student learning, but having students build their own visualizations had a significant impact on those students' learning.

Figure 7-1. Summary of 24 studies in Hundhausen, Douglas, and Stasko paper [Hundhausen et al. 2002]

A few studies varied the use of visualization while holding all other variables constant (e.g., type of class, type of student, teacher, subject). Hundhausen and colleagues came away from their analysis of 24 studies suspecting that how visualization is used is what matters, but a suspicion drawn from a meta-study is not the same as a tested hypothesis. One thing we have learned from studies in education is that outcomes are genuinely hard to predict. Humans are not as predictable as a falling projectile or a chemical cocktail. We actually have to test our suspicions and hypotheses, sometimes repeatedly and under different conditions, until we are convinced of a finding.
