Individual or Environment

We now turn to the third question with which we started this chapter, which we can now formulate a bit more precisely. If you know that it isn’t trivial to determine software developer expertise, should you focus on tools and techniques instead? Imagine a policy maker in charge of reducing the number and severity of road traffic collisions, the leading cause of death worldwide among young people 10‒19 years old and the sixth leading preventable cause of death in the United States. Should she prioritize efforts to increase driving skill and driver awareness, or should she spend more money on environmental measures, such as improving road standards, lowering speed limits, and lobbying for more safety features in cars? Both, you might say, but then, within a limited budget, how much should go to each?

Research on road traffic collisions is, unfortunately, in a fortunate position: there is a tremendous amount of data worldwide available for analysis. It is therefore actually possible to make rational decisions on where to spend resources. Software engineering is not in a position to make such clear-cut decisions. We don’t yet have enough data, and our tasks are extremely diverse.

Skill or Safety in Software Engineering

Programming skill is becoming measurable. This means we may get a better grasp on both the task of programming and what it means to be an expert programmer. We’ve made progress on construct validity (that is, our measures consistently reflect aspects of programming skill and task difficulty), and we’re increasing our content and criterion validity (that is, we’re expanding our scientific grasp of the concepts of programming skill and task difficulty, and validating the constructs with regard to real-life programming success). This makes it possible to develop and improve our training programs for programmers.

Not so with software effort estimation. Forecasting how much effort a team or a project will spend on developing some part of a system is, in general, not something we understand well enough to reliably measure task difficulty or estimation skill, which also means that we don’t yet know how to train people to become better at it.

We do know, though, that facilitating the environment seems to help. A few examples are: deleting irrelevant information from requirements documents, avoiding distortions of thinking by discouraging tentative base estimates, and asking people to estimate ideal effort prior to estimating most likely effort (see also [Jørgensen 2005]). These measures are intended to alter the environment in which the judgment (i.e., estimation) process occurs, and the purpose is to counter known psychological biases in those who perform the judgment. Other environmental measures are group estimation, which is on average more accurate than single estimates, and using the appropriate process model: estimates associated with iterative development are often better than those associated with relay-style development [Moløkken-Østvold and Jørgensen 2005]. It is possible to develop tools and techniques that support all these environmental measures.
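As the last sentence suggests, some of these measures lend themselves to simple tool support. Below is a minimal sketch of what support for two of them might look like: combining group estimates, and recording ideal effort before most-likely effort. The median combination rule and all the names here are illustrative assumptions; the studies cited above compare group versus single estimates, not particular combination rules.

```python
from statistics import median

def combine_group_estimates(estimates_hours):
    """Combine individual effort estimates into a group estimate.

    The median is used here so that one wildly optimistic or
    pessimistic estimator cannot dominate. (Illustrative choice;
    the cited studies do not prescribe a combination rule.)
    """
    if not estimates_hours:
        raise ValueError("need at least one estimate")
    return median(estimates_hours)

def two_step_estimate(ideal_hours, most_likely_hours):
    """Ask for ideal effort *before* most-likely effort, per the
    debiasing measure described in the text, and keep both."""
    return {
        "ideal": ideal_hours,
        "most_likely": most_likely_hours,
        "contingency": most_likely_hours - ideal_hours,
    }

# Three estimators, one overly optimistic:
print(combine_group_estimates([40.0, 55.0, 12.0]))  # -> 40.0
```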

Pair programming (examined in Chapter 17 by Laurie Williams, and to a lesser extent in Chapter 18 by Jason Cohen) is an example of an environmental measure taken to improve code production. Rather than improving the programmers themselves, one relies on social processes that are meant to enhance the productivity of both the individual programmer and the team. It’s important to know under which circumstances pairing people up is beneficial. For example, pairing up seems to be most beneficial for novices working on complex programming tasks [Arisholm et al. 2007]; see also the meta-analysis in [Hannay et al. 2009] or [Dybå et al. 2007]. A crude encoding of this finding as a decision rule appears below.
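Read as a decision aid, the moderation finding is almost a one-liner. The sketch below encodes it directly; the category labels and the blunt rule are illustrative simplifications, not a validated model from [Arisholm et al. 2007].

```python
def expected_pairing_benefit(expertise, task_complexity):
    """Crude reading of the finding that pairing helps most for
    novices on complex tasks [Arisholm et al. 2007]. Labels and
    the rule itself are illustrative simplifications.
    """
    if expertise not in {"novice", "intermediate", "senior"}:
        raise ValueError(f"unknown expertise level: {expertise!r}")
    if task_complexity not in {"simple", "complex"}:
        raise ValueError(f"unknown task complexity: {task_complexity!r}")
    if expertise == "novice" and task_complexity == "complex":
        return "largest expected benefit"
    return "smaller or less clear benefit"

print(expected_pairing_benefit("novice", "complex"))
```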

Collaboration

Teamwork and collaboration are in vogue now. For example, it is generally seen as beneficial to involve and collaborate with various stakeholders throughout the stages of the software development process. Introducing more explicit collaboration is thus meant to be a major environmental facilitator.

It is worth looking into what, exactly, successful collaboration is. So instead of (or in addition to) looking at, say, personality, one should look at how people collaborate. There is extensive research into collaboration, team composition, and group processes. In fact, there is almost too much! I once tried to wrap my brain around the basic group processes relevant to pair programming (see Figure 6-4), but it soon became apparent that unless you know what sort of collaboration is actually going on, it is impossible to decide which theory to apply. Quite often researchers apply some theory or another as a post hoc explanation for their observations [Hannay et al. 2007]. And quite often, this is easy: there is always some theory that fits! For example, social facilitation might explain why a team performs well, while social inhibition might explain why a team doesn’t perform well. So you can use either theory to explain whatever outcome you observe. This is, of course, possible only if you don’t really know what’s going on, because if you did, you’d probably see that the underlying mechanisms of at least one of the theories do not fit what’s going on.

Figure 6-4. Too much theory! (If you don’t know what’s really going on.)

To find out what kind of collaboration goes on in pair programming, we listened to audio recordings of 43 professional pair programmers solving a pair programming task, and we classified their mode of collaboration according to their verbal interaction [Walle and Hannay 2009]. The classification scheme we used was developed in a bootstrapping manner, both top-down from existing schemes and bottom-up from the specific collaboration that was apparent in the audio recordings. You can see the resulting scheme in Figure 6-5.

Figure 6-5. Classification scheme for verbal collaboration

The scheme has two levels. The first level distinguishes according to the main focus of the discourse—for example, whether the pair is discussing the task description, trying to comprehend the code, writing code together, or talking about something else, like last night’s soccer game. The second level analyzes so-called interaction sequences [Hogan et al. 2000]. The central elements here are the interaction patterns. For example, a verbal passage is classified as Elaborative if both peers contribute substantive statements to the discussion and the speakers make multiple contributions that build on or clarify one another’s prior statements.
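To make the two-level structure concrete, here is one way such a scheme could be represented in code. Only the first-level categories named above and the gist of the Elaborative pattern come from the text; the class names, fields, and thresholds are assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass
from enum import Enum

class DiscourseFocus(Enum):
    """First level: the main focus of the discourse (only the
    categories mentioned in the text; the full scheme in
    Figure 6-5 has more)."""
    TASK_DESCRIPTION = "task description"
    CODE_COMPREHENSION = "code comprehension"
    CODE_WRITING = "code writing"
    OFF_TASK = "off task"

@dataclass
class Utterance:
    speaker: str
    substantive: bool      # adds content to the discussion
    builds_on_prior: bool  # builds on or clarifies a prior statement

def is_elaborative(sequence):
    """Second level: an interaction sequence counts as Elaborative if
    both peers contribute substantive statements and speakers make
    multiple contributions building on one another's prior statements.
    The exact thresholds here are assumptions."""
    substantive_speakers = {u.speaker for u in sequence if u.substantive}
    building = [u for u in sequence if u.builds_on_prior]
    return len(substantive_speakers) >= 2 and len(building) >= 2

# Example: both peers contribute and build on each other's statements.
seq = [
    Utterance("A", substantive=True, builds_on_prior=False),
    Utterance("B", substantive=True, builds_on_prior=True),
    Utterance("A", substantive=True, builds_on_prior=True),
]
print(is_elaborative(seq))  # -> True
```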

One would think that the presence of elaborative verbal collaboration would be beneficial for performance. However, we didn’t find evidence for this in our data. Tentative results are that spending more time discussing the task description leads to less time spent overall in solving the task. It is too early to reach a conclusion, though, so you should not take this as evidence, but rather as input to your reflective process as a practitioner. Efforts to understand the collaboration that goes on in software effort estimation are also ongoing [Børte and Nerland 2010], [Jørgensen 2004].

Personality Again

Personality inevitably enters the scene when it comes to collaboration. For example, ethnographic studies focusing on personality issues and disruption in software engineering team collaboration found that disruption is bad, but that lack of debate (debate being a mild form of disruption) is worse [Karn and Cowling 2006], [Karn and Cowling 2005]. It is argued that pairs or teams whose personalities are too alike will lack debate. This finds empirical confirmation in [Williams et al. 2006] and [Walle and Hannay 2009]. In particular, differences in Extraversion have the largest effect: pairs whose peers have different levels of Extraversion collaborate more intensely (that is, engage in more discussion) than pairs with more similar levels.

A Broader View of Intelligence

People who score high on IQ tests are not necessarily good at planning, or at prioritizing when to plan and when to move forward and take action [Sternberg 2005]. Planning is important: remember our tentative finding that the quickest pair programmers were those who spent relatively more time understanding the task description, which can be taken as spending more time planning. One might further argue that “classical” intelligence tests, with their emphasis on processing speed, show merely that high scorers are good at solving IQ test items, but fail to measure their ability to find the best solution for actual real-life needs [Schweizer 2005].

There has been concern that the content validity of the traditional intelligence constructs is not adequate. Based on input from cognitive psychologists specializing in intelligence, Robert Sternberg and his colleagues found that intelligence should include the capacity to learn from experience, the ability to use metacognitive processes (i.e., to plan) to enhance learning, and the ability to adapt to the environment. Also, what is considered intelligent in one culture might be seen as silly in another [Sternberg 2005]. The “classic” constructs of intelligence, then, may not be extensive enough; see Figure 6-6.

Figure 6-6. Triarchic Theory of Intelligence

According to Sternberg, intelligence manifests itself strongly in the ability to be street-wise and to exhibit adaptive behavior. Adaptive behavior consists of practical problem-solving ability, verbal ability, and social competence. The latter includes things such as accepting others for what they are, admitting mistakes, and displaying interest in the world at large. Invaluable stuff for collaboration!

Adaptive behavior is also a central tenet of Gerd Gigerenzer and the Adaptive Behavior and Cognition research group. When faced with complex tasks, Western science and engineering disciplines usually teach us to analyze, to get a full overview of all relevant factors, and then to take appropriate action. However, many tasks, especially ill-defined ones such as software effort estimation, are beyond our capability to analyze thoroughly enough to ever reach an overview of all the relevant factors. By the sheer effort we put into it, we might think we’re cataloging a lot of relevant factors, but we’re probably missing more factors than we’re including. Thus, the argument goes, such an analytical approach is bound to fall short of the target. Humans have adapted to and survived complex situations not by thorough analysis, but by eliminating irrelevant factors and focusing on a few critical ones [Gigerenzer 2007], [Gigerenzer and Todd 1999]. Adaptive behavior, then, means acknowledging your shortcomings and choosing your strategy accordingly. This is relevant to software development and software effort estimation, because it is impossible to specify and analyze everything beforehand. Approaches that focus on the few most critical factors for software effort estimation include analogical reasoning and top-down estimation [Li et al. 2007], [Jørgensen 2004], [Hertwig et al. 1999].
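To illustrate, here is a minimal sketch of analogy-based estimation in this spirit: characterize a new task by a few critical factors and borrow the actual effort of the most similar past projects. The factors, the data, the distance measure, and the choice of k are all illustrative assumptions, not the specific methods of [Li et al. 2007] or [Jørgensen 2004].

```python
import math

# Hypothetical history: (a few critical factors, actual effort in
# person-hours). Which few factors are the *right* ones is exactly
# the hard judgment the text is about; these two are placeholders.
HISTORY = [
    ({"kloc": 12.0, "team": 4.0}, 900.0),
    ({"kloc": 30.0, "team": 6.0}, 2600.0),
    ({"kloc": 8.0,  "team": 3.0}, 550.0),
    ({"kloc": 22.0, "team": 5.0}, 1800.0),
]

def distance(a, b):
    """Euclidean distance over the shared factors (no scaling, for brevity)."""
    return math.sqrt(sum((a[key] - b[key]) ** 2 for key in a))

def estimate_by_analogy(new_project, k=2):
    """Estimate as the mean actual effort of the k most similar past
    projects (a simple form of analogical reasoning)."""
    ranked = sorted(HISTORY, key=lambda record: distance(new_project, record[0]))
    return sum(effort for _, effort in ranked[:k]) / k

print(estimate_by_analogy({"kloc": 15.0, "team": 4.0}))  # -> 725.0
```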
