Thomas Zittel
Goethe-University Frankfurt
The concept of electronic democracy implies the use of electronic media of communication to facilitate political participation and thus to enhance democracy (→ Participatory Communication). It should be kept distinct from ‘electronic participation.’ The latter focuses on the communication strategies of individual citizens while the former emphasizes the media choices of political authorities and related governmental policies. Research on electronic democracy is built upon a number of core research questions asking about the specific uses of electronic media in democratic contexts, their prerequisites, and their larger ramifications for the democratic process (Coleman & Blumler 2008; Chadwick & Howard 2010).
Research on the specific uses of electronic media stresses three basic models of electronic democracy. A first model perceives new electronic media as a cost-effective means for direct decision-making via electronic voting. From this perspective, electronic media provide citizens with the necessary information on the issues at stake. Individual choices could then be made at the push of a button in private homes, transferred electronically, and aggregated in a central computer controlled by voting authorities. A second model portrays electronic media primarily as a means to better link representatives with their constituents and to reform the electoral process. According to this view, electronic media enable political representatives to augment the transparency of the parliamentary process, to solicit public opinion on particular policy issues, to implement new forms of citizen consultations, and to allow citizens to elect their representatives via electronic media.
A third model pictures electronic media primarily as a means to strengthen public dialogue prior to political decision-making. From this perspective, the functioning of the formal decision-making process – be it representative or direct – is dependent upon informed and enlightened citizens and their opinions. According to proponents of this model of electronic democracy, horizontal electronically mediated communication could serve as a means to organize reasoned public dialogues prior to governmental decisions.
Dramatic changes in media technology are considered a crucial prerequisite for electronic democracy (→ Communication Technology and Democracy). Generally, this concerns a process that spans a long period of time, encompassing a multitude of technological developments from the first telegraph cable connecting both sides of the Atlantic Ocean in 1858 through to the diffusion of the Internet in the 1990s. However, with regard to electronic democracy, digital media and especially the Internet are considered to be of key importance. The Internet opens up new opportunities for mass communication and also decreases the costs of mass communication. Most importantly, from this perspective, the Internet determines new cultural contexts and related demands that put pressures on governments to adapt and to take advantage of new technological opportunities to communicate and to interact with citizens (Castells 2001).
A fair number of students of electronic democracy contradict notions of crude technological determinism. Instead, they emphasize particular social or institutional factors serving as necessary prerequisites for electronic democracy. For example, levels of economic development and social changes in established democracies such as the ‘cognitive’ and ‘postmaterialist’ revolutions are considered to spur new participatory needs and thus frame and influence developments in electronic communication and technology (Norris 2001; → Digital Divide). Also, political institutions such as electoral systems or the types of government are considered to constrain political authorities in their strategic media choices and thus their approaches towards electronic democracy (Margolis & Resnick 2000).
The larger ramifications of electronic democracy for the democratic process are a final major issue in academic and public debate. Cyber-optimists perceive electronic democracy as a magic bullet that will right all wrongs in current democracies, for example by solving current crises in political participation, by allowing for more political pluralism, and by facilitating more reasoned political debate. In contrast, cyber-skeptics emphasize technological risks and malfunctions, and especially the risk of political authorities abusing electronic media to jeopardize civil liberties (Morozov 2012).
See also: Communication Technology and Democracy Digital Divide Information Society Participatory Communication
Rebecca B. Rubin
Kent State University
Educational communication is an umbrella term that encompasses all speaking, listening, and relational constructs and concepts that relate to learning. In the past, researchers have been interested in characteristics of teachers that enhance or hinder learning; student characteristics that increase or inhibit learning; teaching strategies that augment learning; how best to give criticism of student writing and speeches; how best to evaluate student work; how public speaking is best taught; and what should be taught in speech communication and media curricula. More recent work has expanded to the effects of media on children, child development processes, and the use of pedagogical methods and newer technologies to facilitate classroom or distance education.
The speech communication discipline began as a group of teachers interested in how best to instruct students in the basics of public speaking. Interest in how to teach new and different facets of the field emerged on a regular basis in the academic journals as interest grew in public speaking, rhetoric, persuasion, and debate, and later in group, interpersonal, nonverbal, intercultural, health, organizational, and family communication. Scholarly concern about K-12, undergraduate, and graduate curricula, as well as the effectiveness of the basic college communication course and speech across the curriculum, also abounded.
Likewise, early interest in journalism education later extended to radio, television, electronic media, → advertising, → public relations, and new technologies. This latter area expanded over the past few decades into using television as an instructional device and the effects of media content on children (→ Educational Media; Educational Media Content; Instructional Television). More recent interest has been in the use of new technology in the classroom or in place of a classroom. Paralleling these interests were studies focused on how teachers can communicate better in the classroom and contribute scholarship in the education area (→ Pedagogy, Communication in).
Most undergraduate and graduate programs offer classes in how best to teach communication in K-12, college undergraduate, graduate, speaking-across-the-curriculum, and basic course settings. Recent advances in technology have seen past interest in the use of television in instruction move to interests in distance learning and computer-assisted instructional technology in the classroom. Although lectures enhanced by visual technology have consistently produced greater learning, results are mixed for the superiority of traditional vs web-based vs web-assisted instruction.
Whereas ‘communication education’ focused on how to teach speech and related communication classes, ‘instructional communication,’ a broader term, concentrated on how teachers can better communicate with students in the classroom, no matter what the subject.
Research in the rhetorical approach found, for instance, that teachers who are clear, make their content relevant, and structure their messages achieve greater understanding. Several different types of questions asked by teachers lead to different types of assessments. For instance, recall questions assess memory, whereas summary questions assess ability to synthesize. Teachers can punish students (i.e., use coercive strategies) for misbehaving, reward them for behaving in acceptable ways, enact legitimate power in classroom management, use referent power to enhance student identification with them, and increase expert power through increased credibility and authority. Teachers use various classroom-management techniques to take charge of the learning environment, and reward-based techniques tend to work far better than punishment-based ones.
Research in the relational tradition found that teacher verbal (e.g., use of ‘we,’ more self-disclosure, informal names, etc.) and nonverbal (e.g., use of smiling, head nods, eye contact, touch, etc.) immediacy tends to result in greater student motivation and liking for the teacher and subject taught. Recent research has even examined effects of teacher self-disclosure on → Facebook on student learning. Teacher humor must also be seen as appropriate in order to be effective at motivating students to learn. These and other teacher behaviors can lead to feelings of significance, value, and confirmation in students, by which students feel empowered to learn just through teacher–student interaction.
As mentioned earlier, one of the main goals of communication education has been to increase communication skills. Educators have attempted to identify important skills – message construction, persuading, informing, relating – that can be enhanced through instruction and that can be reliably assessed (→ Student Communication Competence). Through this feedback, students can later reflect upon and critique their own communication outside the classroom.
Two major lines of research have examined the impact of educational media on learning. First, interest in all forms of educational media has led to examination of the programs shown and how learning occurs from this content (e.g., Sesame Street; → Educational Media Content). In addition, media literacy programs have been created in school systems to teach children (K-12) and college students how best to critique media messages and understand the commercial nature of the media (→ Media Literacy). The second line of research has examined the use of media in education. Today, PowerPoint, interactive whiteboards, LCD projectors, and other software and hardware technologies are commonplace in instruction, and concern focuses on whether the technology enhances or diverts learning efforts. Often, social networking sites substitute for face-to-face interpersonal interaction.
Computer technology at all levels of education has changed the nature of the communication classroom. Journalism students no longer pound out news stories on manual typewriters, and speech students are expected to enhance their presentations with electronic media products. Furthermore, when teachers move from the role of information presenter to that of guide, coach, motivator, or facilitator, the nature of communication will change, especially when highly evolved interactive multimedia technology is involved. Effectiveness of distance education most likely will emerge as a related topic, again as the variety of interactive channels increases for interaction with students.
Other trends have been to examine the interactive teaching/learning environment and the use of teams in the classroom (Shelton et al. 1999). Much of the research in the past 20–25 years has actually examined the teacher–student communication environment, identifying communication behaviors that can enhance learning. This interaction has become more mediated through use of email, bulletin boards, chatrooms, blogs, Facebook, Skype, tweets, and other out-of-class opportunities for interaction.
See also: Advertising Classroom Student–Teacher Interaction Communication Apprehension: Intervention Techniques Educational Media Educational Media Content Facebook Instructional Television Media Literacy Pedagogy, Communication in Public Relations Student Communication Competence Teacher Communication Style Teacher Influence and Persuasion
Shalom M. Fisch
MediaKidz Research & Consulting
The idea of using mass media for educational purposes is by no means a new one. Books, songs, games: all of these are forms of media that have served as effective educational tools for centuries. There is a tremendous range of processes through which educational media are produced. At one end of the spectrum, some producers create media based entirely on their own creative instincts. At the other end of the spectrum lies the model used by production companies such as Sesame Workshop (formerly Children’s Television Workshop) where production staff, educational content experts, and researchers collaborate closely at all stages of production (→ Educational Media Content).
Perhaps the most prominent – and certainly the most extensively researched – example of an educationally effective television series is Sesame Street. Studies demonstrate that extended viewing of this series produces significant immediate effects on a wide range of academic skills among preschool children (e.g., knowledge of the alphabet, vocabulary size, letter–word knowledge, math skills, sorting and classification). Comparable effects have been found for international co-productions of Sesame Street in countries such as Mexico, Turkey, Portugal, and Russia. In addition, several longitudinal studies have found long-term effects of the series as well. In the longest-term study to date, students who had watched more educational television – and Sesame Street in particular – as preschoolers still earned significantly higher grades in English, mathematics, and science in junior high or high school. They also used books more often, showed higher academic self-esteem, and placed a higher value on academic performance (→ Educational Television, Children’s Responses to).
Because interactive media are newer than educational television, less research is currently available to evaluate their impact on children’s learning. Students’ use of the Internet as an informational resource opens the opportunity for unprecedented access to material on a tremendous range of topics. Yet students’ reliance on online information is also a matter of some concern, because much of the information posted on the Internet is not subject to the same sort of review or validation as material published in a book or newspaper. As a result, information found online may be subject to blatant inaccuracy or bias that children fail to recognize. Although the area has not yet been researched extensively, several empirical studies have shown that well-designed computer games and applications can serve as useful tools for both formal and informal education (→ Computer Games and Child Development).
In contrast to the extensive empirical research literature, there have been far fewer attempts to construct theoretical models of the cognitive processing responsible for such effects. Fisch’s (2004) capacity model predicts that comprehension of educational content will be stronger not only when the resource demands for processing the educational content are low, but also when the resource demands for processing the narrative content are low. Mayer and Moreno’s (2003) cognitive theory of multimedia learning (CTML) is likewise grounded in the integration of information and the limitations of working memory. According to this model, users learning from multimedia must attend to and acquire information from multiple sensory modalities (visual, auditory, tactile, etc.), and their ability to do so is constrained by the limitations of working memory.
Today, it is increasingly common for projects to span more than one media platform, so that (for example) an educational television series might be accompanied by a related website, hands-on outreach materials, or even a museum exhibit or live show. This raises questions as to how learning from combined use of related, multiple-media platforms (known as cross-platform learning) compares to learning from a single medium. Perhaps the most important impact of research lies in its ability to inform the production of new programming. By identifying what ‘works’ research can help producers build on the most effective techniques as they create new material.
See also: Computer Games and Child Development Educational Media Content Educational Television, Children’s Responses to Information Literacy Information Processing Infotainment Instructional Television
Jennings Bryant
University of Alabama
Educational media content refers to mediated messages designed to teach or provide opportunities for learning. The nature of mediated education varies greatly, ranging from formal curriculum-based message systems designed for classroom consumption to informal or pro-social media messages with the potential for producing incidental learning or pro-social change (→ Educational Media).
In historical perspective, education has been an important goal and function of print media from their earliest formulations. Whether the words and other significant instructional symbols were carved into clay or written on papyrus or vellum, many of the earliest extant media message systems were educational in nature, with contents ranging from an ancient Egyptian pharaoh’s instructional manual on effective communication to the canons of early religious communities (e.g., Bible, Torah, Koran), which were employed both to help convert unbelievers and to instruct the converted. Even today, print media are an invaluable portion of the lesson plans of teachers from primary school through postgraduate education. The story is similar for the place of education and instruction in the history of film, although the transition from the celluloid medium to diverse forms of electronic educational media is more complete than for its print cousins. Educational radio began in 1922, when the British Broadcasting Company (later the British Broadcasting Corporation; i.e., the → BBC) was founded to use wireless communication to educate, inform, and entertain all British citizens, free from political interference and commercial pressure. The concept of educational television programming was developed mainly in the middle of the twentieth century in the US.
Major developments in both computing and telecommunications fused to inaugurate the information age, producing digital media that would truly revolutionize the role and form of educational media (→ Digital Media, History of). The introduction of educational videos and DVDs, the maturing of distance education, and the explosion of video and computer games, as well as the remarkable diffusion and adoption of the Internet as a vehicle for teaching/learning, have created an educational media landscape that is undergoing remarkably rapid evolution and reformulation.
It is difficult to examine educational television, especially children’s educational programming, without focusing on the US. The programming of the Children’s Television Workshop (CTW, now Sesame Workshop) revolutionized children’s educational television, in part because CTW developed an innovative program-development model that brought together producers, educational advisors, and researchers to create innovative and effective educational programming. The CTW team programmed for the developmental level of their programs’ targeted audiences, and they harnessed the formal features of the television medium to meet specific instructional goals, while also addressing critical societal needs. Such needs encompassed getting young children ready for school (e.g., Sesame Street), improving the reading skills of school-age children (e.g., The Electric Company), and guiding girls toward career interests in science (e.g., 3–2–1 Contact).
Many of the most popular children’s educational programs in other countries are co-productions with US agencies and corporations like PBS Kids, Sesame Workshop, Nickelodeon, and Disney. Moreover, other successful educational television-program production and distribution houses with a worldwide reach, such as the Discovery Channel and National Geographic Television, are based in the US. These caveats notwithstanding, most countries with any sort of public-service broadcasting initiative have developed educational television programs (→ Public Broadcasting Systems). More general examples of advancements in children’s educational television abound worldwide. For example, in 1997 the German public broadcasting stations founded the KinderKanal (or KiKa – the children’s channel), a channel devoted exclusively to programs for children, including educational programs.
The twenty-first century has witnessed an explosion in electronic educational media, including media with quality production values that target preschoolers and are designed for home use. The use of such products by young children typically contravenes the American Academy of Pediatrics’ recommendations that babies under age 2 have no screen time whatsoever, and that other preschoolers have no more than two hours per day of screen time. As the twenty-first century unfolds, rarely is it valid to consider one educational medium in isolation, because convergence has produced an amalgamation of educational media message systems of all types, typically anchored on the backbone of the Internet.
See also: BBC Digital Media, History of Educational Communication Educational Media Educational Television, Children’s Responses to Federal Communications Commission (FCC) Learning and Communication Media Use and Child Development Public Broadcasting System Video Games
Jennings Bryant
University of Alabama
Wes Fondren
Coastal Carolina University
One of the most immediate ways children can respond to educational programming is simply by choosing to watch a particular program rather than all others available. A question that has arisen is to what degree children are deliberately selecting what they attend to when watching television. Attention to programming is another research stream. Not surprisingly, levels of activity and passivity appear to depend on the child’s age and developmental status. Young children, such as preschoolers, tend to focus their attention on shows that contain frequent changes in visual and auditory stimuli in brief segments presented at a rapid pace. As children age, they attend to slower-paced programming that is more plot driven than stimulus driven. This shift is believed to reflect the increasing cognitive sophistication of older children, which allows for more goal-oriented viewing (→ Attending to the Mass Media).
A key factor in comprehension is the level of prior salient knowledge the child brings to the content. Comprehension is not solely a function of age, but also of environmental factors that impact development (e.g., educational emphasis in the home, school curricula). Similarly, as children develop cognitively there is an increased ability to make inferences. This ability can help children make sense of situations where necessary information may not be explicit. Further, over time learned behavior becomes automatic, requiring less conscious effort, and allowing more mental energy and attention to be spent on → information processing and assimilation (automaticity).
Numerous studies of children who view educational television have revealed that they show increased abilities in reading, writing, mathematics, and science, as well as greater knowledge of current events (Fisch 2002). Frequent goals of educational programs have been to present positive social lessons and promote pro-social attitudes and behavior in children. Research has shown that pro-social benefits for children often increased when educational messages were accompanied by similar activities separate from television viewing. A related goal often is to reduce antisocial attitudes and behaviors.
See also: Attending to the Mass Media Educational Communication Educational Media Educational Media Content Information Processing Media Use and Child Development Parental Mediation Strategies Selective Perception and Selective Retention Selective Exposure
Helen Margetts
University of Oxford
E-government may be defined as the use by government of information and communication technologies, internally and to interact with citizens, firms, nongovernmental organizations, and other governments. E-government in practice, therefore, consists of both complex networks of information systems within government organizations and a huge range of websites with which citizens can communicate, interact, and transact. The term ‘e-government’ only came into common usage in the 1990s as societal use of the Internet became widespread. While earlier technologies were largely internally facing, the Internet for the first time provided government with the possibility to interact electronically with individuals and organizations outside government (→ E-Democracy). With the Internet, the possibilities for cost reduction increased, as did the potential for new policy ‘windows’ of innovation. Digital technologies open up potential for innovation in the use of all four of the ‘tools’ of government policy: nodality (the capacity to collect and disseminate information), authority, treasure (money, or other exchangeable benefits), and organizational capacity.
Until the 2000s, academic visions of e-government tended to range from the highly utopian to the severely dystopian, with a lack of sustained empirical research filling the middle of the spectrum. ‘Hyper-modernists’ have argued that as the Internet and associated technologies become ubiquitous, government will become more and more efficient and therefore smaller, until eventually governmental organizations themselves will become increasingly irrelevant. Toffler (1990) argued that the decentralization afforded by information technology would inevitably lead to political decentralization as well, so that eventually bureaucracy itself would become irrelevant. While also believing in transformation for government through technology, ‘anti-modernists’ concentrated on the negative effects, believing that e-government would be more powerful, and more intrusive in the lives of citizens, than traditional bureaucracy and lead to a ‘computer state’ or a ‘control state.’
Only in the 2000s did a sustained body of research begin to develop, however, and only a minority of it in communications. The study of e-government has become heavily populated by management consultancies, IT corporations, and international organizations, which have produced a number of reports and rankings of countries in terms of their e-government development (United Nations 2012). Neither the wildest dreams of the hyper-modernists nor the worst nightmares of the anti-modernists appear to have materialized. The use of the Internet varies widely across societies and also across governments, but use of information and communication technologies has not in any deterministic way caused liberal democratic governments to become less liberal or less democratic. Neither have authoritarian governments necessarily become more authoritarian, although the nature of their authoritarian techniques has changed. But in all these countries the new capacity for their subjects to develop social networks outside their country and to be aware of developments in the rest of the world can outweigh the increased control that technologies internal to government permit (→ China: Media System).
As for the hyper-modernists, their visions are challenged by the trend for governments in most countries to lag behind commercial organizations, and indeed society in general, in capitalizing on the potential benefits of Internet technologies. Others have identified a number of cultural barriers to e-government development, both from the supply side (in terms of civil servants being resistant to new electronic channels) and the demand side (in terms of citizens being reluctant to interact with government in new ways). However, the potential for e-government to look different from traditional notions of government remains, with the opening up of a wide range of new options for policymakers. Most of these potential applications of e-government have a key distinguishing feature: Governments can identify and treat groups of citizens differently according to their circumstances and need. For nodality, information can be targeted at specific groups, through websites used by specific age groups or group-targeted emails. Treasure, in terms of social welfare benefits or tax credits, for example, can be easily means-tested according to other financial information held by government. Authority can also be targeted, through ‘fast-track’ border control systems, for example. Even physical organization can be group-targeted, e.g., through barriers that respond to transponders fitted to police and emergency vehicles but not to normal cars, as used in city centers in several parts of the world. Furthermore, as Internet-based technologies have developed toward Web 2.0 applications, where users generate content through recommender and reputation systems, blogs, wikis, social networking sites, and user feedback systems, governments too have to innovate if they want to take advantage of applications like these.
See also: Censorship China: Media System E-Democracy Exposure to the Internet Information Society Social Media
Daniel O’Keefe
Northwestern University
The elaboration likelihood model (ELM) of → persuasion suggests that important variations in the nature of persuasion are a function of the likelihood that receivers will engage in elaboration of (that is, thinking about) information relevant to the persuasive issue. Depending on the degree of elaboration, two different kinds of persuasion process can be engaged. These two persuasion processes are called the “central route” and the “peripheral route” (Petty & Cacioppo 1986; → Media Effects).
The central route represents the persuasion processes involved when elaboration is relatively high. Where persuasion is achieved through the central route, it commonly comes about through extensive issue-relevant thinking: careful examination of the information contained in the message, close scrutiny of the message’s arguments, and so on. The peripheral route represents the persuasion processes involved when elaboration is relatively low. Where persuasion is achieved through peripheral routes, it commonly comes about because the receiver employs some simple decision rule (some heuristic principle) to evaluate the advocated position. For example, receivers might be guided by whether they find the communicator credible (→ Information Processing).
The amount of elaboration in a given situation (and hence which route is activated) is influenced by a number of factors, which can be classified broadly as influencing either elaboration motivation or elaboration ability. Elaboration motivation can be influenced by the relevance of the topic (greater personal relevance leads to greater elaboration motivation) and by the receiver’s level of ‘need for cognition,’ a personality characteristic reflecting the tendency to enjoy thinking (→ Personality and Exposure to Communication). Elaboration ability can be influenced by the presence of distraction in the persuasive setting or the amount of relevant background knowledge.
Because central-route and peripheral-route persuasion have different underlying processes, the factors determining persuasive success correspondingly differ. In central-route persuasion, persuasive effects depend upon the predominant valence (positive or negative) of the receiver’s issue-relevant thoughts. To the extent that the receiver is led to have predominantly favorable thoughts about the advocated position, the message will presumably be relatively successful. The predominant valence of elaboration is influenced by whether the message’s advocated position is pro-attitudinal or counter-attitudinal for the receiver (everything else being equal, pro-attitudinal messages will likely evoke predominantly favorable thoughts, counter-attitudinal messages predominantly unfavorable thoughts) and by the strength of the message’s arguments (better-quality arguments lead to more positive thoughts).
By contrast, in peripheral-route persuasion, receivers use heuristic principles, simple decision procedures activated by peripheral cues. For example, in the credibility heuristic, rather than carefully considering the message’s arguments, receivers simply rely on the communicator’s apparent expertise as a guide to what to believe. Other heuristics are based on the receiver’s liking for the communicator and on the reactions of others to the message. As elaboration increases, the influence of such heuristics diminishes – but where receivers are unable or unmotivated to engage in message scrutiny, these shortcuts are relied upon.
The ELM emphasizes that any given variable can influence persuasion in three ways. It might affect the degree of elaboration (and thus influence the degree to which central-route or peripheral-route processes are engaged), it might serve as a peripheral cue (and so influence persuasive outcomes when peripheral-route persuasion is occurring), or it might influence the valence of elaboration (and so influence persuasive outcomes when central-route persuasion is occurring). For example, credibility might activate the credibility heuristic or it might influence the amount of elaboration (as when receivers decide that the communicator’s expertise makes the message worth attending to closely). Because variables can play different roles in persuasion, a variable might have very different effects on persuasion from one situation to the next.
See also: Attitude–Behavior Consistency Attitudes Information Processing Involvement with Media Content Media Effects Personality and Exposure to Communication Persuasion
Holli A. Semetko
Emory University
Election campaigns are among the most important events in the lives of democracies and societies in transition. Campaigns often constitute the high points in public debate about political issues (→ Political Communication). Election campaign communication is shaped by different national, cultural, and regional contexts, party and media systems, candidate characteristics, and regulatory environments (→ Political Communication Systems). The balance of party and media forces in shaping the news agenda has been forever changed by the increasing role played by citizens and interest groups in generating messages and news about parties, leaders, and issues through a variety of traditional and new online media, and the latest popular social media platforms.
As parties and candidates focus on staying ‘on message’ in the increasingly complicated campaign environment, venues for → strategic communication abound (Norris et al. 1999). The transformation of campaign communication in the traditional news media over the past few decades reveals a greater tendency toward personalized reporting on the top party leaders or candidates, greater emphasis on campaign rhetoric as opposed to information about party policies, and increasingly negative as opposed to neutral or favorable reporting on the candidates in the campaign (→ Political Persuasion). Published → election surveys and forecasts, often commissioned by the media, have given high prominence to the question of who is ahead and who is trailing behind. While negative news may diminish turnout in some elections in the US (Patterson 2002), in other national contexts voters have been found to be both “cynical and engaged” (de Vreese & Semetko 2006; → Political Efficacy).
Election campaigns often serve as a ‘laboratory’ for developing and testing media effects models (→ Media Effects). Several key concepts of media effects research, such as the selectivity principle and the opinion leader concept, originate in election campaign research (→ Selective Exposure; Selective Perception and Selective Retention; Opinion Leader; Two-Step Flow of Communication; Media Effects, History of). The concept of agenda-setting, one of the most widely used in contemporary communication research, has been advanced in election campaign studies (→ Agenda-Setting Effects). Framing refers to the context in which an issue or problem is discussed, a context that provides a certain evaluation or interpretation (→ Framing of the News). Framing effects occur when the important attributes of a message lead audiences to make judgments about a problem or issue (→ Framing Effects). This is particularly relevant during election campaigns, when parties and candidates try to frame issues in their favor (→ Strategic Framing; Issue Management in Politics).
Candidates and parties have more to gain and more to lose in the new media environment. Politically interested citizens can have a potentially greater voice and impact on the day-to-day campaign agenda simply by consistently offering their opinions and developing a reputation online (→ E-Democracy; Twitter). A key characteristic of media convergence is an emphasis on visuals. As television and the Internet become more graphic and visual and less text-driven, there will be new forms of political learning (→ Political Knowledge; Visual Communication; Television, Visual Characteristics of).
Technology and the new communication platforms created in the past decade have taken campaigning to a new level in countries around the world. Soon after the first internet elections emerged, in which candidates and parties designed effective online campaigns using new social media, the shift to mobile occurred. Campaign advertising and news now operate in “hybrid media systems” (Chadwick 2013), in which traditional and new media, and popular social media, together orchestrate campaign agendas.
See also: Agenda-Setting Effects E-Democracy Election Surveys Framing Effects Framing of the News Issue Management in Politics Media Effects Media Effects, History of Opinion Leader Political Advertising Political Cognitions Political Communication Political Communication Systems Political Efficacy Political Knowledge Political Marketing Political Media Use Political Persuasion Selective Exposure Selective Perception and Selective Retention Strategic Communication Strategic Framing Televised Debates Television, Visual Characteristics of Twitter Two-Step Flow of Communication Visual Communication
Thomas Petersen
Allensbach Institute
Election research has played a decisive part in the development of survey research methods from the beginning. More than market and media research, election research has aroused the curiosity and ambition of researchers in a special way, and has thus strongly shaped empirical social research. It is telling that the breakthroughs of both the modern representative survey method and the empirical methods of communication research are connected with studies of electoral choice (→ Public Opinion Polling; Quantitative Methodology).
The first attempts at analyzing elections using statistical methods can be traced back to the early twentieth century. In 1905, the German researcher R. Blank published a detailed social science analysis of the social democratic party’s electorate. A decade later, attempts began to predict future electoral behavior on the basis of pre-election polls. In 1936, George Gallup, Elmo Roper, and Archibald Crossley first applied face-to-face surveys based on representative population samples to the prediction of election results in the US presidential election. They correctly predicted Franklin D. Roosevelt’s victory, whereas other political analysts had expected a victory for the Republican candidate Alf Landon.
Very soon after George Gallup’s success in 1936, election surveys became a fixture of the media’s election coverage, at first in the US and, after World War II, in Western Europe. Today, election surveys are conducted in almost all democratic states. From the start, the institutes conducting election surveys were confronted with the allegation that the publication of their results might affect the democratic process and thus interfere with voters’ free decisions. Studies, however, have clearly shown that the effect of published election survey results is in fact weak (Hardmeier 2008). For this reason, and because of fundamental judicial considerations, repeated attempts to ban the publication of election survey results before elections have failed in several countries (Donsbach 2001).
There are two basically different types of election surveys: exit polls and pre-election surveys. The former usually dominate media coverage on election day, while the latter are more important for in-depth analyses of the background of elections. In ‘exit polls’ a representative sample of voters is interviewed immediately upon leaving the polling station. Results are published before the first returns come in on election night and usually allow a very precise estimate of the election result. In countries where it is doubtful whether the vote count is conducted correctly, exit polls also serve as a check on the official electoral process, provided they can be conducted independently (Frankovic 2008). In contrast, ‘pre-election surveys’ use the same methods as surveys on other subjects and offer much greater analytical potential than exit polls.
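The precision of an exit-poll estimate can be illustrated with the standard sampling-error calculation for a proportion. The following sketch uses a simple random-sample approximation; the sample size and vote share are hypothetical and chosen only for illustration:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated vote share p
    from a simple random sample of n voters (binomial approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical exit poll: 52% for candidate A among 2,000 respondents
moe = margin_of_error(0.52, 2000)
print(f"52% +/- {moe * 100:.1f} percentage points")  # → 52% +/- 2.2 percentage points
```

Real exit polls use clustered samples of polling stations rather than simple random samples, so their actual error margins are somewhat larger than this idealized calculation suggests.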
Spectacular errors in election forecasts, such as in the US presidential election of 1948, have always produced, aside from intensive scientific research into the causes of the failure, a public discussion of the uses and quality of survey research as a political information medium. In part, these discussions rest on a misunderstanding of the possibilities and limits of survey research. There are several ways to measure the deviation of election forecasts from election outcomes. Most use either the difference, in percentage points, between survey and vote shares of individual parties or candidates, or the gap between the two leading candidates as the basis of their computation. The latter measures are especially suited to the US political system, in which elections are almost always dominated by two parties or candidates (Traugott 2005). More complex modern computation models, which try to combine the advantages of the traditional methods, also assume an electoral or party system at least similar to that of the US (e.g., Martin et al. 2005). In countries with a more complex party system, the average deviation of the forecast from the actual outcome across all parties, considered together with the largest deviation for any single party, usually provides a practical basis for analysis.
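The two traditional error measures described above can be sketched as follows. This is a minimal illustration; the poll and result figures are hypothetical, and real evaluations would also handle parties missing from one of the two lists:

```python
def mean_absolute_deviation(poll: dict, result: dict) -> float:
    """Average absolute difference, in percentage points, between
    predicted and actual vote shares across all parties."""
    return sum(abs(poll[p] - result[p]) for p in result) / len(result)

def leading_gap_error(poll: dict, result: dict) -> float:
    """Error in the predicted gap between the two leading parties,
    a measure suited to two-party systems such as the US."""
    top = sorted(result, key=result.get, reverse=True)[:2]
    predicted_gap = poll[top[0]] - poll[top[1]]
    actual_gap = result[top[0]] - result[top[1]]
    return abs(predicted_gap - actual_gap)

# Hypothetical multi-party example (vote shares in percent)
poll   = {"A": 38.0, "B": 33.0, "C": 15.0, "D": 14.0}
result = {"A": 36.0, "B": 35.0, "C": 16.0, "D": 13.0}
print(mean_absolute_deviation(poll, result))  # → 1.5
print(leading_gap_error(poll, result))        # → 4.0
```

Note how the two measures can diverge: here the forecast was off by only 1.5 points per party on average, yet it overstated the gap between the two leading parties by 4 points.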
See also: Public Opinion Polling Quantitative Methodology
Robert Hassan
University of Melbourne
Suneel Jethani
University of Melbourne
Electronic mail (‘email’) is a primarily text-based form of communication exchanged between computing devices that can incorporate hyperlinks or file attachments. Originally developed and used by the military, computer scientists, and other specialists, email has grown into one of the most common forms of human communication (→ Digital Media, History of). An estimated 150 billion emails are generated each day, and this figure continues to grow rapidly.
The popularity of email has been hailed as contributing to the creation of a “virtual community” (Rheingold 2000). It has also been argued that email is a creative medium in which self-expression has taken new forms, such as ‘emoticons,’ which express emotions in text by arranging printable characters into icons, such as :-) to symbolize happiness, and acronyms such as ‘LOL’ (laugh out loud; → Self-Presentation). In the workplace, email can link an organization with its customers and increase productivity by allowing workers in multiple locations to communicate, share documents, and work collaboratively without the need for face-to-face or telephone communication.
From a negative perspective, an estimated 88 percent of total daily email volume occurs in the form of junk email or ‘spam.’ Another related concern is identity theft through practices such as ‘phishing’, where individuals posing as representatives of institutions such as banks make fraudulent attempts to gain passwords and credit card details. Some argue that email also degrades the ‘art of conversation’. A perennial issue facing people today is ‘email overload’ (Bellotti et al. 2005). This trend has led to the development of a range of email management techniques such as ‘Inbox Zero’ (Mann 2007). Most commentators see the growth of email continuing as a central feature of networked communications (→ Network Organizations through Communication Technology). However, in some countries younger people use email less and SMS and IM more and view email as an ‘older’ and more ‘formal’ mode (Lee 2005).
Future developments in email are likely to bring more functional user interfaces, new ways of automatically sorting and prioritizing emails, better ways to handle attached files, more advanced methods for filtering and assessing the credibility of emails received, and new ways to integrate email with other forms of online social communication.
See also: Digital Media, History of Facebook Network Organizations through Communication Technology Self-Presentation
Sarah J. Tracy
Arizona State University
Emotions and feelings are important components of organizational communication. ‘Emotions’ refer to the external display of an affective state, the meaning of which is negotiated and constructed through organizational norms. ‘Feelings,’ in contrast, are considered subjective experiences that reside in individuals.
Burnout research has examined emotional exhaustion at work, depersonalization (a negative shift in attitudes toward others), and a decreased sense of personal accomplishment. Communication researchers have differentiated between healthy empathic concern and counterproductive emotional contagion that leads to burnout. Workplace bullying is a toxic combination of unrelenting emotional abuse, social ostracism, interactional terrorizing, and other destructive communication that erodes organizational health and damages employee well-being (Lutgen-Sandvik & Tracy 2012). Studies of ‘emotional labor’ examine employees who create emotion – in the form of smiling, caring, or disciplining others – as part of the organizational product.
In contrast to early views of emotion as either pathology or something to be managed, the concept of ‘bounded emotionality’ suggests an alternative approach toward organizing which highlights nurturance, caring, community, and supportiveness (Mumby & Putnam 1992). Compassion is an emerging emotion concept in → organizational communication. Conceptualized as a three-part process of recognizing, responding, and (re)acting to another’s pain (Way & Tracy 2012), research has found that caring interactions at work dramatically improve people’s workplace experience. The connection of positive moods and organizational effectiveness has spurred research on humor, laughter, and joking in the workplace. These final two areas of research reflect a trend toward positive organizational scholarship. Scholars in positive psychology argue that research has spent too much time studying emotion’s dark side (e.g., bullying, emotive dissonance, burnout) and that attention should be turned to positive emotions (well-being, happiness, compassion, love, and humor).
See also: Appraisal Theory Organizational Communication Organizational Communication: Critical Approaches Organizational Communication: Postmodern Approaches Organizational Conflict Organizational Culture Social Support in Interpersonal Communication
Dolf Zillmann
University of Alabama
Arousal is commonly construed as the experience of restlessness, excitation, and agitation. It manifests itself in heightened overt and covert bodily activities that create a readiness for action. Acute states of such arousal characterize all vital emotions, and the subjective experience of these acute states is part and parcel of all strong feelings. Emotional arousal is consequently seen as an essential component of such experiences as pleasure and displeasure, sadness and happiness, love and hate, despair and elation, gaiety and dejection, rage and exultation, exhilaration and grief, frustration and triumph, merriment and fear, anger and joy, and so on.
Based on Schachter’s (1964) two-factor theory of emotion, Zillmann (1996) proposed a three-factor theory of emotion that retains the distinction between energization by arousal and guidance by cognition. A dispositional factor integrates ontogenetically fixed and acquired dispositions in accounting for the autonomic mediation of excitatory reactivity and the guidance of immediate, deliberate, overt behaviors. An experiential factor entails the cognitive evaluation of prevailing circumstances, including the appraisal of bodily feedback (→ Sensation Seeking).
As both the evocation of emotions and the modification of moods are essential factors in the appeal and effects of media presentations, and as the intensity of both emotions and moods is largely determined by excitatory reactivity, it is imperative to consider arousal in the context of media influence. Intense excitement is sought via exposure to the communication media as much as through overt individual or social actions. The fact that the evocation of diverse emotions can be compacted in media presentations or in interactive media formats, such as games, actually provides optimal conditions for the creation of arousal escalations and, ultimately, for intense experiences of joyous excitement (Zillmann 2006; → Excitation and Arousal; Media Effects).
Arousal influences permeate numerous other effects of media exposure too. It has been shown, for instance, that exposure to highly arousing pleasant erotica can facilitate social aggression more than can somewhat less arousing exposure to violence (→ Violence as Media Content, Effects of; Media Effects; Fear Induction through Media Content).
See also: Appraisal Theory Excitation and Arousal Fear Induction through Media Content Information Processing Media Effects Mood Management Sensation Seeking Violence as Media Content, Effects of
Toby Miller
University of Cardiff/Murdoch
Encoding and decoding have been key concepts in communication for over fifty years, in keeping with the idea that language is a → code, and that how a message is received is as significant as how it is conceived. Their most prominent place, however, is in media and → cultural studies, where they have been used to integrate the analysis of texts, producers, technologies, and audiences by treating them as coeval participants in the making of → meaning.
Encoding-decoding within media and cultural studies derives from the rejection of psychological models of → media effects. In the 1960s, the ethnomethodologist Harold Garfinkel coined the notion of a “cultural dope,” a mythic figure who supposedly “produces the stable features of the society by acting in compliance with pre-established and legitimate alternatives of action that the common culture provides” (Garfinkel 1992, 68). In the mid-1960s, Umberto Eco developed the notion of encoding-decoding, open texts, and aberrant readings by audiences (Eco 1972). Eco looked at the ways that meanings were put into Italian TV programs by producers and deciphered by viewers, and the differences between these practices. His insights were picked up by the political sociologist Frank Parkin (1971), then by cultural studies theorist Stuart Hall (1980).
There have been two principal methodological iterations of the encoding-decoding approach: → uses and gratifications (U&G) and ethnography/cultural studies. U&G operates from a psychological model of needs and pleasures; cultural studies from a political one. U&G focuses on what are regarded as fundamental psychological drives that define how people use the media to gratify themselves. Conversely, cultural studies’ ethnographic work has shown some of the limitations of claims that viewers are stitched into certain perspectives by the interplay of narrative, dialogue, and image. Together, they have called into question the notion that audiences are blank slates ready to be written on by media messages.
See also: Audience Research Code Cultural Studies Ethnography of Communication Meaning Media Effects Text and Intertextuality Uses and Gratifications
Gabriel Weimann
University of Haifa
A common focus of communication research has been the public’s perceptions of reality as based on mass-mediated contents and images (→ Media and Perceptions of Reality). Social reality perceptions are best defined as individuals’ conceptions of the world. They include perceptions of others’ opinions and behavior, social indicators such as crime, wealth, careers, professions, sex roles, and more (→ Reality and Media Reality).
An important element of the modern mass-mediated world is the integration of news and entertainment, facts and fiction, events and stories into a symbolic environment in which reality and fiction are almost inseparable. Thus the news becomes storytelling, while soap operas become news. They present realities from other cultures and other social strata and, despite their fictional nature, are seen and interpreted as realities. The so-called → ‘infotainment’ narrative of the modern media affects us all. How can one distinguish between fictional representation and factual ‘real-world’ information when both are so thoroughly integrated into our mediated environments?
Living in a mass-mediated world is the result of several processes: our reliance on media sources to know and interpret the ‘world out there,’ the distorting effect of the selection process in the media and the practice of writing news as ‘storytelling,’ and the mixture of information and fiction where real and fictional worlds become a homogeneous, synthetic reality.
The most important work on the impact of mass-mediated realities on audiences’ perceptions has been done within the tradition of George Gerbner’s cultivation theory. Essentially, the theory states that heavy exposure to mass media, particularly television, creates and cultivates perceptions of reality more consistent with a media-conjured version of the world than with actual reality. It began with the “Cultural Indicators” research project in the mid-1960s, which aimed to study whether and how watching television may influence viewers’ ideas of what the everyday world is like. Cultivation theorists argue that television has long-term effects which are small, gradual, and indirect, but cumulative and significant (→ Cultivation Effects).
One of the major constructs of cultivation theory is ‘mainstreaming,’ the homogenization of people’s divergent perceptions of social reality into a convergent mainstream. This apparently happens through a process of construction, whereby viewers learn ‘facts’ about the real world from observing the world of television.
Several researchers have attempted to refine the notion of cultivation by examining closely the cognitive processes involved. A key distinction suggested by these studies is between two stages of the cognitive process: ‘first-order’ effects (general beliefs about the everyday world, such as about the prevalence of violence) and ‘second-order’ effects (the resulting specific attitudes, such as fear of strangers or of walking alone). Other studies have revealed the psychological processes involved in the cultivation of reality perceptions. Some argue that ‘source confusion’ (the tendency of individuals to confuse events from news stories with those from fictional content) promotes stronger cultivation effects, while others explain the cognitive mechanism of cultivation in terms of accessing information in memory.
The important assets of computer-mediated communication (CMC) are its ‘vividness’ and speed: easy access to anywhere in cyberspace with no time or distance limits. What happens when virtual reality becomes more appealing than ‘real’ reality? Will large numbers of us abandon socially relevant pursuits for virtual travel in the media world? As computers become capable of rendering increasingly complex and realistic images, the illusion of reality will become even more convincing. When virtual realities in computer-mediated entertainment are the only source of information people can use to experience places, situations, and actions, one can expect a powerful impact. Williams (2006) found that participants in an online game changed their perceptions of real-world dangers; however, these dangers corresponded only to events and situations found in the game world and not to real-world crimes. Computer-mediated communication is giving new meaning to the idea of ‘mediated realities’ and should be related to cultivation theory. Although the cultivation paradigm highlighted the role of television, the basic argument seems even more valid in the case of CMC.
See also: Construction of Reality through the News Cultivation Effects Infotainment Media and Perceptions of Reality Media System Dependency Theory Reality and Media Reality
Miles L. Patterson
University of Missouri–St Louis
Every face-to-face → interaction occurs in a specific location. For example, where people live affects social behavior. Urban dwellers typically initiate less eye contact with strangers and help them less than suburbanites or small-town residents do. This decreased sensitivity to others may be the product of “social overload,” leading automatically to filtering out less important events (Milgram 1970).
Advances in the technological environment over the last 75 years have changed patterns of interaction. Prior to the common availability of television and air conditioning, people spent more time outside, interacting with their neighbors. Although technological advances such as the Internet and mobile phones permit convenient remote communication, increased dependence on these technologies adversely affects the frequency and quality of face-to-face communication (Bugeja 2005; → Interpersonal Communication). People select settings, but settings also select the people who use them. For example, a church service and a school board meeting each attract people with relatively similar interests and expectations. The combination of these influences in any setting promotes relatively homogeneous behavior across people. The physical characteristics of settings also affect interactions. In the business world, higher-status individuals have larger, better-furnished offices; these features reinforce the office holders’ power in interactions with subordinates. In home settings, the furniture in most living rooms is arranged to accommodate easy viewing of a television, not to facilitate comfortable facing positions for conversation.
In conclusion, the physical environment not only constrains our behavioral options, but also primes specific actions and social judgments about others, often automatically and outside of awareness (Loersch & Payne 2011).
See also: Interaction Interpersonal Communication
Robert J. Griffin
Marquette University
Sharon Dunwoody
University of Wisconsin–Madison
‘Environmental communication’ refers to communication about the natural environment and ecosystem, commonly focusing on the relationships that human beings and their institutions have with the nonhuman natural environment. Much of this communication, historically, has been generated by concern about various environmental problems and issues (e.g., global warming, energy, smog, extinction of species, land uses, population growth, water quality).
Environmental communication can take many forms and can occur through a diverse set of communication channels. Thus, communication scholars of various stripes might readily find environmental communication applicable to their interests in various media. Most content analyses on environmental issues examine print – not broadcast or Internet – outlets and most offer descriptions of patterns of coverage across media or over time (→ Content Analysis, Quantitative). Although there exists no meta-analysis of these studies, we isolate a few of the more common patterns below. Coverage of a single environmental issue will be erratic, not sustained, over time. Journalism tackles individual topics in relatively brief, discrete bits and does so only when events or processes coincide with news values such as timeliness or magnitude (→ News Routines; News Values).
Journalistic norms may be prominent drivers of coverage strategies (→ Journalists’ Role Perception; Ethics in Journalism). For example, since journalists cannot be arbiters of scientific truth when that truth is contested (a common situation in science), they instead aim to include in stories a variety of truth claims, often ‘balancing’ these viewpoints in an effort to convey to audiences a sense of the range of views (→ Objectivity in Reporting). Stories, by definition, are dominated by interpretive frameworks, and scholars argue that these frameworks can be important predictors of the ‘take-home message’ that a reader or viewer will derive from a journalistic piece.
Studies of frames employed in stories about risks to the environment suggest that they can stem from a complex welter of factors, including a journalist’s a priori knowledge of an issue, a willingness to buy into the first frame that is offered as an issue comes to light, and even the social structure of the community in which the media organization operates (Campbell 2014; Lakoff 2010).
Kahlor et al. (2006) investigated some of the factors that could increase the likelihood that people would seek and process information about impersonal risks, that is, risks not to oneself but to others or to the environment. Specifically, they used elements of the risk information seeking and processing (RISP) model (see Griffin et al. 2013) to examine how residents of two cities dealt with information about risks to an ecological issue. The RISP model proposes that more active seeking and processing of risk information is facilitated, directly or indirectly, by combinations of some key variables, including (1) information insufficiency; (2) a person’s capacity to seek and process the risk information; (3) a person’s beliefs about communication channels that carry the information; (4) informational subjective norms (felt social pressures to be informed about the risk); and (5) affective responses to the risk (→ Affective Disposition Theories).
Today, much environmental communication can be discovered on the Internet, such as bloggers’ opinions on global warming, local environmental groups’ interactive websites mapping hazardous chemicals stored in the community, etc. Among the challenges to environmental communicators, and among the key topics of interest to those who study environmental communication, are the communication of risk and uncertainty to lay audiences, the interpretation of the attendant technical and scientific information for non-experts, differences in orientation to the environment based on various cultural, structural, and social factors, and issues related to public concern about the ‘impersonal’ environment.
See also: Affective Disposition Theories Attitudes Content Analysis, Quantitative Ethics in Journalism Framing Effects Framing of the News Information Processing Information Seeking Journalists’ Role Perception News Routines News Values Objectivity in Reporting Planned Behavior, Theory of Reasoned Action, Theory of Risk Communication Risk Perceptions Social Conflict and Communication Uncertainty and Communication
Christoph Klimmt
Hanover University of Music, Drama, and Media
Escapism was introduced as an explanation for people’s use of entertainment media in the 1950s. The original understanding of escapism was rooted in the assumption that many working-class people in western mass societies were ‘alienated’ and suffered from poor life satisfaction (→ Media Use by Social Variable). Alienation was assumed to breed the desire to evade everyday sorrows and troubles by involving oneself in fantasy worlds that offer relief and distraction.
As the typical content of 1950s and 1960s entertainment programming indicated a sharp contrast to the stipulated social reality of ‘the masses’ (e.g., radio soap operas), involvement with such programming was theorized to serve the function of making people forget temporarily about their troublesome life circumstances by ‘diving’ into mediated worlds of (more) happiness and luck (→ Involvement with Media Content). In addition to the motivational dimension of escapism, the notion was also discussed in terms of the effects of escapist media use on people’s life and performance in social roles (→ Media Effects; Entertainment Content and Reality Perception).
While the notion of escapism has not been addressed by much research since the public debate in the 1950s and 1960s, its motivational component (i.e., a diversion and relief motivation as driver of media use) has been picked up in various lines of research. For instance, Henning and Vorderer (2001) elaborated a specific escape motivation, namely the desire to avoid thinking about oneself.
In terms of communication theory, elements of escapism are reflected in most approaches to media selection, for instance → mood management theory and contemporary accounts of → uses and gratifications. The assumption of a dysfunctional impact of escapism on people’s life performance has been counterbalanced by scholars who consider the use of media entertainment and the accompanying ‘escape’ from real-life stressors a benign contribution to well-being (i.e., a ‘vacation’ from, rather than a ‘flight’ from, real-life circumstances).
See also: Entertainment Content and Reality Perception Exposure to Print Media Exposure to the Internet Exposure to Radio Exposure to Television Involvement with Media Content Media Effects Media Use by Social Variable Mood Management Selective Exposure Uses and Gratifications
Clifford G. Christians
University of Illinois Urbana–Champaign
Journalism ethics is a branch of applied philosophy. Beginning with moral issues in medicine, the field expanded from the mid-twentieth century to include such professions as law, business, journalism, and engineering. Applied ethics has developed over the decades from merely describing actual moral behavior to establishing principles that guide decision-making. Journalism ethics retains an interest in the concrete, everyday challenges of professional practice, but considers it crucial to integrate those principles as well.
In its ideal forms, news serves the public interest, that is, the interests not of readers and viewers but of citizens (→ Quality of the News). From this perspective, social responsibility theory has become the most common form of journalism ethics in democratic societies around the world (→ Journalists’ Role Perception). Though its core ideas take on different nuances in countries across the globe, for social responsibility ethics the major issue facing journalism today is the principle of truth.
In the US, the Commission on Freedom of the Press published its report A Free and Responsible Press in 1947. Named for the commission chairman, Robert Hutchins of the University of Chicago, the report insisted that the news media have an obligation to society, instead of promoting the interests of government or pursuing private prerogatives to publish and make a profit. In 1980, the MacBride report, Many Voices, One World, put social responsibility in explicitly international terms (→ UNESCO; International Communication; Development Communication; Development Journalism). Most of Europe takes social responsibility for granted as the dominant policy in journalism practices and media structures, including public service broadcasting (→ Public Broadcasting Systems). Since the 1990s, civic or community journalism has been restyling the press toward greater citizen involvement and a healthier public life (→ Participatory Communication; Citizen Journalism). In Latin America, for example, more public journalism projects have been carried out than on any other continent.
Ethics is not a question of personal choices but a matter of social and cultural duties. Humans have a moral obligation to one another; therefore journalism ought to appeal to listeners and readers about human values and conceptions of the good. The press’s obligation to truth is standard in journalism ethics. Truth-telling is the generally accepted norm of the media professions, and credible language is pivotal to the very existence of journalism. But living up to this ideal has been virtually impossible. Budget constraints, deadlines, and self-serving sources complicate the production of truth in news writing. Sophisticated technology accommodates almost unlimited news copy and requires choices without the opportunity to sift through the intricacies of truth-telling (→ Truth and Media Content).
There are different notions of ‘truth’. The mainstream press has defined itself overall as objectivist, so that the facts seem to mirror reality and genuine knowledge is scientific. Here, news corresponds to accurate representation and precise data, and professionalism stands for impartiality. Journalistic morality is equivalent to unbiased reporting of neutral data (→ Bias in the News; Neutrality; Objectivity in Reporting; Reality and Media Reality). Another concept of truth is disclosure, getting to the heart of the matter. Reporters seek what might be called interpretive sufficiency. The best journalists understand from the inside the attitudes, culture, language, and definitions of the persons and events that enter news reporting. In addition, ethical diversity offers a challenge to journalism ethics. Only specific social situations that nurture human identity can determine what is worth preserving. In the era of cultural diversity, when the truth principle is honored in journalism, particular cultures, ethnicities, and religions will flourish (→ Ethnic Journalism; Minority Journalism).
Social responsibility is explicitly cross-cultural in character. The canon of journalism ethics has been largely western, gender-biased, and monocultural. To succeed under current conditions, professional ethics must instead be international, gender-inclusive, and multicultural. The global reach of communication systems and institutions requires an ethics commensurate in scope. Thus the current efforts toward a diversified ethics of social responsibility journalism build on a level playing field that respects all cultures equally. Because every culture has something important to say to all human beings, a journalism ethics in the interactive, transnational mode is the greatest challenge today worldwide.
See also: Bias in the News Citizen Journalism Development Communication Development Journalism Ethnic Journalism International Communication Journalists’ Role Perception Minority Journalism Neutrality Objectivity in Reporting Participatory Communication Public Broadcasting Systems Quality of the News Reality and Media Reality Truth and Media Content UNESCO
Anahí Lazarte-Morales
Our Lady of Grace School
Ethnic journalism is the practice of journalism by, for, and about ethnic groups. Ethnic journalism involves ethnically differentiated groups living within a dominant culture. These groups are often disenfranchised and have limited access to media production. Ethnic journalism ideally constructs representations in harmony with how the group sees itself, as a strategy for political advocacy and cultural preservation (→ Advocacy Journalism). Ethnic media and journalism emerge during times of political and economic stress, to denounce discrimination and energize mobilization. In some cases, they link to movements for political autonomy. Catalan and Basque news media in Spain, for instance, are part of the historical struggles for political independence.
Audiences use ethnic news media to help them navigate and engage mainstream society, find resources and services, understand the larger political events that affect them, and maintain a connection to their ethnic group and identity. Noncommercial ethnic media struggle to operate with limited resources and face competition from mainstream media companies targeting ethnic minorities. A profit-driven logic may suppress diversity within the ethnic group in order to reach the largest portion of the audience (→ Media Economics).
Research on ethnic journalism emerged with the growth of the foreign-language press during times of increased immigration in the first half of the twentieth century. In a context of cultural globalization, studying ethnic journalism is a way to explore cultural identity as a resource for political mobilization (→ Globalization Theories). Ethnic news media production is also relevant to research about groups historically at the political and economic margins. The institutional histories of ethnic media, which often apply political pressure to make communication policies and media production more inclusive, receive considerable attention. Researchers are shifting their focus to the levels of production, content, and consumption.
See also: Advocacy Journalism Globalization Theories Media Economics Minority Journalism Social Stereotyping and Communication
Osei Appiah
Ohio State University
Ethnic media are media vehicles (e.g., specific programs, publications, promotional pieces) that carry culturally relevant messages designed for and targeted to a particular ethnic group. Studies have demonstrated the rapid growth and success of ethnic media in North America and throughout the world. They have found that culturally relevant media reach the greater portion of ethnic minorities such as blacks, Hispanics, and Asian-Americans in the US, and that a majority of these groups prefer ethnic media to mainstream media, particularly for news information.
Concerning the underlying theoretical foundations, the persuasion literature suggests that audiences are more likely to be influenced by a message if they perceive it as coming from a source similar, rather than dissimilar, to themselves. Identification theory states that individuals automatically assess their level of similarity with a source, which drives them to choose sources based on perceived similarities between themselves and the source. This notion is supported by distinctiveness theory, which states that a person’s own distinctive traits (e.g., black, red-headed) will be more salient to him or her than more prevalent traits (e.g., white, brunette) possessed by other people in his or her environment.
In general, the use of ethnic media results in a stronger perceived similarity with the group and a greater identification with characters in ethnic media (Appiah 2004).
There is also a growing body of work that has investigated how white audiences respond to ethnic-specific media and characters. Studies clearly show that these media and characters have succeeded in attracting and persuading mainstream audiences. White respondents’ more favorable responses to culturally relevant media and characters have been explained, in part, by ‘cultural voyeurism,’ conceptualized as the process by which a viewer seeks knowledge about, and gratification from, ethnic minority characters by viewing them through a specific medium. This notion implies that white audiences may seek, observe, and emulate ethnic minority characters in ads, in music, and on television to gain general information about their dress, music, and vernacular, primarily because these characters are perceived to possess certain socially desirable traits.
See also: Advertising Audience Segmentation Ethnicity and Exposure to Communication
Holley A. Wilkin
Georgia State University
Ethnicity is socially constructed. The aspects deemed important in defining ethnic groups (e.g., religion, race) vary between countries and research studies. Ethnicity is often a co-variate in message exposure and/or effects studies (→ Exposure to Communication Content; Audience Research; Media Effects). It is often implicated in → Knowledge Gap Effects and → Digital Divide research.
Early media use research concentrated on → Exposure to Television. In the US, the consensus was that blacks and Latinos watch more television than whites. Ethnic minorities in the UK – e.g., Indian, Pakistani, Bangladeshi, Black Caribbean, Black African, and Chinese – often watch less television (OfCom 2007). Recent research has compared new immigrant groups (e.g., Chinese vs Korean Americans) and ‘geo-ethnic’ groups (interaction of ethnicity and geographical space). Acculturation plays a role in whether immigrants of various ethnic backgrounds prefer media in their native language to that of their host country.
Several researchers have stressed the value of examining media in context (→ Media Ecology). People construct different communication ecologies – the web of interpersonal and media connections (new and old, mainstream and/or ethnic, local and beyond) – in order to achieve their everyday goals. There is no question that ethnic differences exist in exposure to communication, and these differences can have implications for disparities between groups. With increasing diversity of communication resource options, ethnic and intra-ethnic exposure studies need to take place within an ecological framework.
See also: Audience Research Digital Divide Exposure to Communication Content Exposure to Television Knowledge Gap Effects Media Ecology Media Effects
Donal Carbaugh
University of Massachusetts Amherst
What are the means of communication used by people when they conduct their everyday lives; and what → meanings does this communication have for them? These are central questions guiding the ethnography of communication (EC), which is an approach to the study of culturally distinctive means and meanings of communication. EC has been used to produce hundreds of research reports about locally patterned practices of communication, and has focused attention primarily on the situated uses of language (→ Language and Social Interaction). It has also explored various other means and media of communication including traditional media, the Internet and → social media, oral and printed literature, writing systems, sign languages, various gestural dynamics, silence, or visual signs (→ Sign).
Research topics of ethnography of communication include (1) the linguistic resources people use in context, not just grammar in the traditional sense, but the socially situated uses and meanings of words and their relations, including sequential forms of expression; (2) the various media used when communicating, and their comparative analysis, such as online ‘messaging’ and how it compares to face-to-face messaging; (3) the way verbal and nonverbal signs create and reveal social codes of identity, relationships, emotions, place, and communication itself (→ Nonverbal Communication and Culture). Reports about these and other dynamics focus on particular ways a medium of communication is used (e.g., how Saudis use online communication, or how the Amish use computers), on particular ways of speaking (e.g., arranged by national, ethnic, and/or gendered styles), on the analysis of particular communication events (e.g., political elections, oratory, deliberations), on specific acts of communication (e.g., directives, apologizing, campaigning), and on the role of communication in specific institutions of social life (e.g., medicine, politics, law, education, religion).
In addition, the ethnography of communication is a theoretical as well as a methodological approach to communication. As a theoretical perspective, it offers a system of concepts that can be used to conceptualize the basic phenomena of study, and a set of components for detailed analyses of those phenomena. The phenomena of study are communication event, communication act, communication situation, and speech community. The components of each are the setting or scene, the participants, act sequence, key, instruments, norms for interaction and interpretation, and genre. As a methodology, it offers procedures for analyzing communication practices as formative of social life. The methodology typically involves various procedures for empirical analysis, including participant observation in the contexts of everyday, social life, as well as interviewing participants about communication in those contexts (→ Qualitative Methodology).
Collections of research reports were published in the 1970s that helped move such study from the periphery of some disciplinary concerns in linguistics, anthropology, sociology, and rhetoric (→ Rhetorical Studies) to more central concerns in the study of communication and culture (→ Culture: Definition and Concepts). These studies explored aspects of communication that were often overlooked, such as gender role enactment, the social processes of litigation, marginalized styles, social uses of verbal play, and culturally distinctive styles of speaking. By the late 1980s and 1990s, a bibliography of over 250 research papers in the ethnography of communication had been published. Recent ethnographies of communication have examined mass-media texts in various societies, political processes at the grassroots and at national levels, interpersonal communication in many cultural settings, → organizational communication in various contexts from medicine to education, intercultural communication around the globe, processes of power, advantaged and disadvantaged practices, and so on (→ Health Communication; Educational Communication; Intercultural and Intergroup Communication). Ethnographers of communication thus demonstrate how communication is formative of social and cultural lives, comparatively analyzing both the cultural features and the cross-cultural properties of communication.
See also: Culture: Definition and Concepts Discourse Comprehension Educational Communication Health Communication Intercultural and Intergroup Communication Intergroup Contact and Communication Language and Social Interaction Meaning Nonverbal Communication and Culture Organizational Communication Qualitative Methodology Rhetorical Studies Sign Social Media
Amit M. Schejter
Ben-Gurion University of the Negev and Pennsylvania State University
As of 2014 the European Union (EU) consists of 28 countries (http://europa.eu/about-eu/countries/index_en.htm). It resembles a conventional federal state (Tsebelis & Garrett 2001), although it allows each of its member states to maintain its national sovereignty. The Council of Ministers, which directly represents the member states, and the European Parliament, which is elected directly by the citizens of these states, both resemble a traditional bi-cameral legislature. The Commission of the European Communities, the EU’s administrative branch, is in charge of drafting bills and enforcing legislation. The European Court of Justice (ECJ) functions as the judicial branch of the EU. Among the binding legal instruments of the Union are: (1) regulations, which apply to all EU citizens; (2) directives, which apply to the member states and aim to harmonize the goals of national laws across the Union, while leaving individual member states with the means of achieving these goals at the national level; and (3) decisions, which apply to specific situations.
The basic assumptions underlying each European nation’s communications law and policy were historically quite similar: broadcasting was considered too important to be left to the whims of the free market (Levy 1999), and national public service broadcasters were created (→ Public Broadcasting Systems). Because telecommunications was likewise considered a natural monopoly and a public utility, states maintained control over it through state-owned post, telegraph, and telephone monopolies, governed by public service principles (Sandholz 1998). The first and most significant initiative of the EU in its audiovisual policy was the 1989 “Television without Frontiers” directive (TVWF; Hirsch & Petersen 1998), which was substantially revised in 1997 and in 2007 in response to the changing political and technological realities of Europe. The TVWF aimed not only to harmonize legislation across member states but, more importantly, to unify the rules for television broadcasts across national borders, as “without frontiers” was seen as a basic element of European unity (Wheeler 2004; → Television Broadcasting, Regulation of).
In December 2008, the Audiovisual Media Services Directive (AVMSD) came into force, replacing the TVWF. The new directive additionally distinguishes “linear audiovisual media services” (analog and digital television, live streaming, webcasting, and near-video-on-demand) from “nonlinear audiovisual media services” (on-demand services). Because “nonlinear services” are distinct from “linear services” in the user’s choice and control, and in their impact on society, the directive imposes lighter restrictions on them. The new directive eases some of the restrictions on advertising while requiring stricter regulation of food advertising in children’s programming and of ‘product placement’ (Schejter 2006).
A major effort of European Union policy has been to ensure public service broadcasting’s independence and to secure an appropriate funding framework that enables it to fulfill its mission. The legal dispute between commercial and public broadcasters centered on the articles of the treaty that called for fair competition in the Common Market and the meaning of “state aid,” under Article 87 of the Treaty, which prohibits it if competition is undermined or is likely to be so. The commercial television companies argued that the license fees collected by the states to support PSBs constituted such aid. The Commission and the ECJ adopted a balanced approach to this issue, limiting the allocation of “state aid” to television programming that fulfilled the public service remit of the PSBs and served the “democratic, social, and cultural needs of each society.”
In the area of telecommunications, the new regulatory framework of 2003 allows rejection of the legacy regulation that created different legal arrangements for different technologies, in favor of a framework that first identifies what services are provided by the technologies and then harmonizes the rules regarding those services, regardless of the technologies involved (‘technological neutrality’; → Technology and Communication). The EU’s preference for competition law limits specific communications law provisions (ex ante regulations) to only those product markets that are deemed uncompetitive. In November 2009, the Union agreed on a package of reforms in the telecommunications sector, and a new European telecommunications regulator, the Body of European Regulators of Electronic Communications (BEREC), was set up. All legal documents are available at: http://europa.eu.int/eur-lex.
See also: Communication Law and Policy: Europe Public Broadcasting Systems Television Broadcasting, Regulation of Technology and Communication
Gary Bente
University of Cologne
Diana Rieger
University of Cologne
A thrilling movie is a good example of the type of media stimulus that comes to mind when we use the terms excitation and arousal in everyday language. In scientific terms, arousal can be defined as a state of alertness and physiological activation elicited by external or internal stimuli that demand an adaptive response of the organism. Although vigorous actions like ‘fight’ or ‘flight’ might be dysfunctional or inappropriate in the daily life of civilized humans, evolution has preserved basic physiological alarm mechanisms, leaving the organism with a new type of adaptive task: to cope with arousal and excitation without launching behavioral programs.
A broadly accepted multidimensional definition of arousal was introduced by Lacey (1967), differentiating between cortical, autonomic, and behavioral arousal. Cortical arousal is associated with the ascending reticular activating system (ARAS) located in the brainstem. It receives input from the sensory receptors and projects nonspecifically into the cerebral cortex, producing a general cortical activation. The ARAS is responsible for tonic activation (i.e., being awake, drowsy, or sleepy) as well as for phasic activation (momentary alertness). Autonomic arousal is associated with the activity of the autonomic nervous system (ANS). The ANS consists of two antagonistic parts: the sympathetic and the parasympathetic. Arousal is associated with activation of the sympathetic subsystem, while the parasympathetic part mainly serves inhibitory functions. Behavioral arousal describes the activation of the motor system, which can be observed as agitation or measured as muscular innervation using electromyography (EMG).
Schachter and Singer (1962) formulated the influential two-factor theory of emotion, in which arousal represents the unspecific intensity component of emotions, while the specific hedonic quality of an emotion depends on the cognitive appraisal of the situation. Based on this assumption, Zillmann (1983) formulated the excitation transfer theory. It holds that, due to slower physiological processes, the unspecific arousal component of emotions has a longer decay time than the cognitive appraisal of the situation. Thus, arousal stemming from a thrilling scene can persist and intensify the joy experienced during the happy ending. The model has also been applied to the effects of media violence, suggesting that it is not necessarily the content but the transfer of excitation that contributes to aggressive behavior after media exposure (→ Violence as Media Content, Effects of).
Both hedonic quality and arousal can be moderated by cognitive processes. Lazarus and Alfert (1964) showed that intellectualizing commentaries accompanying or preceding highly unpleasant images (a film of genital surgery) could significantly reduce autonomic arousal (sympathetic activation) during stimulus exposure. The prominent role of cognitive processes in the genesis and perception of emotions is emphasized in so-called appraisal theories of emotion (→ Affects and Media Exposure; Emotional Arousal Theory).
In many cases communication content aims at providing information and thus primarily addresses the cognitive system of the recipient. Arousal can be an important determinant of → information processing, including attention, comprehension, learning, and → memory. There are two mechanisms thought to determine how attention is allocated to media stimuli. One is the orienting response (OR), which occurs whenever the organism is confronted with a new, unexpected, or salient stimulus and which is accompanied by the allocation of cognitive resources.
A second mechanism is described in Lang’s (2009) limited capacity model of motivated mediated message processing, which assumes two basic motivational systems responsible for resource allocation: the appetitive system is activated by positive media messages in order to approach those contents and facilitate information intake, whereas the aversive system responds to negative media stimuli and prepares the organism to defend against potential harm and threat. In neutral environments with rather low levels of arousal, the appetitive system is more active, a pattern referred to as the positivity offset.
From an evolutionary perspective, the positivity offset enables the individual to explore the environment and be creative. Whenever the environment provides negative cues, the aversive system is activated more quickly, a pattern called the negativity bias. This bias is considered to enable vigilance and to prevent harm and loss for the individual.
An important motivation for media use is mood regulation and in particular recovery from stress and work strain. Mood Management Theory posits that mood repair can be achieved through the media’s excitatory potential; i.e., their impact on arousal. Assuming a homeostatic principle, the theory predicts that bored individuals choose arousing media content whereas stressed individuals prefer calm, relaxing content (→ Mood Management).
See also: Affects and Media Exposure Educational Media Content Emotional Arousal Theory Exposure to Communication Content Information Processing Learning and Communication Memory Mood Management Physiological Measurement Sensation Seeking Violence as Media Content, Effects of
Gregor Daschmann
Johannes Gutenberg University of Mainz
The term ‘exemplification effect’ describes the influence of illustrating case descriptions in media presentations on the recipients’ perceptions of issues. General claims (e.g., ‘growing poverty in society’) often are illustrated by presenting single-case information describing individual experiences or testimonials (e.g., testimonials of the homeless). The single cases serve as examples that illustrate (i.e., ‘exemplify’) the general claim. In the news media, the use of examples is on the increase because journalists have to make their contributions vivid and comprehensible. A biased selection of examples in the media is a particular problem (→ Bias in the News). Indeed, as a rule, the recipients’ conceptions are strongly influenced by the number and type of the exemplars, whereas the general information often is ignored.
Most studies on the impact of exemplification have investigated its effects on → social perception and → climate of opinion. The more dramatic, extreme, and emotional the displayed cases are, the stronger the effects. Presentation features such as personalization, vividness, or direct speech increase these effects as well. Exemplification effects have been reproduced in different kinds of media and for different types of issues. Recent research shows that statements in → social media may also trigger this effect (Peter et al. 2014). There are no systematic relationships with age, gender, and education, or with several psychological traits or states, e.g., empathy, involvement, or knowledge.
Findings in social psychology can help to explain these effects. It is assumed that the exemplification effect is rooted in a basic cognitive mechanism of inductive learning from episodic case information (‘episodic affinity’; Daschmann 2001). The mechanism increases the ability to draw general conclusions based on everyday experiences. This ‘heuristic’ may be seen as a product of evolutionary development that is reasonably correct when general conclusions are drawn from typical cases (→ Elaboration Likelihood Model). However, if it is applied to untypical cases, as they occur in media coverage, misperceptions are the rule.
See also: Bias in the News Climate of Opinion Elaboration Likelihood Model Framing Effects Information Processing Social Media Social Perception
Laura K. Guerrero
Arizona State University
According to Expectancy Violations Theory (EVT; Burgoon & Hale 1988), people have expectations about how others will act in a given situation, based on social and cultural norms as well as personal experiences. When receivers perceive that a sender has violated these expectations, an expectancy violation occurs. Behavior that confirms people’s expectations generally goes unnoticed, whereas unexpected behavior captures people’s attention and heightens arousal. Sometimes the arousal change is aversive, leading to a fight-or-flight response. At other times arousal takes the form of an orientation response that leads people to scan the environment for information to help them interpret and evaluate the unexpected behavior (→ Interpersonal Communication).
Evaluations are largely made based on valence and reward value. Valence refers to how positive or negative an expectancy violation is compared to the expected behavior. Negative expectancy violations occur when the unexpected behavior is worse than the expected behavior, whereas positive expectancy violations occur when the unexpected behavior is better than the expected behavior. Some expectancy violations are clearly positive or negative. For example, receiving extra affection from a loved one is almost always valenced positively; being ignored by a loved one is almost always valenced negatively.
Reward value refers to the level of regard a person has for someone based on characteristics such as physical attractiveness, social attractiveness, and status. When the meaning of an expectancy violation is ambiguous, reward value helps determine whether the behavior is valenced positively or negatively. For instance, receiving an unexpected hug from an acquaintance could be valenced positively or negatively depending on the degree to which the acquaintance is attractive, popular, or has high status (→ Interpersonal Attraction).
Valence and reward value work together to predict responses to expectancy violations, including reciprocity and compensation. Reciprocity occurs when a person responds to an unexpected behavior with a similar behavior (e.g., a hug is met with a smile). Compensation occurs when a person responds to an unexpected behavior with a dissimilar behavior (e.g., a person pulls back when hugged). Intimacy levels are also connected to reciprocity and compensation. People engage in reciprocity when they welcome the change in intimacy that an unexpected behavior represents. When the unexpected behavior is perceived as representing more or less intimacy than wanted, compensation usually follows.
See also: Interpersonal Attraction Interpersonal Communication Uncertainty Reduction Theory
James B. Weaver, III
Centers for Disease Control and Prevention, Atlanta
Research utilizing experimentation is increasingly being conducted in venues outside the research laboratory (→ Experiment, Laboratory). Such projects, when they involve the manipulation of an independent variable in realistic circumstances, are called ‘field experiments.’
Conceptually, the differences between the laboratory experiment and the field experiment are slight; ideally, both are structured on one of the true experimental designs and consequently incorporate randomization and manipulable experimental treatments or interventions (i.e., independent variables) as fundamental components (→ Sampling, Random). However, field experiments, because they are undertaken in circumstances not radically different from everyday life, can afford the researcher greater external → validity. Following the initial experimental treatment, for example, research participants in field experiments typically continue functioning in their everyday social settings with little investigator interaction until outcomes assessment (i.e., dependent measures). This can significantly reduce the reactive or interactive influence on subsequent outcomes resulting from participants’ awareness of the research procedure and enhance external validity.
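The randomization requirement shared by laboratory and field designs can be illustrated with a minimal Python sketch; the function name and data format are purely illustrative, not part of any standard research toolkit:

```python
import random

def random_assignment(participants, conditions=("treatment", "control"), seed=None):
    """Assign participants to experimental conditions in (nearly) equal numbers.

    Shuffling before a round-robin split gives every participant the same
    chance of ending up in any condition, which is the core requirement of
    a true experimental design.
    """
    rng = random.Random(seed)  # a fixed seed makes the assignment reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}
```

With two conditions and an even number of participants, each condition receives exactly half of the sample, keeping group sizes balanced for the later comparison of outcomes.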
At the same time, however, undertaking experimentation in the field can involve complications rarely seen in laboratory experiments. Field experiments, for instance, typically entail a significantly longer time frame (e.g., weeks and months rather than hours and days) and often engage a substantially larger number of research participants. The process of identifying eligible research participants in field experiments can be difficult, and the failure to retain research participants for follow-up (i.e., outcomes assessment) can be a serious threat to the generalizability of research results. Additionally, field experimentation commonly occurs in settings permeated with systematic and random noise where achieving an adequate degree of measurement precision or accuracy can be difficult (→ Measurement Theory). Some threats to internal validity (e.g., compensatory rivalry and the Hawthorne effect) can be particularly problematic in field experiments.
Generally, field experiments appear most commonly in → health communication research, with such projects typically operationalized as randomized controlled trials. Guidelines and practices incorporated in the randomized controlled trial, which is a refinement of the basic pre-test/post-test control group design, assist the researcher in overcoming many of the common limitations encountered in field experiments. Consequently, the randomized controlled trial, if effectively implemented, can yield the strongest evidence of causality of all research undertaken in realistic environmental and situational circumstances.
See also: Communication Skills across the Life-Span Experiment, Laboratory Health Communication Measurement Theory Media Effects Sampling, Random Validity
James B. Weaver, III
Centers for Disease Control and Prevention, Atlanta
Research utilizing experimentation is undertaken in a variety of contexts and settings. Overwhelmingly, the most common form of experimentation in communication research is the laboratory experiment. Laboratory experiments, when effectively operationalized (→ Operationalization) and carried out, afford strict experimental control by allowing for isolation of the research situation from the variety of extraneous influences that can impact both the experimental treatment or intervention (i.e., independent variable) and the subsequent outcome (i.e., dependent variable). Accordingly, laboratory experiments are typically structured on the more rigorous ‘true experimental designs’ and, consequently, yield the strongest evidence of causality (Wimmer & Dominick 2011).
An array of locales can be utilized in staging laboratory experiments, ranging from general purpose accommodations such as conference rooms, classrooms, lecture halls, and theatres to facilities specifically designed for experimentation. It is the researcher’s ability to structure and manipulate the experimental environment (e.g., lighting, temperature, soundproofing, seating arrangement of research participants), not the specific locale, that is the defining characteristic of experiments in the laboratory setting.
Beyond situational and environmental control, the laboratory setting allows substantial control over all aspects of the research process. Working in a laboratory setting, for example, significantly enhances the researcher’s ability to accurately identify eligible research participants, ensure their random assignment to treatment conditions, and extensively observe their progression through the research activity. The level of specificity achievable in the operationalization of independent variables – or, in other words, the extent and certainty with which treatment manipulations can be accomplished – and the consistency of their re-enactment are both extremely high in laboratory experiments. Equally important, the degree of precision possible in the assessment of outcomes (i.e., dependent variables), which promotes measurement reliability, is a key aspect of experimentation in laboratory settings (Kerlinger 1986).
Experiments in laboratory settings can also involve potential disadvantages. Perhaps the most obvious shortcoming is operational and environmental artificiality. Laboratory experiments facilitate the precise and systematic observation of human reactions under controlled conditions; but sometimes the experimental situation and/or experimental procedure is rather sterile and unnatural. Some behavioral and perceptual outcomes observed under such circumstances can have little direct application to those occurring in natural surroundings.
Because laboratory experiments typically involve extensive interaction between the researcher and research participants, the potential is great that researcher biases can emerge as threats to internal and external validity. ‘Experimenter bias’ is introduced into the experimental process when the researcher subtly communicates expectations about outcomes to the research participants. ‘Observer bias’ occurs during outcome measurement when the researcher overemphasizes expected behaviors and ignores unanticipated ones. Blinding methods are frequently incorporated into experimental procedures to avoid such distorting influences.
Laboratory experiments have proven instrumental in almost all areas of communication research, both deepening our understanding of basic communication phenomena and informing theory construction. Examples of such research areas are the theories of deceptive message production, interaction adaptation theory, → selective exposure to communication, attitude accessibility, excitation transfer theory, and many other fields of → media effects. For instance, Bryant and Zillmann (1984) put the subjects in their two experimental groups into mood states of either boredom or stress (by having them work on mathematical tasks of differing demands). Subsequently, they measured the time subjects in both groups spent with either exciting or calming video material. The study revealed the (subconscious) motivation to use media content to moderate one’s mood states.
See also: Attitudes Emotional Arousal Theory Experiment, Field Media Effects Mood Management Operationalization Selective Exposure Validity
Peter Vorderer
University of Mannheim
Leonard Reinecke
Johannes Gutenberg University of Mainz
‘Exposure to communication content’ describes one of the most recent areas of specialization within the communication discipline. It is located at the intersection of → media effects research and → audience research. This new perspective is primarily, but not exclusively, ‘psychological’ in its theorizing; it focuses on micro-level analyses but also describes macro-structures in explaining what happens during exposure. It looks at new technology as much as it looks at more traditional media, but most importantly, it studies what happens before people become exposed to media content, what happens while they are exposed, and finally what happens right after exposure, i.e., as an immediate consequence of it, thereby reaching into the realm of media effects research.
In an attempt to not only describe and explain the final effects of communication but also to include the processes involved during and even before exposure, scholars have defined new concepts, models (→ Models of Communication), and even theories from different disciplines and academic backgrounds, such as psychology and sociology but also from the humanities. Scholars in psychology primarily have formed a more differentiated understanding of how and why different (groups of) individuals approach specific media contents, whereas those with a humanistic background primarily have examined the media content itself and its (often social, socio-economic) context. In doing so, humanist scholars complicated what social scientists have often oversimplified. Thus, the overall picture of what is believed to happen during exposure to media content has become rather complex.
Most of the theoretical constructs that researchers have developed to describe the specific processes that precede exposure to communication and have an impact on it concern individual processes that lead to exposure (see Hartmann 2009 for an overview), such as personality (→ Personality and Exposure to Communication) or the specific individual motives and interests guiding media exposure, such as → escapism, → information seeking, or → sensation seeking. The main differences between these concepts lie in their theoretical complexity and their specificity. Constructs like → mood management reduce the complexity of a media user’s decision process to a single dimension (i.e., the maximization of positive mood). The more inclusive concept ‘behind’ mood management, → selective exposure, claims that the selection of specific media content follows some psychological regularity, mood management being the most important one.
With regard to what is thought to occur during exposure, ethnicity and → media use by social variables are considered to affect how the audience deals with content. In addition, research has also addressed how individuals relate cognitively and affectively to characters on the screen by engaging in so-called → parasocial interactions and relationships with them. Due to the many new technologies that have emerged recently, researchers have applied additional constructs such as → presence and → involvement with media content, → computer–user interaction, and physiological processes like → excitation and arousal. Some of these constructs are defined on the basis of what we know from psychology and from cognitive science about → perception (see Lang 2009), while others refer to the affective quality of such experiences (see Bryant & Vorderer 2006).
Compared to the many processes and constructs studied before and during exposure, only a few that occur after exposure have been addressed – probably because they have most often been linked to the area of media effects (see Zillmann & Oliver 2009). One example of such processes is addiction and exposure, which is not limited to any single step in the process but refers to an effect of exposure that leads someone to reinitiate the process again and again, often without much awareness of its coercive nature.
The majority of available theories and models have tried to analyze exposure by referring to what precedes it, i.e., by linking exposure to the reasons media users may have for acting in a particular way. Those theories focus on either cognitive or affective processes. According to → cognitive dissonance theory, exposure to communication content is a function of whether a message is consistent with the users’ attitudes. Affective disposition theories suggest that media users are primarily interested in witnessing protagonists succeed and antagonists fail, having developed affective dispositions toward them – which explains exposure to entertainment content (→ Affective Disposition Theories). More generally, the → uses and gratifications perspective asserts that media users choose content that promises to gratify their interests and needs. Looking at the immediate consequences of exposure to communication content, → social cognitive theory is arguably the most influential attempt to describe and explain why exposure to a specific content may lead to certain consequences.
Another way of systematizing the field of exposure to communication content is to distinguish between various types of content. In that respect, research on exposure to print media, television, and radio follows a theoretical tradition that is well embedded in the discipline of communication (→ Exposure to Print Media; Exposure to Radio; Exposure to Television). In contrast, more recent lines of research, such as those that study → exposure to the Internet, are more interdisciplinary in nature.
The final perspective that is taken here to systematize the various contributions to the field of exposure to communication content is one that distinguishes between different audiences. Over recent years, empirical research on the audience has grown in size but has also diversified to address not only the general public but also the audiences of specific media offerings. While being studied, the audience itself has changed; thus, → audience segmentation has attracted a lot of interest within the discipline.
Where will this development lead, and what may we expect? In a situation like this, scholars often suggest the integration of loose ends, i.e., a synergistic approach to integrate various perspectives. However, this expectation is probably unrealistic, at least for the near future, given the variety of theoretical approaches within this research context. In addition to the presence of competing theoretical paradigms, research in the field of exposure to communication content has also been characterized by numerous changes in the concept of the individual (or user) throughout its development (e.g., the concept of a weak audience in early communication research vs. the idea of a strong and active audience in the uses and gratifications tradition; cf. Potter 2009).
In sum, the area of exposure to communication content is expanding in different directions and differentiating its view of the media user and the processes involved in media exposure. As a consequence, increased theoretical coherence and integration in the area may be a rather long way off.
See also: Affective Disposition Theories Affects and Media Exposure Audience Research Audience Segmentation Cognitive Dissonance Theory Computer–User Interaction Escapism Ethnicity and Exposure to Communication Excitation and Arousal Exposure to Print Media Exposure to Radio Exposure to Television Exposure to the Internet Information Seeking Involvement with Media Content Media Effects Media Effects, History of Media Equation Theory Media Use, International Comparison of Media Use by Social Variable Models of Communication Mood Management Parasocial Interactions and Relationships Perception Personality and Exposure to Communication Presence Selective Exposure Selective Perception and Selective Retention Sensation Seeking Social Cognitive Theory Social Comparison Theory Uses and Gratifications
Wiebke Möhring
Hanover University of Applied Sciences and Arts
Beate Schneider
Hanover University of Music, Drama and Media
People use periodically published printed mass media in many different ways. Print media serve as sources of orientation and → information, provide models of behavior, and serve as frames of reference for possible dissociation and identification, differentiation, and participation. Additionally, they provide content for personal communication (→ Interpersonal Communication), relaxation, and emotional relief. International comparative research on the use of print media has to take into account that motivation for using print media, as well as their circulation and availability, is embedded in the cultural, political, and societal structures of the national systems in question, and that it is also dependent on economic conditions (→ Media Use, International Comparison of).
Level of education and literacy are significant socio-cultural indicators for print media use in a given country (→ Media Use by Social Variable). Reading skills and motivation, both essential conditions for reading, are complexly interrelated. Reading is a cognitive process (→ Cognitive Science). The reader actively and constructively incorporates the content of the text into pre-existing knowledge structures, based on the reader’s familiarity with language and the medium used, and on knowledge of the world (→ Information Processing). In addition, the reader’s motivation and interests play a pivotal role. Compared to electronic media, reading print media requires complete attention and focus, excluding all other activities (→ Attending to the Mass Media). On the other hand, the mode of exposure allows readers to use print media irrespective of place and time and thus offers greater accessibility and the flexibility to interrupt and resume reading at the user’s discretion. A number of factors influence newspaper use: income, age, sex, level of education, race, length of residence in a community, mobility, number of children in a household, marital status, housing condition, and interest in politics. Respondents name lack of time, lack of interest, preference for a different medium, and cost as reasons for not reading newspapers.
Generally speaking, we can identify three subject areas that stimulate the use of print media: ‘hard news’ (e.g., politics and business; → News), ‘soft news’ (e.g., people or society), and sports (→ Sports and the Media, History of), while local interest topics cut across these three subject areas. Notwithstanding the continuing strength of the newspaper market in many countries, a downward trend in newspaper reading can be observed worldwide. Consequently, circulation and coverage of dailies have been decreasing for years. Online distribution can hardly absorb the losses (→ Internet News; Online Media; van der Wurff & Lauf 2005; Mögerle 2009).
In contrast to newspapers, magazines, in all their variety, are particularly sensitive to trends and fashions, both national and global. Magazines have to adapt quickly and thoroughly to the altered needs of their readership. Consequently, fluctuation in the magazine market is high. This trend is reinforced by the readers’ behavior. Nowadays, readers use a broader range of titles, and traditional target groups are losing their validity (→ Audience Segmentation). Reader motivation and the functions of magazines vary widely and cover the whole spectrum, from specific information to distraction or entertainment. At the same time, magazines represent certain images and thus can also take over functions of social identification for the user.
Measuring print media use and impact presents a methodological challenge. Readers often use different sources of information, talk about content, and understand reports differently. The general problem in readership research is the respondents’ recall of sources and the given fact that sources of information are usually more easily forgotten than the information itself (→ Audience Research).
See also: Attending to the Mass Media Audience Research Audience Segmentation Cognitive Science Information Information Processing Internet News Interpersonal Communication Longitudinal Analysis Meaning Media Economics Media Use, International Comparison of Media Use by Social Variable News Online Media Sports and the Media, History of
Holger Schramm
University of Würzburg
Radio is the medium with the highest relevance for media users in daily life – at least with respect to the amount of exposure time (→ Radio: Social History). People in western industrialized countries listen to radio for about three hours each day, with a daily reach of about 80 percent. Radio consumption has decreased massively since the beginning of the twenty-first century, especially among people under the age of 40, due to the increasing use of mobile music media like MP3 players (Schramm 2006). About 90 percent of radio consumption occurs while people pursue other activities at the same time, such as eating, working at home (e.g., cleaning, cooking, ironing), working outside the home (e.g., gardening, office work), or driving (MacFarland 1997).
People use radio out of several motives. Its central function is to accompany other activities in order to ease the workload, to pass the time, and to compensate for monotony. Further, several emotional motives can be identified in radio use: stimulation of excitation, activation versus damping/catalyzing of excitation, abreaction, relaxation (→ Excitation and Arousal), wallowing in memories, distraction, daydreaming (→ Escapism), social belonging, affiliation (→ Social Identity Theory), distinction, social comparison (→ Social Comparison Theory), → parasocial interactions and relationships, social alternative, and → information seeking and life assistance (MacFarland 1997; Schramm 2006; → Audience Research; Affects and Media Exposure; Mood Management). The primary content of most radio programs is music, which makes up, on average, about 70 percent of airtime. In order to create music programs compatible with large groups of people, the degree of complexity of radio music must remain rather low (Ahlkvist & Fisher 2000).
See also: Affects and Media Exposure Audience Research Audience Segmentation Escapism Excitation and Arousal Information Seeking Mood Management Parasocial Interactions and Relationships Radio: Social History Social Comparison Theory Social Identity Theory
Uwe Hasebrink
Hans Bredow Institute for Media Research at the University of Hamburg
Research on exposure to television deals with the question, what do people do with television? The television industry has an existential interest in finding out how many people watch its programs, in order to sell these data to the advertising industry (→ Media Economics). Beyond this, information on exposure to television is a necessary condition for statements on the role of television in people’s everyday lives and its social and individual consequences (→ Exposure to Communication Content).
The dominant line of research aims to describe and explain the viewing behavior of aggregate audiences. The industry has developed sophisticated mechanisms to construct the “mass audience” as the dominant model of research (Webster & Phalen 1997). In most countries this research relies on electronic meter systems that register any screen-related activity (→ Audience Research). The most common indicator for TV exposure is ‘reach,’ the percentage of the population that had at least one contact with the particular television offer of interest. In recent years, the reach of television on an average day has been stable at a high level in most developed countries (75 to 85 percent of the population; IP Networks 2013). The viewing time indicator reflects the average duration of use; recent figures for industrialized countries are between three and five hours per day for every person (IP Networks 2013). Particular attention is paid to channel-related indicators. The share of a channel or program expresses the percentage of viewing time devoted to this channel or program relative to total viewing time. In recent years the average share of individual channels has been decreasing substantially, a finding that is interpreted as → audience segmentation (Webster 2005, 367).
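The three standard indicators (reach, viewing time, and share) can be computed directly from meter-style records. The following sketch assumes a simplified data format of (person, channel, minutes) tuples for a single day; this format and the function name are illustrative, not an industry standard:

```python
def audience_indicators(sessions, population_size):
    """Compute daily reach, per-capita viewing time, and channel shares.

    sessions: list of (person_id, channel, minutes) tuples for one day.
    """
    viewers = {person for person, _, _ in sessions}
    total_minutes = sum(minutes for _, _, minutes in sessions)
    # Reach: percentage of the population with at least one contact
    reach = 100 * len(viewers) / population_size
    # Viewing time: average duration of use per member of the population
    viewing_time = total_minutes / population_size
    # Share: each channel's percentage of total viewing time
    by_channel = {}
    for _, channel, minutes in sessions:
        by_channel[channel] = by_channel.get(channel, 0) + minutes
    share = {ch: 100 * m / total_minutes for ch, m in by_channel.items()}
    return reach, viewing_time, share
```

Note that viewing time here is averaged over the whole population, not only over actual viewers, which matches the convention behind per-person figures reported for industrialized countries.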
Another practically interesting line of research is called audience duplication research, because it is empirically based on the percentage of viewers of a certain program who also watch a certain other program at another time (Cooper 1996). The concept of ‘channel loyalty’ means that viewers tend to select programs on a particular channel. More specifically, the ‘inheritance effect’ means that viewers of a program are likely to watch the next program on the same channel. ‘Repeat viewing’ is defined as the degree to which viewers are likely to watch two different episodes of the same program.
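Audience duplication indicators such as the inheritance effect and repeat viewing reduce to a simple set-overlap computation; a minimal sketch (function name hypothetical):

```python
def duplication(viewers_a, viewers_b):
    """Percentage of the viewers of program A who also watched program B.

    For two adjacent programs on the same channel this approximates the
    'inheritance effect'; for two episodes of the same program it
    measures 'repeat viewing'.
    """
    a = set(viewers_a)
    if not a:
        return 0.0
    return 100 * len(a & set(viewers_b)) / len(a)
```

For example, if programs A and B were watched by viewer sets {1, 2, 3, 4} and {2, 3, 5}, the duplication of A's audience in B is 50.0 percent.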
Some other lines of research, mainly in the academic area, examine exposure to television as individual behavior. This kind of research is more interested in the psychological processes linked to the selection, interpretation, and appropriation of televised content, in interindividual differences between viewer groups, and in intraindividual differences between situations. With regard to interindividual differences between (groups of) viewers, a general finding is that elderly people watch substantially more television than younger people (→ Media Use across the Life-Span), and people with less formal education more than better-educated people (→ Media Use by Social Variable). Another explanation for stable interindividual differences in exposure to television refers to traits. In particular, → sensation seeking is one factor that explains differences in the extent to which people watch exciting action and violence-oriented programs. The broad research on → selective exposure to television has provided strong evidence of how viewers, based on their individual needs and interests, selectively compose their personal television repertoire. Finally, affects and moods have been shown to be important determinants of viewing behavior (→ Affects and Media Exposure; Mood Management).
One of the future conceptual challenges of research on television exposure will be how to identify and classify the increasing number of audiovisual services that are similar to television but not (yet) regarded as television. Due to these challenges of new media environments, Napoli (2011, 149ff.) even questions the role of exposure as the former key concept of audience research, and points to the increasing importance of alternative audience conceptualizations, e.g., interest, appreciation, and engagement.
See also: Advertising Affects and Media Exposure Audience Research Audience Segmentation Exposure to Communication Content Media Economics Media Use across the Life-Span Media Use by Social Variable Mood Management Selective Exposure Sensation Seeking
Robert J. Lunn
FocalPoint Analytics, Oxnard, CA
This entry is not concerned with presenting rapidly changing descriptive data on the use of the Internet (for international data see Internet World Stats 2014; Pew Research 2014), but with the factors explaining growth of and differences in exposure to the Internet between countries.
Numerous studies have established that the diffusion of Internet access follows an S-shaped curve. What is not well understood are the factors responsible for different levels of Internet access among different countries (→ Digital Divide; Media Use, International Comparison of). Many theories of the diffusion of innovations, such as the Bass model, focus on individual factors, such as perceived need (Bass 1969; → Diffusion of Information and Innovation; Media Use by Social Variable). The inadequacy of this theoretical stance becomes readily apparent when we consider that cultures exhibiting low levels of gender empowerment deny Internet access to half of their populations. This example of the influence of culture (→ Culture: Definitions and Concepts) also illustrates that adoption of the Internet is affected by factors beyond simple exposure to the technology.
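The S-shaped cumulative adoption curve referred to above has, in the Bass model, a closed form: F(t) = m(1 − e^(−(p+q)t)) / (1 + (q/p)e^(−(p+q)t)). A minimal sketch using the conventional parameter names (p: innovation or external influence, q: imitation or internal influence, m: market potential):

```python
import math

def bass_cumulative(t, p, q, m=1.0):
    """Cumulative number of adopters at time t under the Bass (1969) model.

    p: coefficient of innovation (external influence, e.g., mass media)
    q: coefficient of imitation (internal influence, e.g., word of mouth)
    m: market potential (total number of eventual adopters)
    """
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)
```

With q greater than p the curve is S-shaped: adoption starts slowly, accelerates as imitation takes over, and saturates at the market potential m. Whether m in fact covers a country's entire population is an empirical question, not a given.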
A common misconception, termed the ‘pro-innovation bias,’ occurs when researchers assume that innovations, such as the Internet, will eventually be adopted by all members of a social system. In reality, the degree to which innovations diffuse through a population is a complex function of many different factors, some of which act to impede the diffusion process. Pro-innovation bias is inadvertently created when researchers plot percent adoption of an innovation at a single time point, using multiple countries. The resultant curve is indeed often S-shaped but it carries with it the implicit assumption that all of the plotted countries will follow a universal diffusion trajectory, and that, in time, countries at the lower left of the curve will eventually ‘catch up’ with the countries on the top right of the curve.
Several researchers have suggested that the primary reason for cross-national inequalities in Internet access resides in differential economic development (e.g., Norris 2001). In this regard, findings that implicate public investment in human capital and infrastructure are important because they associate aspects of economic influence beyond the concept of GDP per capita with a country’s degree of Internet access. Increases in life expectancy and literacy require long-term investments in large-scale public services and facilities, such as public health, telecommunications, or schools. Aspects of wealth such as education and infrastructure take a considerable amount of time to develop, and consequently policies designed to enhance Internet access through interventions of short-term economic aid are questionable.
Norris (2001) labels individuals who prescribe an economic interventionist policy “cyber-optimists.” In conjunction with short-term economic aid, a cyber-optimist would expect Internet access to eventually diffuse throughout a country’s entire population. This diffusion pattern (the “normalization” pattern) might typically be expected to occur in wealthy countries with cultures that value and reward innovative behavior. Alternatively, “cyber-pessimists” would expect that, regardless of economic aid, digital technology would more likely amplify existing inequalities of power, wealth, and education, creating deeper divisions between the advantaged and disadvantaged (→ Technology and Communication). This stratification pattern of Internet diffusion suggests that individuals who do adopt are subject to country-specific cultural and economic restrictions rather than the simple fulfillment of individual needs and exposure to the technology through mass media. Both adoption patterns yield S-shaped curves that can be fitted by mathematical formulations such as the Bass model. However, results following a stratification pattern are difficult to explain in terms of social contagion theory or Bass model coefficients.
Given the normalization and stratification diffusion patterns, it is natural to ask whether the factors responsible for the diffusion of Internet access are amenable to ‘quick-fix’ treatments, such as the insertion of technology or short-term economic aid, or whether the degree of Internet access is shaped by more deep-seated forces, such as culture. Norris reports that affluent Middle Eastern countries have relatively low Internet access levels, which challenges the assertion that economic factors are solely responsible for degree of Internet access. Notably, the culture of these countries acts to inhibit Internet access for half of their population, i.e., females (→ Feminist and Gender Studies). This is decidedly not a small effect and points to the danger of making generalizations when predictive factors are causally entangled.
A second point of consideration is that explanations based on social contagion theory should work best in countries that follow a normalization pattern of Internet adoption (e.g., the US and western European countries). In these countries we would expect the adoption decision to be largely under an individual’s control, moderated by exposure to mass media. Social contagion models should fail, however, when attempting to predict Internet diffusion in countries that follow the stratification pattern. Seen from this perspective, social contagion explanations for the diffusion of innovations, and the use of classical Bass model coefficients, appear to be an artifact of early diffusion research that focused predominantly on the US and western Europe, countries where the individual-oriented normalization pattern holds and inhibiting factors such as fear, diminished gender empowerment, and low levels of long-term economic development are minimized.
The implication of the existing explanations for cross-country differences in Internet exposure is that models need to adopt a mixed-level hierarchical modeling approach, in which one level captures the moderating influence of country-specific factors and another deals with individual needs and communication channels.
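Such a two-level structure can be rendered as a minimal sketch. The logistic form, the names, and every coefficient below are illustrative assumptions, not estimates from any study; the point is only that a country-level term shifts the baseline within which individual-level factors operate:

```python
import math

def adoption_probability(individual_need, media_exposure, country_intercept,
                         b_need=1.0, b_media=0.8):
    """Schematic two-level model of Internet adoption.

    Level 1 (individual): needs and exposure to communication channels.
    Level 2 (country): an intercept standing in for cultural and
    economic context, shifting the baseline for everyone in that country.
    All coefficients are hypothetical, for illustration only.
    """
    logit = (country_intercept
             + b_need * individual_need
             + b_media * media_exposure)
    return 1.0 / (1.0 + math.exp(-logit))

# The same individual profile in two different country contexts
p_open = adoption_probability(1.0, 1.0, country_intercept=0.5)
p_restrictive = adoption_probability(1.0, 1.0, country_intercept=-3.0)
```

Under a normalization pattern the country intercept matters little; under a stratification pattern it dominates, which is why single-level contagion models miss the latter.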
See also: Affects and Media Exposure; Culture: Definitions and Concepts; Diffusion of Information and Innovation; Digital Divide; Ethnicity and Exposure to Communication; Exposure to Communication Content; Feminist and Gender Studies; Interpersonal Communication; Involvement with Media Content; Longitudinal Analysis; Media Use by Social Variable; Media Use, International Comparison of; Regression Analysis; Sampling, Random; Technology and Communication; Two-Step Flow of Communication; Uses and Gratifications; Validity
Kim Witte
Michigan State University
According to the Extended Parallel Process Model (EPPM), when people are faced with a threat, they either control the danger or control their fear about the threat. The variables that determine which of these two processes occurs are defined as follows.
Perceived threat, or the degree to which we feel susceptible to a serious threat, is composed of two dimensions. The first, severity of threat, refers to the perceived seriousness of a threat, i.e., the magnitude of harm we think we might experience if the threat occurred (e.g., injury, loss, death, or disgrace). The second, susceptibility to threat, is the perceived likelihood of experiencing the threat. Perceived efficacy, or the degree to which we believe we can feasibly carry out a recommended response to avert a threat, is likewise composed of two dimensions: beliefs about whether a recommended response works in averting the threat (response efficacy) and beliefs about our ability to perform that response (self-efficacy).
Overall, the EPPM suggests that when people feel at risk of a significant threat, they become scared and are motivated to act. Perceptions of self-efficacy and response efficacy then determine whether they control the danger or control their fear. When people feel able to perform an action that they believe effectively averts the threat (strong perceptions of self-efficacy and response efficacy), they are motivated to control the danger and engage in self-protective health behaviors. In contrast, when people feel unable to perform the recommended response and/or believe the response to be ineffective, they give up trying to control the danger. Instead they control their fear by denying their risk, defensively avoiding the issue, adopting a fatalistic attitude, or perceiving manipulation.
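The EPPM is a verbal model, but its decision logic can be rendered schematically. The 0-to-1 scales and the threshold value below are illustrative devices for exposition, not part of Witte's formulation:

```python
def eppm_outcome(perceived_threat, perceived_efficacy, threat_threshold=0.5):
    """Schematic rendering of EPPM predictions on illustrative 0-1 scales.

    Threat below threshold produces no response; above it, the balance
    of efficacy against threat decides between danger and fear control.
    """
    if perceived_threat < threat_threshold:
        return "no response"      # threat too weak to motivate any action
    if perceived_efficacy >= perceived_threat:
        return "danger control"   # self-protective behavior
    return "fear control"         # denial, avoidance, fatalism, reactance

low_threat = eppm_outcome(0.2, 0.9)       # "no response"
high_efficacy = eppm_outcome(0.8, 0.9)    # "danger control"
low_efficacy = eppm_outcome(0.8, 0.3)     # "fear control"
```

The sketch makes the model's central interaction visible: threat determines whether people respond at all, while efficacy determines which kind of response they produce.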
The EPPM has been used to guide interventions such as education-entertainment radio dramas and worker notification programs for beryllium exposure. It can be used to tailor messages that promote danger control responses via interpersonal channels (as in counselor–client, doctor–patient, or peer educator encounters), mass media channels, or → social media.
See also: Applied Communication Research; Health Campaigns, Communication in; Health Communication; Information Processing; Persuasion; Risk Communication; Risk Perceptions; Social Media
Wolfgang Donsbach
Dresden University of Technology
The term “extra-media data” describes a methodological approach to assessing the quality of media content. The phrase was coined in the early 1970s by Swedish scholar Karl Erik Rosengren (1970) during a controversy about the criteria needed to assess → bias in the news. Rosengren suggested that researchers should evaluate the performance of news media, for instance the influence of → news factors on → news value, by comparing media coverage to external, primarily statistical indicators. In communication research today we can find at least three different approaches to assessing the quality of media content against sources from outside the media.
Funkhouser (1973) used statistical data and compared the number of news articles on several political issues in the USA in the 1960s with statistical indicators for their real salience, for instance the number of US soldiers fighting in Vietnam. In their seminal “MacArthur Day study,” Lang and Lang (1953) compared the impressions of an event when seen on television with the impressions of the same event when participating in it. The authors attributed the discrepancy between these impressions to a → reciprocal effect created by the presence of the television camera itself. Many years later Donsbach et al. (1993) applied the same approach in an experimental study on a campaign speech. Lichter et al. (1986) surveyed experts on nuclear energy about the potential risks of this technology and compared the result to the opinions expressed by experts cited or interviewed in the media.
These examples show that using such reality indicators to assess media coverage with extra-media data is problematic. In most cases the concrete indicators either represent only one aspect of the issue or event they are supposed to indicate, or may themselves be biased. Further, in many areas no such extra-media data exist. Nevertheless, if one assumes that some scientific measure of the quality of reality representation in the media is important, the comparison of media content with extra-media data is probably the strongest tool available.
See also: Bias in the News; Media Effects: Direct and Indirect Effects; News Factors; News Values; Objectivity in Reporting; Reciprocal Effects