
9

EVALUATING ONLINE HEALTH INFORMATION SYSTEMS

Gary L. Kreps and Jordan Alpert

Introduction

Powerful new online health information systems hold tremendous promise for enhancing the delivery and use of risk and health communication programs (Kreps, 2011b; 2015). The rapid growth and widespread adoption of these systems, such as health information websites, interactive health decision support systems, and mobile health devices, have demonstrated great potential to enhance responses to health and risk issues by supplementing and extending traditional channels for communication (Kreps, 2015). The use of new health information technologies can enable broad dissemination of relevant health information that can be personalized to the unique information needs of individuals facing health risks (Kreps, in press; Neuhauser & Kreps, 2008). These e-health communication channels can provide health care consumers and providers with the relevant health information they need to respond to serious health and risk issues exactly when and where they need the information (Krist, Nease, Kreps, Overholser, & McKenzie, 2016).

Unfortunately, many of the enthusiastic predictions about the contributions of digital health programs to promoting responses to risk and health issues have not come to fruition, and the great potential of health information systems has yielded limited returns (Kreps, 2014a; 2014b). Too often, health information technologies fail to communicate effectively with users due to problems with the ways the systems are designed and implemented with different audiences (Kreps, 2014b; Neuhauser & Kreps, 2003; 2008; 2010). To enhance the quality of online health and risk information systems, rigorous evaluation research needs to guide the design and refinement of these systems (Alpert, Krist, Aycock, & Kreps, 2016b; Kreps, 2002; 2014a; 2014c). Regular, rigorous, and ongoing evaluation of health and risk communication programs is necessary to guide their development, refinement, and strategic planning (Green & Glasgow, 2006; Rootman et al., 2001).


Failure to engage in careful and concerted evaluation research is likely to doom online health and risk communication systems to failure (Kreps, 2002; 2014a). Evaluation research answers important questions about the specific influences online health communication programs have on different audiences, identifying which audiences are paying attention to the programs and what they are learning from them (Kreps, 2014a; Kreps & Neuhauser, 2013). Rigorously collected evaluation data can help identify whether online health and risk communication programs are having unintended influences, including boomerang and iatrogenic (negative) effects (Cho & Salmon, 2007; Ringold, 2002). Poorly designed health and risk communication programs have had negative influences on key audiences, such as the infamous National Youth Anti-Drug Media Campaign, which, instead of combating the risk of widespread youth drug abuse, actually served to increase at-risk youths’ interest in using illegal drugs (Hornik, Jacobsohn, Orwin, Piesse, & Kalton, 2008). Well-conducted evaluation research can help explain why health and risk communication programs do or do not work, as well as which parts of these programs work most effectively (Kreps, 2014a).

Formative Research

Formative evaluation research is conducted prior to the introduction of health and risk communication information systems to guide the design of these programs (Kreps, 2002; 2014a). Formative evaluation helps health information system designers answer key questions about the goals and purposes of the programs they are developing. The process of gathering formative evaluation data can clarify what system designers want to accomplish with specific health and risk communication programs, which audiences they want to reach and influence, and what they want audience members to do in response to these digital information programs.

Formative data can provide essential information to system designers about the audiences they want to reach, such as what health and risk issues audience members are likely to be interested in, what audience members currently know about key health and risk issues, and which messages are likely to make sense to and resonate with different audiences. Formative evaluation data can be used to establish measurable goals and outcomes for online health and risk information programs. Formative data can also be used to establish baselines of current knowledge about health and risk issues, as well as to identify key health and risk activities to track over time. Formative evaluation research can inform the adoption of relevant theories and intervention strategies to guide development and implementation of health and risk information systems. Furthermore, good formative evaluation research can also help ensure that health and risk communication programs are sensitive to unique audience needs, cultural orientations, literacy levels, and expectations (Kreps, 2014a; Neuhauser & Paul, 2011).


There are two primary and interrelated forms of formative evaluation research that are critically important to the design of health and risk information systems: needs analysis and audience analysis. Needs analysis is conducted to help system designers develop a full understanding of the scope of health and risk issues, relevant behaviors, and current levels of knowledge about specific health and risk issues confronting different audiences. Needs analysis data help system designers focus their programs on the most relevant health and risk issues and provide audiences with the most useful and up-to-date health and risk information. Needs analysis also helps system designers determine the gaps between what is currently being done to respond to serious health and risk issues within different communities and what needs to happen to promote improved health and well-being.

Needs analysis data can often be most effectively collected through the use of multiple research methods. Sometimes it is best to begin by examining existing data sets. Archival analysis of existing data sets and materials can provide a wealth of relevant information for guiding the design and implementation of digital health and risk communication programs (Alpert, Krist, Aycock, & Kreps, 2016c; Kreps, 2011a; 2014a). Useful archival sources include relevant epidemiological studies about disease incidence and outcomes, previously conducted key audience surveys about knowledge and experiences concerning health and risk issues confronting different communities, and public and private health utilization records and research reports concerning best practices for addressing specific health issues. Sometimes, when insufficient data have already been collected about specific health and risk issues, new data need to be collected to fully evaluate the health issues within specific communities. New needs analysis data can be collected using multiple methods, including self-report and observational measures, such as surveys, interviews, and direct observations. Both quantitative and qualitative needs data can help health and risk information system designers develop a full understanding of relevant health issues.

Situation analysis is a form of needs analysis that examines the history and extent of specific health and risk issues within communities, focusing on how widespread the health and risk issues are, whom the issues affect, how the issues have been responded to in the past, and which recommendations have been made for ideal ways to address them. Channel analysis is a form of needs analysis that focuses on examining the current health and risk information systems being used within communities and how effective these channels have been in disseminating relevant health and risk information. SWOT analysis (strengths, weaknesses, opportunities, and threats) is a needs analysis framework that focuses on identifying and analyzing the internal and external factors that can have an impact on addressing community health and risk issues (van Wijngaarden, Scholten, & van Wijk, 2010). Needs analysis is essential for helping system developers understand the nature of the health and risk issues that information systems are designed to address. It also indicates the kinds of information needed to address key health and risk issues.


Audience analysis, the second primary form of formative evaluation research, focuses on providing information about the different key populations that health and risk information system designers want to reach and influence. Audience analysis should provide system designers with information about which groups of people are at greatest risk for different health threats, what they currently know about key health threats, and what these audiences still need to know. It tells system designers what beliefs, attitudes, and values key audiences hold relevant to the health and risk issues that need to be addressed, how the audiences have responded to similar health and risk issues in the past, which channels of communication they use for accessing health and risk information, and how effective these channels have been at providing them with accurate, relevant, and up-to-date health and risk information. Audience analysis also provides data about the most relevant communication characteristics of different key audiences, such as the primary languages they speak, their health literacy levels, their levels of trust in different information sources, and their receptivity to information about different health and risk issues. Audience analysis data are essential for guiding the design of responsive health and risk information systems.

Audience analysis data help system designers segment the most relevant and homogeneous audiences for different health and risk information systems, so the information systems can be designed to be meaningful and influential for these key populations. This means that health information systems are typically best designed for specific audiences, illustrating the popular maxim that “one size does not necessarily fit every audience” (Kreps, 2012). Audience analysis data are generally collected by conducting interviews, focus groups, and surveys to gather self-report information from different populations. Sometimes key documents, such as websites, online posts, letters, and newspapers, are examined through content analysis to identify key audience beliefs and attitudes about salient health and risk issues. Secondary analysis of relevant surveys, such as examination of data from the Health Information National Trends Survey (HINTS), can also provide relevant audience analysis data (Finney Rutten, Hesse, Moser, & Kreps, 2011; Hesse et al., 2005). In addition, observational data can provide insightful audience analysis data for guiding design of health and risk information systems. Formative evaluation research provides the rationale and direction for designing health and risk information systems that address important issues, provide relevant and up-to-date health and risk information, and reflect the unique cultures, communication orientations, and health information needs of intended audiences (Kreps, 2014a).
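To illustrate how quantitative audience analysis data might inform segmentation, the minimal sketch below (in Python) clusters hypothetical survey respondents on a few numeric attributes: a health literacy score, trust in online sources, and weekly hours online. The feature values, the choice of two segments, and the use of k-means clustering are all assumptions for demonstration, not a prescribed method.

    # A minimal sketch of audience segmentation from survey data, assuming
    # hypothetical numeric features; real segmentation would use validated
    # measures and a principled choice of the number of segments.
    import numpy as np
    from sklearn.cluster import KMeans

    # Each row is one respondent: [health literacy, trust, weekly hours online]
    responses = np.array([
        [72, 4, 21], [65, 3, 18], [40, 2, 3],
        [38, 2, 5],  [80, 5, 25], [35, 1, 4],
    ])

    # Partition respondents into homogeneous segments.
    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(responses)

    for segment_id in np.unique(segments):
        members = responses[segments == segment_id]
        print(f"Segment {segment_id}: n={len(members)}, "
              f"mean literacy={members[:, 0].mean():.1f}, "
              f"mean trust={members[:, 1].mean():.1f}")

In practice, segments derived this way would be checked against the qualitative audience data described above before message strategies are tailored to them.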


As social media has become a vital channel for conducting online health communication campaigns, it can also be leveraged as a powerful tool during formative evaluation research. In general, web-based tools and channels, including popular social media systems like Facebook and Twitter, have advantages over in-person methods because these digital channels can transcend normal physical barriers (geographical distance and time constraints), making searchable content convenient to access and encouraging interactivity (Chu & Chan, 1998; Kreps, in press). Social media reaches large and specific audiences that share common interests. Evidence suggests that up to 60% of public health departments employ at least one social media application, with nearly 90% of those using Twitter and 56% using Facebook (Thackeray, Neiger, Smith, & Van Wagenen, 2012). Facebook is the most popular social network for individual use, including use by 62% of adults 65 and older (Greenwood, Perrin, & Duggan, 2016). Other applications like Twitter, Pinterest, and Instagram are gaining popularity and tend to be used more by online adults ages 18–29, showing increasing potential for communicating health and risk information (Greenwood et al., 2016).

Feedback and input from social media users can be collected by reviewing websites, blogs, or social networking groups (Burke-Garcia, Berry, Kreps, & Wright, 2017). For instance, needs or audience analysis data from a key population segment with relevant experience of a particular health and risk topic can be gathered by posting a question on a Facebook wall to trigger a discussion (Neiger et al., 2012). This method is particularly effective for gathering insights from hard-to-reach populations or when stigmatized issues are concerned. For example, a discussion was created on a popular social networking website to understand teenagers’ HIV risk prevention strategies (Levine et al., 2011). This technique enabled the researchers to capture the exact language used by teenagers, which could then be used to create appropriate risk prevention messages, and was a convenient and low-cost means of collecting valuable health communication insights.

Process Evaluation Research

To ensure that online health information systems achieve their health communication goals, it is important to test key program components during the roll-out and use of digital health and risk communication interventions through process evaluation research (Moore et al., 2014). Health and risk information programs need to be carefully assessed to determine their suitability and effectiveness with different audiences for addressing specific health and risk issues. User responses can be tracked to determine whether health and risk information programs are working well with different audiences. Tests are often conducted to determine how effective the message strategies and communication channels used are for disseminating health and risk messages. Field tests are often conducted to determine how well the digital health and risk intervention programs have been implemented in key settings. User responses to programs can be tracked over time, especially after refinements are made to the programs, to illustrate program usage trends. Digital health and risk information programs can also be tested experimentally to determine how acceptable and usable they are for key audience representatives. These tests often generate user recommendations for refining health and risk communication program features that can be implemented to improve intervention programs. Process evaluation is essential for identifying strategies to improve the quality and delivery of online health and risk communication systems.


Process evaluation data can be collected with user-response systems, such as questionnaires, interviews, or focus groups, that ask representative program participants about their experiences using the health information system, as well as elicit their evaluations of the strengths and weaknesses of program components. These tools are sometimes referred to as user satisfaction surveys. The Critical Incident Method is an especially useful and sophisticated qualitative user-response system for process evaluation that asks representative users about the best and worst elements in health and risk information systems, providing insightful data leading to in-depth recommendations for emphasizing the strongest parts and refining the weakest elements of health and risk information systems (Alpert, Kreps, Wright, & Desens, 2015).

Message testing experiments are also often used to assess user responses to health and risk messages, examining how much users liked the messages, as well as how informative, believable, and influential the messages were. System users are typically asked to provide suggestions for revising the health and risk messages to make them clearer, more interesting, and more influential. A/B testing is a message testing strategy that compares two versions of a webpage or digital health and risk communication application against each other to determine which one performs better. Sometimes eye-tracking tests are conducted to determine which messages respondents focus on and which messages they find most arousing. There are also standardized text analysis programs for assessing the readability of health and risk communication system content, such as the CDC’s Clear Communication Index (Alpert, Desens, Krist, & Kreps, 2016a).
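To make the A/B testing logic concrete, the following minimal sketch compares two message versions with a two-proportion z-test. The visitor counts and click-through numbers are hypothetical, and a real message test would also attend to randomization, sample size planning, and multiple comparisons.

    # A minimal sketch of analyzing an A/B message test with a two-proportion
    # z-test; all counts are hypothetical.
    from math import sqrt, erfc

    def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
        """Return (z, two-sided p-value) comparing two conversion rates."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
        return z, p_value

    # Version A: 120 of 2,000 visitors acted on the message; Version B: 165 of 2,000.
    z, p = two_proportion_ztest(120, 2000, 165, 2000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests B outperforms A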


Usability tests are often conducted to determine how well different representative users can navigate online health and risk information systems (Kreps, 2014a; Nielsen, 1994; 1999). Representative system users are asked during usability tests to demonstrate how they use the system, showing how they navigate health and risk communication systems to find specific information. Researchers will often ask system users to comment on how easy or difficult it is for them to find information and navigate the systems during the usability tests, inviting respondents to suggest better ways to design the information systems to make them easier and more effective to use. The data provided by usability tests can reveal hidden system flaws and suggest strategies for refining health and risk communication system design. For instance, a digital exercise simulation called BringItOn, which was designed by Albu, Atack, and Srivastava (2015) to increase users’ physical activity levels to promote health, recovery, or rehabilitation, was subjected to usability testing that included heuristic expert analysis and a think-aloud verbal protocol. Heuristic analysis involves having an expert, in this case a software engineer, evaluate an application and compare it to industry best practices (Nielsen, 1994). The think-aloud verbal protocol involves asking a user to describe the decision-making criteria used during a problem-solving task (Fonteyn, Kuipers, & Grobe, 1993). Based on the data gathered through these methods, the designers revised BringItOn to better fit the needs and wants of participants.
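As a simple illustration of how usability-test observations might be summarized, the sketch below aggregates hypothetical task records (participant, task, completion, time in seconds) and flags tasks with low completion rates as candidates for redesign. The record format, task names, and 50% threshold are assumptions for demonstration only.

    # A minimal sketch of summarizing usability-test observations from
    # hypothetical session records of (participant, task, completed, seconds).
    from collections import defaultdict

    sessions = [
        ("P1", "find_risk_info", True, 45), ("P2", "find_risk_info", True, 60),
        ("P3", "find_risk_info", False, 180),
        ("P1", "update_profile", False, 150), ("P2", "update_profile", False, 200),
        ("P3", "update_profile", True, 90),
    ]

    by_task = defaultdict(list)
    for _participant, task, completed, seconds in sessions:
        by_task[task].append((completed, seconds))

    # Flag tasks with low completion rates as possible hidden design flaws.
    for task, results in by_task.items():
        rate = sum(done for done, _ in results) / len(results)
        mean_time = sum(t for _, t in results) / len(results)
        flag = "  <- possible design flaw" if rate < 0.5 else ""
        print(f"{task}: completion {rate:.0%}, mean time {mean_time:.0f}s{flag}")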

In addition to usability tests, system usage data are tracked to identify who uses the health and risk information system, how often they use the system, and how much time they spend interacting with it (Kreps, 2002). Tracking data can often be collected unobtrusively through analysis of system use and billing records. Website usage metrics and surveys can also be tracked to measure levels of reach and engagement. This type of process evaluation tracking was utilized in the FaceSpace Sexual Health Promotion Project, with the metrics providing objective data about audience usage of the system and the timing of their engagement with it (Nguyen et al., 2013). Survey data were gathered to explain users’ online and sexual behaviors, while team meeting notes kept records of the challenges associated with conducting a sexual health promotion program using social media (Nguyen et al., 2013). While usage data are interesting, it is often necessary to question users directly to find out why they use the system, how well the system works for them, and whether the information they accessed from the system influenced their health decisions, behaviors, and outcomes (Kreps, 2014a; Webb, Campbell, Schwartz, & Sechrest, 1972). Process evaluation research is critically important for tracking user responses to health and risk information systems over time and for providing evidence for refining system components to effectively meet the needs of system users (Kreps, 2002).
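The sketch below shows one way such tracking data might be summarized: it derives simple reach and engagement indicators from a hypothetical access log of (user, date, session minutes) records. The log format and metric definitions are illustrative assumptions, not the instrumentation used in the FaceSpace project.

    # A minimal sketch of deriving reach and engagement indicators from
    # unobtrusively collected usage logs; all records are hypothetical.
    from datetime import date

    log = [
        ("u1", date(2017, 3, 1), 12), ("u2", date(2017, 3, 1), 5),
        ("u1", date(2017, 3, 8), 9),  ("u3", date(2017, 3, 9), 20),
        ("u1", date(2017, 3, 15), 7),
    ]

    reach = len({user for user, _, _ in log})          # distinct users
    total_minutes = sum(minutes for _, _, minutes in log)

    print(f"Reach: {reach} unique users")
    print(f"Average visits per user: {len(log) / reach:.1f}")
    print(f"Average session length: {total_minutes / len(log):.1f} min")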


Summative Evaluation Research

Summative evaluation research is used to measure the overall influences and outcomes of online health and risk information systems. Summative research is conducted after the health and risk information system has been in use for a substantial period of time to document the positive and negative influences the information system has had on addressing key health issues. Many of the evaluation research methods used in conducting both formative and process evaluation research on health and risk information systems are conducted again to compare system performance over time. By comparing baseline (pre-test) data on audience members’ beliefs, attitudes, knowledge, behaviors, and health status with outcome (post-test) data on these same factors, a quasi-experimental, pre-post field test can be conducted to assess changes that have occurred during use of the health and risk information system. These changes can be compared to measures from comparison groups that did not have access to the health and risk information system to illustrate whether the changes that occurred in the test group were related to system use. The summative evaluation data that are collected can provide important measures of the overall usefulness of online health and risk communication programs for addressing important issues and promoting public health (Kreps, 2002; 2014a; Nutbeam, 1998).
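To show this pre-post comparison logic in miniature, the sketch below computes a difference-in-differences estimate from hypothetical knowledge scores for an intervention group (with system access) and a comparison group (without). The scores are invented for illustration, and a real analysis would add significance testing and controls for pre-existing group differences.

    # A minimal sketch of a pre-post comparison with a comparison group
    # (a difference-in-differences estimate); all scores are hypothetical.
    def mean(xs):
        return sum(xs) / len(xs)

    # Baseline (pre-test) and outcome (post-test) knowledge scores.
    intervention_pre, intervention_post = [52, 48, 55, 50], [68, 63, 70, 66]
    comparison_pre, comparison_post = [51, 49, 54, 50], [55, 52, 57, 53]

    change_intervention = mean(intervention_post) - mean(intervention_pre)
    change_comparison = mean(comparison_post) - mean(comparison_pre)

    # The change attributable to system use, net of background change.
    did_estimate = change_intervention - change_comparison
    print(f"Intervention change: {change_intervention:+.1f}")
    print(f"Comparison change:   {change_comparison:+.1f}")
    print(f"Difference-in-differences estimate: {did_estimate:+.1f}")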

Summative data are collected to examine overall patterns of health and risk information program use, user satisfaction with programs, message exposure and retention from the programs, changes in key outcome variables (such as learning, relevant health behaviors, health services utilization, and health status) related to the intervention, as well as to provide economic analyses of program costs and benefits (cost-benefit analysis). Summative research also identifies strategies for sustaining the best health and risk communication intervention programs over time. Strong summative evaluation data can be very influential in determining the overall value of the health and risk information systems, identifying specific directions for improving these digital systems, and securing support for establishing program sustainability and institutionalization (Kreps, 2014a).

A good way to bolster summative evaluation of online social media–based health communication programs is to utilize social media tracking web analytics to identify key performance indicators (KPIs). KPIs are metrics that assess the pre-established goals of a social media program (Sterne, 2010). Metrics such as the number of clicks, shares, mentions, and followers can be used to gauge a variety of KPIs, like improved levels of interaction and awareness. Other KPIs include exposure (the number of times content on social media is viewed), reach (the number of people who have contact with the social media application), and engagement (participation in creating, sharing, and using content) (Neiger et al., 2012). Based on a campaign’s goals, KPIs should be identified and defined during the formative and process evaluation research stages. KPIs are typically monitored with a social media performance dashboard, an insight tool that tracks media performance and provides guidance for digital health and risk communication program enhancement and optimization (Murdough, 2009).
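As a minimal sketch of this kind of monitoring, the snippet below checks hypothetical exposure, reach, and engagement figures against pre-established goal thresholds and derives an engagement rate. The metric values, goals, and rate definition are assumptions for illustration rather than standard dashboard outputs.

    # A minimal sketch of checking social media KPIs against pre-established
    # goals; the metric values and goal thresholds are hypothetical.
    metrics = {
        "exposure": 42_000,    # times content was viewed
        "reach": 8_500,        # people who had contact with the application
        "engagement": 1_300,   # shares, comments, and other participation
    }
    goals = {"exposure": 40_000, "reach": 10_000, "engagement": 1_000}

    for kpi, value in metrics.items():
        status = "met" if value >= goals[kpi] else "NOT met"
        print(f"{kpi}: {value:,} (goal {goals[kpi]:,}) -> {status}")

    # Engagement rate relative to reach, a common derived indicator.
    print(f"engagement rate: {metrics['engagement'] / metrics['reach']:.1%}")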


Social media provides a wealth of information that can be evaluated both quantitatively and qualitatively. Summative evaluation dashboards can be used to evaluate reach, discussions, and general outcomes. Reach encompasses several factors, including the volume of mentions, where mentions are occurring (e.g., Twitter, social networks, blogs, discussion forums), and the social influence of the individuals discussing the issue (Murdough, 2009). Discussion analysis identifies the main topics or themes, the tone of discussions (e.g., positive or negative), and whether sentiment concerning the health and risk issues has changed (Murdough, 2009).
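The sketch below illustrates the idea with a toy example: it tallies hypothetical mentions by channel and scores their tone against a deliberately tiny sentiment lexicon. Real dashboards rely on much larger lexicons or trained sentiment models; everything here (records, word lists, scoring rule) is an assumption for demonstration.

    # A minimal sketch of summarizing where mentions occur and their tone,
    # using hypothetical mention records and a toy sentiment lexicon.
    from collections import Counter

    mentions = [
        ("Twitter", "the new screening campaign is helpful and clear"),
        ("Twitter", "confusing instructions and a frustrating site"),
        ("blog", "helpful resource for talking to my doctor"),
        ("forum", "site kept crashing, a frustrating experience"),
    ]

    POSITIVE, NEGATIVE = {"helpful", "clear"}, {"confusing", "frustrating"}

    volume_by_channel = Counter(channel for channel, _ in mentions)
    tone = Counter()
    for _, text in mentions:
        words = set(text.split())
        # Crude tally: a mention may count as both positive and negative.
        tone["positive"] += len(words & POSITIVE) > 0
        tone["negative"] += len(words & NEGATIVE) > 0

    print("Mention volume:", dict(volume_by_channel))
    print("Tone counts:", dict(tone))

Comparing such tone counts across time periods gives a rough indication of whether sentiment concerning the health and risk issues has shifted.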

Conclusion and Future Directions

Evaluation research should be an indispensable part of the development and refinement of every online health and risk communication information system (Kreps, 2014a; Rootman et al., 2001). Such research enables system developers to utilize user experience in designing and refining health information systems. This process is known as participatory or user-centered design (Neuhauser, 2001; Neuhauser et al., 2007; Neuhauser & Kreps, 2011; 2014). User-centered design not only helps direct the development of sophisticated, user-friendly digital health and risk communication systems, but it also encourages overall user involvement with the information systems (Neuhauser, 2001). The best health and risk communication information systems are designed to involve intended system users, reflecting the experiences and insights of these users (Neuhauser et al., 2007).

Evaluation researchers should carefully identify available sources of audience analysis data when assessing health and risk communication systems. What do we already know about key audiences for these health and risk programs? Are there natural sources of information about key events that can be used to inform health and risk system evaluation efforts, such as medical billing records, public records, or message transcripts? Health and risk information system designers can often fruitfully design and build in user-response mechanisms for online programs to provide regular user feedback about program use. Researchers should carefully identify relevant data about key audience attributes and behaviors, whether from established data sources or from newly collected data, to use as benchmarks for later comparisons, establishing key baselines and tracking changes (hopefully improvements) in these key indicators over time after health and risk communication programs are introduced. Usability tests should be conducted regularly to determine the suitability of digital health and risk communication programs for different groups of users. Researchers should also work closely with key representatives from targeted audiences to conduct user-centered design and community-based participatory evaluation research to examine audience responses to digital health and risk communication programs (Neuhauser et al., 2007). Data from evaluation research should be applied to refining and improving all digital health and risk communication programs.


References

Albu, M., Atack, L., & Srivastava, I. (2015). Simulation and gaming to promote health education: Results of a usability test. Health Education Journal, 74(2), 244–254.

Alpert, J. M., Kreps, G. L., Wright, K. B., & Desens, L. C. (2015, May). Humanizing patient-centered health information systems: Critical incidents data to increase engagement and promote healthy behaviors. Presented to the International Communication Association conference, San Juan, Puerto Rico.

Alpert, J., Desens, L., Krist, A., & Kreps, G. L. (2016a). Measuring health literacy levels of a patient portal using the CDC’s Clear Communication Index. Health Promotion Practice, 18(1), 140–149. doi: 10.1177/1524839916643703.

Alpert, J. M., Krist, A. H., Aycock, B. A., & Kreps, G. L. (2016b). Designing user-centric patient portals: Clinician and patients’ uses and gratifications. Telemedicine and e-Health, advance online publication. doi:10.1089/tmj.2016.0096.

Alpert, J. M., Krist, A. H., Aycock, B. A., & Kreps, G. L. (2016c). Applying multiple methods to comprehensively evaluate a patient portal's effectiveness to convey information to patients. Journal of Medical Internet Research, 18(5), e112. doi: 10.2196/jmir.5451.

Burke-Garcia, A., Berry, C., Kreps, G. L., & Wright, K. (2017). The power and perspective of mommy-bloggers: Formative research with social media opinion leaders about HPV vaccination. Proceedings of the Hawaii International Conference on System Sciences, HICSS-50, pp. 1932–1941. IEEE Computer Society Digital Library. URI: http://hdl.handle.net/10125/41388.

Cho, H., & Salmon, C. T. (2007). Unintended effects of health communication campaigns. Journal of Communication, 57(2), 293–317.

Chu, L. F., & Chan, B. K. (1998). Evolution of web site design: Implications for medical education on the Internet. Computers in Biology and Medicine, 28(5), 459–472.

Finney Rutten, L., Hesse, B., Moser, R., & Kreps, G. L. (Eds.) (2011). Building the evidence base in cancer communication. Cresskill, NJ: Hampton Press.

Fonteyn, M. E., Kuipers, B., & Grobe, S. J. (1993). A description of think aloud method and protocol analysis. Qualitative Health Research, 3(4), 430–441.

Green, L. W., & Glasgow, R. E. (2006). Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation and the Health Professions, 29(1), 126–153.


Greenwood, S., Perrin, A., & Duggan, M. (2016, November 11). Social media update 2016. Retrieved March 16, 2017, from www.pewinternet.org/2016/11/11/social-media-update-2016/.

Hesse, B. W., Nelson, D. E., Kreps, G. L., Croyle, R. T., Arora, N. K., Rimer, B. K., & Viswanath, K. (2005). Trust and sources of health information: The impact of the Internet and its implications for health care providers. Findings from the first Health Information National Trends Survey. Archives of Internal Medicine (now JAMA Internal Medicine), 165(22), 2618–2624.

Hornik, R., Jacobsohn, L., Orwin, R., Piesse, A., & Kalton, G. (2008). Effects of the National Youth Anti-Drug Media Campaign on youths. American Journal of Public Health, 98(12), 2229–2236.

Kreps, G. L. (in press). Strategic design of online information systems to enhance health outcomes through communication convergence. Human Communication Research.

Kreps, G. L. (2002). Evaluating new health information technologies: Expanding the frontiers of health care delivery and health promotion. Studies in Health Technology and Informatics, 80, 205–212.

Kreps, G. L. (2011a). Methodological diversity and integration in health communication inquiry. Patient Education and Counseling, 82, 285–291.

Kreps, G. L. (2011b). The information revolution and the changing face of health communication in modern society. Journal of Health Psychology, 16, 192–193.

Kreps, G. L. (2012). Consumer control over and access to health information. Annals of Family Medicine, 10(5). Retrieved from www.annfammed.org/content/10/5/428.full/reply#annalsfm_el_25148.

Kreps, G. L. (2014a). Evaluating health communication programs to enhance health care and health promotion. Journal of Health Communication, 19(12), 1449–1459. doi: 10.1080/10810730.2014.954080.

Kreps, G. L. (2014b). Achieving the promise of digital health information systems. Journal of Public Health Research, 3(3), 471, 128–129. doi: 10.4081/jphr.2014.471.

Kreps, G. L. (2014c). Epilogue: Lessons learned about evaluating health communication programs. Journal of Health Communication, 19(12), 1510–1514.

Kreps, G. L. (2015). Communication technology and health: The advent of ehealth applications. In L. Cantoni & J. A. Danowski (Eds.), Communication and technology (Vol. 5 of Handbooks of Communication Science, P. J. Schulz & P. Cobley, General Eds., pp. 483–493). Berlin, Germany: De Gruyter Mouton.

Kreps, G. L., & Neuhauser, L. (2010). New directions in ehealth communication: Opportunities and challenges. Patient Education and Counseling, 78, 329–336.

Kreps, G. L., & Neuhauser, L. (2013). Artificial intelligence and immediacy: Designing health communication to personally engage consumers and providers. Patient Education and Counseling, 92, 205–210.

Krist, A. H., Nease, D. E., Kreps, G. L., Overholser, L., & McKenzie, M. (2016). Engaging patients in primary and specialty care. In Hesse, B. W., Ahern, D. K., & Beckjord, E. (Eds.), Oncology informatics: Using health information technology to improve processes and outcomes in cancer care (pp. 55–79). Amsterdam, The Netherlands: Elsevier.


Levine, D., Madsen, A., Wright, E., Barar, R. E., Santelli, J., & Bull, S. (2011). Formative research on MySpace: Online methods to engage hard-to-reach populations. Journal of Health Communication, 16(4), 448–454.

Moore, G., Audrey, S., Barker, M., Bond, L., Bonell, C., Cooper, C., Hardeman, W., Moore, L., O’Cathain, A., Tinati, T., Wight, D., & Baird, J. (2014). Process evaluation in complex public health intervention studies: The need for guidance. Journal of Epidemiology and Community Health, 68, 101–102.

Murdough, C. (2009). Social media measurement: It’s not impossible. Journal of Interactive Advertising, 10(1), 94–99.

Neiger, B. L., Thackeray, R., Van Wagenen, S. A., Hanson, C. L., West, J. H., Barnes, M. D., & Fagen, M. C. (2012). Use of social media in health promotion: Purposes, key performance indicators, and evaluation metrics. Health Promotion Practice, 13(2), 159–164.

Neuhauser, L. (2001). Participatory design for better interactive health communication: A statewide model in the USA. Electronic Journal of Communication, 11(3).

Neuhauser, L., Constantine, W. L., Constantine, N. A., Sokal-Gutierrez, K., Obarski, S. K., Clayton, L., Desai, M., Sumner, G., & Syme, S. L. (2007). Promoting prenatal and early childhood health: Evaluation of a statewide materials-based intervention for parents. American Journal of Public Health, 97(10), 1813–1819.

Neuhauser, L., & Kreps, G. L. (2003). Rethinking communication in the e-health era. Journal of Health Psychology, 8, 7–22.

Neuhauser, L., & Kreps, G. L. (2008). Online cancer communication interventions: Meeting the literacy, linguistic, and cultural needs of diverse audiences. Patient Education and Counseling, 71(3), 365–377.

Neuhauser, L., & Kreps, G. L. (2010). Ehealth communication and behavior change: Promise and performance. Social Semiotics, 20(1), 9–27.

Neuhauser, L., & Kreps, G. L. (2011). Participatory design and artificial intelligence: Strategies to improve health communication for diverse audiences. In N. Green, S. Rubinelli, & D. Scott (Eds.), Artificial intelligence and health communication (pp. 49–52). Cambridge, MA: AAAI Press.

Neuhauser, L., & Kreps, G. L. (2014). Integrating design science theory and methods to improve the development and evaluation of health communication programs. Journal of Health Communication, 19(12), 1460–1471.

Neuhauser, L., & Paul, K. (2011). Readability, comprehension and usability. In Communicating risks and benefits: An evidence-based user’s guide. Silver Spring, MD: U.S. Food and Drug Administration.

Neuhauser, L., Schwab, M., Obarski, S. K., Syme, S. L., & Bieber, M. (1998). Community participation in health promotion: Evaluation of the California Wellness Guide. Health Promotion International, 13(3).

Nguyen, P., Gold, J., Pedrana, A., Chang, S., Howard, S., Ilic, O., Hellard, M., & Stoove, M. (2013). Sexual health promotion on social networking sites: A process evaluation of the FaceSpace project. Journal of Adolescent Health, 53(1), 98–104.

Nielsen, J. (1994). Usability engineering. Amsterdam, The Netherlands: Elsevier.

Nielsen, J. (1999). Designing Web usability: The practice of simplicity. Indianapolis, IN: New Riders Publishing.


Nutbeam, D. (1998). Evaluating health promotion—progress, problems, and solutions. Health Promotion International, 13, 27–44.

Ringold, D. J. (2002). Boomerang effects in response to public health interventions: Some unintended consequences in the alcoholic beverage market. Journal of Consumer Policy, 25, 27–63.

Rootman, I., Goodstadt, M., McQueen, D., Potvin, L., Springett, J., & Ziglio, E. (Eds.). (2001). Evaluation in health promotion: Principles and perspectives. Copenhagen, Denmark: WHO.

Sterne, J. (2010). Social media metrics: How to measure and optimize your marketing investment. Hoboken, NJ: John Wiley & Sons.

Thackeray, R., Neiger, B. L., Smith, A. K., & Van Wagenen, S. A. (2012). Adoption and use of social media among public health departments. BMC Public Health, 12(1), 242.

van Wijngaarden, J. D. H., Scholten, G. R. M., & van Wijk, K. P. (2010). Strategic analysis for health care organizations: The suitability of the SWOT-analysis. International Journal of Health Planning and Management. Retrieved from www.researchgate.net/profile/Jeroen_Wijngaarden/publication/45094861_Strategic_analysis_for_health_care_organizations_the_suitability_of_the_SWOT-analysis/links/541fc9860cf203f155c25f28.pdf.

Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1972). Unobtrusive measures: Nonreactive research in the social sciences. New York: Rand McNally & Company.
