2
QoE – Defining a User-Centric Concept for Service Quality

Martín Varela1, Lea Skorin-Kapov2, Katrien De Moor3 and Peter Reichl4

1VTT Technical Research Centre, Finland

2University of Zagreb, Croatia

3NTNU, Trondheim, Norway

4Université Européenne de Bretagne/Télécom Bretagne, France and University of Vienna, Austria

2.1 Introduction

Quality of Experience (QoE) has, in recent years, gained a prominent role in the research and related work of several fields, notably networking and multimedia, but also in other domains such as network economics, gaming, or telemedicine. It has, to some extent, also become a buzzword for marketers, network operators, and other service providers.

Historically, its origins can be traced to several sources, which are nowadays converging toward a common, mature understanding of what QoE actually is. Several of the key ideas behind QoE can be traced back several decades to research done, for example, by telephone operators into the quality of calls made through their systems, and by TV broadcasters seeking to understand how users perceived the quality of television pictures. The issues involved here relate not only to transmission aspects, but also to coding and equipment ones.

With the advent of Internet-based multimedia communication services, such as Voice over IP (VoIP), video streaming, video conferencing, etc., the role of the network's performance (often referred to as Quality of Service, QoS) became more important in determining the perceived quality of those services, and thus a part of the networking community also became involved in the research of perceived QoS, which has itself evolved to be called Quality of Experience (QoE) within the networking community.

The prominence of the users and their experience within the QoE research community has risen as time goes by. This makes sense, since at the end of the day, what really matters for service providers is that users keep buying their services, and providing good QoE is a sure-fire way to keep users satisfied. Beyond this rather mundane rationale lies an interesting scientific challenge: QoE is actually all about the user. The technical (network/application) aspects involved in understanding it are but a fraction of the overall QoE picture. Granted, those technical aspects are critical, and more importantly, are tractable; but if we had to predict what the “next frontier” for QoE is, it is bound to be at the user level. The question then becomes: how can we move from codecs and packets to an understanding of how users actually experience those services, and which elements make or break the quality of these experiences? Recently, psychologists, sociologists, and researchers in other humanities-related fields – with long traditions in investigating different aspects related to human experiences – have also begun to look into how users perceive and are affected by the quality of the services they use. The scope here is quite wide, ranging from research on emotion and affective aspects, to economics.

It is no wonder that with people from such different backgrounds involved in what conceptually is the same topic, views on what QoE actually is can be, and to some extent often are, radically different. In this chapter we will discuss the most prominent existing definitions of QoE, QoS and their relation, factors that may influence QoE, the need for understanding it in the context of services, social and psychological aspects of QoE, and its role in future telecommunications ecosystems.

2.2 Definitions of QoE

As mentioned above, QoE – as a research domain – lies at the junction of several disciplines, and this is to some extent reflected in the different definitions that have been given for it throughout the years.

The International Telecommunication Union (ITU)'s definition, given in [1], states that QoE is

The overall acceptability of an application or service, as perceived subjectively by the end-user.

The above definition is accompanied by two notes, which state that:

  1. QoE includes the complete end-to-end system effects, and
  2. The overall acceptability may be influenced by user expectations and context.

The definition given by the ITU clearly focuses on the acceptability of a service, but does not address the actual experience of the user of said service. Moreover, it does not specify the complex notions of “user expectations” and “context,” and it ignores the possible influence of other human factors.

An alternative definition was produced at a Dagstuhl seminar on QoE held in 2009 (“From Quality of Service to Quality of Experience”) [2, 3], whereby QoE

…describes the degree of delight of the user of a service, influenced by content, network, device, application, user expectations and goals, and context of use.

In contrast to the ITU definition, the Dagstuhl seminar's definition takes a purely hedonistic view of the quality experienced by the user, and tacitly puts the system under consideration in the role of an influencing factor, whereas in the ITU definition, it was the focus.

In 2012, a large collaborative effort within COST Action IC1003 – European Network on Quality of Experience in Multimedia Systems (Qualinet) – produced a white paper on definitions of QoE [4],1 which delved deeper than previous efforts into providing a wider, and yet more precise, definition of QoE. The proposed definition states that QoE

...is the degree of delight or annoyance of the user of an application or service. It results from the fulfillment of his or her expectations with respect to the utility and/or enjoyment of the application or service in the light of the user's personality and current state.

As with the Dagstuhl definition, the one proposed by Qualinet has a clear focus on the user. It does, however, take both a hedonistic and a utilitarian perspective, and more importantly, it explicitly introduces the user's personality and current state (mood). This reflects a visible shift in the state of QoE research that sees the user aspects become more important, and is accompanied by the addition of the humanities' perspectives and approaches.

An interesting aspect of the evolution of these definitions of QoE is the increasing importance given to the user over time. From the systems-oriented ITU definition, through the user-centric Dagstuhl one, to the explicit consideration of the user's personality and mood in the Qualinet definition, the changes in perspective reflect those that can to some degree already be observed in the state of the art as well.

2.3 Differences Between QoE and QoS

In contrast to QoE, the concept of QoS has a rich history spanning more than two decades, during which it has been the subject of an immense number of publications [5, 6]. Quality of Service has been defined by the ITU-T [7] as

The totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service.

An earlier version of the definition has also been adopted by ETSI as their own definition of QoS [8].

With respect to the ITU-T's own definition of QoE, there is a clearly narrower scope in the definition of QoS, which limits itself to the characteristics of a telecommunications service. When we consider the newer and more inclusive Qualinet definition of QoE, the gap between the two widens significantly. The definition of QoS has a clear focus on the technical aspects of telecommunication services, whereas the QoE definition is focused on the user, and the services and applications it considers need not be telecommunications-related. The IETF takes an even more systems-oriented approach with their definition of QoS [9], which states that QoS is

...a set of service requirements to be met by the network while transporting a flow.

The IETF's definition of QoS has a purely network-oriented focus, and arguably no connection with the concept of QoE.

Beyond the semantic distinction in terms of definitions, QoS has in practice (i.e., in the extensive literature dedicated to it) mostly connotations of network performance, and no real relation to the stated and implied needs of the user. These network performance connotations can be related to both measures of network performance (such as delay, throughput, or jitter), and architectures and mechanisms for improving network performance (such as differentiated services [10]). Under both of those meanings, the relation of QoS to the end-user is tenuous at best. Oftentimes, works in the literature refer to QoE when they are really addressing QoS (for instance, a common occurrence is that of equating lower delays to higher QoE). While the network QoS can certainly have a significant bearing on QoE, the types of implications often seen in the literature do not always hold in practice, mainly for two reasons. Firstly, QoE is itself a multi-dimensional construct, of which only some dimensions can be affected by network performance.2 Secondly, unless subjective user studies are carried out, or suitable models of user perception are used, it is very easy to overstate the impact of the network performance on actual QoE (i.e., the aforementioned lowered delays can, in fact, have no significant impact on the quality experienced by the users).
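To make this concrete, the non-trivial shape of typical QoS-to-QoE relations can be sketched with a toy mapping. The exponential form and all parameter values below are illustrative assumptions, not a validated model; they merely show why a fixed improvement in a QoS metric (here, packet loss) does not translate into a fixed improvement in QoE:

```python
import math

def qoe_from_loss(loss_pct, alpha=3.0, beta=0.5, gamma=1.5):
    """Toy exponential QoS-to-QoE mapping (parameters are illustrative):
    MOS = alpha * exp(-beta * loss) + gamma, clipped to the 1..5 MOS range."""
    mos = alpha * math.exp(-beta * loss_pct) + gamma
    return max(1.0, min(5.0, mos))
```

Under this toy curve, reducing loss from 5% to 4% barely moves the estimated MOS, whereas the same 1% reduction near zero loss moves it substantially – exactly the kind of non-linearity that makes naive “lower delay equals higher QoE” claims unreliable.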

As a rule of thumb, we could say that we are only talking about QoE if at least the users' perception of quality is considered, be it via subjective assessment, or via suitable quality models.3

This is not to say that QoS is a concept unrelated to QoE. In practice, the performance aspects of a service and its underlying infrastructure are deeply – and non-trivially – related to the user's perception of the system's quality, and therefore to QoE. For media services in particular, there is a rather clear relation between the performance of the transport network (i.e., QoS) and the quality perceived by the user. This perceived quality can in turn be a dominant dimension (or set of dimensions) of the QoE of that particular service.

From a more practical perspective, QoS plays an important role in how service providers can affect the users' QoE, in several ways. There is the obvious aspect of perceived quality as mentioned above, but also more subtle relations between QoS and QoE – for example through service pricing and users' expectations, as discussed further in Section 2.7. A somewhat reciprocal relation exists also when QoE, or at least some aspects of it, are used to manage QoS (e.g., quality-driven network management).

If we consider a layered approach to understanding QoE, such as that proposed in [11], then we can make a clearer distinction between QoS and QoE. In the proposed approach, QoE is modeled in layers, as seen in Figure 2.1.4 We can then see how QoE depends on QoS (but not only on it), and how QoS does not “reach high enough” in the stack to be sufficient to understand QoE.


Figure 2.1 The layered QoE model proposed in [11]. QoE is about the user, and hence cannot be considered absent the physiological, psychological, social (“Human”), and role-related (“User”) aspects of the user as a person, whereas QoS concerns the system, and hence it is considered at the Resource and Application layers. Incidentally, these layers can be mapped to the OSI network model

2.4 Factors Influencing QoE

A central question when evaluating and modeling QoE is the issue of what factors have a relevant influence on the quality of the end-user experience. A significant amount of past research has studied the relationships between technical QoS parameters and subjective quality perception (often reported as Mean Opinion Scores, MOS) [12–16]. However, such studies commonly neglect the impact of user-related or contextual factors on the outcome of the subjective evaluations. To give an example, two users evaluating QoE when using a Web-based service or watching a streaming video on their mobile phone under the same conditions (in terms of network and device performance) may provide significantly different QoE ratings due to the impact of their previous experiences in using such services, their expectations, their emotional state, etc. Other examples would be users' quality perceptions differing due to the different environments in which they are accessing a service (e.g., indoors, outdoors) or given different service costs (e.g., a person paying more for a service may expect a higher quality).

Building around the definition of QoE as previously cited from [4], the white paper states that

...in the context of communication services, QoE can be influenced by factors such as service, content, network, device, application, and context of use.

While a number of works have addressed classification of the multitude of QoE Influencing Factors (IFs) [17–19], we highlight the three categories of possible influence factors (defined as: Any characteristic of a user, system, service, application, or context whose actual state or setting may have influence on the Quality of Experience for the user) as proposed in the white paper (and further elaborated in [20]) as follows:

  • Human IFs comprise any variant or invariant properties or characteristics related to a human user. Examples include socio-economic and demographic status, physical and mental constitution, affective state, etc. The impact of different IFs can also be considered at different perceptual levels. A further discussion of human factors is given in Section 2.6.
  • System IFs are factors that determine the technically produced quality of an application/service, further classified into four sub-categories, namely content-, network-, media-, and device-related IFs. Examples of content factors include video spatial and temporal resolution, and graphical design elements and semantic content in the case of websites. Media-related IFs refer to encoding, resolution, sampling rate, media synchronization, etc. Network-related factors describe the transmission system (e.g., in terms of delay, loss, throughput), while device-related factors specify the capabilities and characteristics of the systems/devices located at the end points of the communication path.
  • Context IFs cover a broad range of factors that identify any situational property to describe the user's environment in terms of physical (e.g., location and space), temporal (e.g., time of day), social (e.g., a user being alone or surrounded by other people), task (e.g., the purpose for which a user is using a service), technical (e.g., the presence of other surrounding technical systems), and economic characteristics (e.g., cost paid for a service) [21, 22]. The importance of QoE from an economical point of view, as linked with economic pricing models, is discussed in further detail in Section 2.7.

Following this general classification, there are several points that need to be considered. Firstly, IFs may be interrelated (i.e., they should not be regarded as isolated). Drawing on a previous example, a user's expectations may be influenced by the context in which a given service is used (e.g., physical location, service cost) or the device being used to access a service. Hence, if we were to geometrically represent factors as belonging to multi-dimensional spaces (i.e., each factor representing a different dimension), the dimensions do not necessarily represent a basis of the IF space. A simple illustration is given in Figure 2.2, portraying a mapping from three multi-dimensional IF spaces to a QoE space (adapted from [19]). A point in each one of these spaces represents the corresponding state of the system (human state, system state, context state), with coordinates determined based on the values of the different factors considered. Certain constraints may be imposed to eliminate non-feasible points or regions in the factor space. We further consider a mapping from the different factor spaces to a multi-dimensional QoE space, referring to the decomposition of overall QoE (also referred to in the literature as integral QoE [23]) into multiple dimensions representing different QoE features, which in turn are not necessarily independent of each other. Hence, we can say that given a certain human, system, and context state (represented by points in the multi-dimensional spaces), a user's QoE is assessed (or estimated) in terms of multiple quality dimensions. These quality dimensions can then be further collapsed into a single, integral value of QoE (e.g., as a weighted combination of quality dimensions).


Figure 2.2 Multi-dimensional view of QoE influence factors
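As a purely hypothetical sketch of the mapping in Figure 2.2, the following code represents human, system, and context states as small dictionaries, maps them to two QoE dimensions, and collapses those dimensions into an integral QoE value as a weighted combination. Every factor name, mapping rule, and weight is invented for illustration only:

```python
def qoe_dimensions(human, system, context):
    """Map factor-space states to two illustrative QoE dimensions (1..5)."""
    clip = lambda x: max(1.0, min(5.0, x))
    # Picture quality: driven by packet loss, tempered by user expectations.
    picture = clip(5.0 - 0.3 * system["loss_pct"] - 0.5 * human["expectation"])
    # Responsiveness: driven by delay, penalized more for interactive tasks.
    penalty = 2.0 if context["task"] == "conversation" else 0.5
    responsiveness = clip(5.0 - penalty * system["delay_s"])
    return {"picture": picture, "responsiveness": responsiveness}

def integral_qoe(dimensions, weights):
    """Collapse the QoE dimensions into a single integral value
    (here, simply a weighted average)."""
    total = sum(weights.values())
    return sum(weights[d] * v for d, v in dimensions.items()) / total

dims = qoe_dimensions(
    human={"expectation": 1.0},               # higher expectations, harsher rating
    system={"loss_pct": 1.0, "delay_s": 0.4},
    context={"task": "conversation"},
)
score = integral_qoe(dims, {"picture": 0.6, "responsiveness": 0.4})
```

Note how the same system state yields different QoE under a different context state (e.g., a non-interactive task), reflecting the interrelation of influence factors discussed above.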

The mapping to a QoE space represents different possible QoE assessment/estimation methods. In the case of objective quality assessment, such a function can feed relevant input parameters to existing models to obtain quality estimations. In the case of subjective quality assessment, a mapping function may correlate input parameters with user quality scores. Furthermore, regression techniques or other machine learning tools, such as neural networks, can be used to estimate QoE.
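As a minimal illustration of the regression-based route, the sketch below fits an ordinary-least-squares line from a single input parameter (one-way delay) to mean opinion scores. The data points are invented, and real QoE estimators use many more parameters and richer learners, but the structure – observed parameters in, quality estimate out – is the same:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return a, mean_y - a * mean_x

delays_ms = [50, 100, 200, 400, 800]      # hypothetical test conditions
mos_ratings = [4.4, 4.2, 3.8, 3.0, 1.9]   # hypothetical mean opinion scores
a, b = fit_line(delays_ms, mos_ratings)

def predict_mos(delay_ms):
    """Estimate MOS from delay, clipped to the 1..5 rating scale."""
    return max(1.0, min(5.0, a * delay_ms + b))
```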

It should be noted that QoE can be estimated/measured at different points in time and in relation to different time scales. For example, QoE can be measured at a particular instant in time, following a certain period of service use (e.g., following a one-time use of a service or following multiple service uses), or over a longer period of time (e.g., a provider wishing to determine overall customer experience over a period of a few weeks). Hence, it is important to note that temporal considerations are needed when addressing QoE.

While in theory we can identify a wide scope of factors that will (likely) have an impact on QoE, challenges still remain in: (1) measuring certain factors, (2) determining their impact on QoE, and (3) utilizing the knowledge of such impacts in the context of optimizing QoE, taking into account the point of view of different actors involved in the service delivery chain. While these challenges have been widely addressed in the context of the QoS-to-QoE mapping (focusing on identifying relationships between technical network performance and user-perceived quality), the measurements and impact of human-related psychological factors have been much less studied.

Finally, specific IFs are relevant for different types of services and applications, as will be elaborated on further in the following section. For example, a user's gaming skills are very important when considering QoE for games, while a person's medical condition may directly impact the QoE when using an e-Health service. If one were to consider the impact of system parameters, network delay may prove critical for a conversational VoIP service, whereas delay will have much less impact in the case of background file transfer or sending an email.

2.5 Service QoE

In order to be truly useful, the definition of QoE needs to be made operational for the different services one wishes to work on. The multimedia services being delivered today differ vastly in their characteristics, such as the type of media content being transmitted, the service purpose, real-time constraints, tolerance to loss, and degree of interactivity (e.g., one-way or bi-directional). When considering non-media services, such as Web or Cloud services, the inter-service variation in terms of relevant IFs and user quality requirements is even more pronounced. It is clear, then, that the relevant QoE IFs and their impacts on perceptual features need to be considered separately for different types of services.

There are two main types of reasons for trying to understand and estimate the QoE of a given service. The first one is related to the technical workings of the service, and in this context, the sought objective is to provide the user with a certain level of quality, given a set of constraints. This gives rise to a variety of service and network management mechanisms that are driven by QoE estimations. Examples of these might be QoE-based mobility management [24], where an estimation of perceived quality provides guidance as to which available network a client should connect to, or more typical access control and traffic shaping mechanisms, which can also benefit from an understanding of QoE, both for different service types (e.g., video [25], VoIP [26]) and different network environments (e.g., LTE-A [27]).

The second type of reason is related to the business aspects of the services. Operators and service providers try to understand the role of QoE in how well a service is accepted, how much users are willing to pay for it [28], and how they can avoid churn. QoE plays an important role in these questions, as it provides a good approximation of the utility of the service for the users. Both types of reasons often go hand in hand, as the technical aspects can determine, for example, how a certain level of quality can be achieved to keep customers satisfied, given a set of operational constraints.

The large number of new services being adopted, coupled with the increased interest in understanding their QoE, makes a solid case for developing new mechanisms to assess them, both subjectively and objectively. Studies addressing the quality of media services date back to the early days of quality evaluation for circuit-switched telephony and television systems, moving on to digital media services delivered via packet-switched (fixed and mobile) networks [29]. While today numerous ITU standards recommend various quality models and assessment methodologies [30], new and emerging service scenarios (e.g., 3D audiovisual services, Web and Cloud-based services, collaborative services, gaming) are driving the need for new quality metrics and models. In order to illustrate the dependency of QoE on the type of service, we will briefly summarize research and standardization activities addressing QoE assessment and prediction for both traditional and emerging types of services.

2.5.1 Speech

Studies evaluating the quality of voice services have gone from considering circuit switched voice to addressing VoIP in packet switched networks [31]. Detailed guidelines have been provided for conducting subjective assessments, for example listening quality and conversational quality tests [32–34], most commonly quantifying user ratings along a 5-point MOS scale.

Listening quality for speech services has been an object of extensive study in both academic and industrial research. Several models have been developed, originally focusing on encoding and Plain Old Telephone Service (POTS) transmission aspects, and more recently incorporating IP transmission factors. The available models range from trivial measures of distance between an original and degraded signal (e.g., SNR) to very complex ones based on detailed models of the Human Auditory System (HAS) [35], to models providing a more detailed estimation of speech quality along several dimensions of quality [36]. Current models for Full-Reference (FR) assessment of speech quality – such as ITU-T P.862 [37] and its replacement P.863 [35] – can provide very accurate estimations of perceived quality for speech under a variety of conditions. No-Reference (NR) models, both signal-based [38] and parametric ones (e.g., [39]), can produce accurate enough estimations of speech quality for instance for monitoring purposes.
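At the trivial end of that spectrum, a full-reference distance such as SNR can be computed in a few lines; the sketch below compares a reference sample sequence against a degraded copy. Such a measure says nothing about perception – which is precisely why HAS-based models such as P.862/P.863 were developed – but it illustrates what “full reference” means: the original signal must be available.

```python
import math

def snr_db(reference, degraded):
    """Signal-to-noise ratio (dB) between a reference signal and a degraded
    copy: a trivial full-reference distance, blind to perceptual effects."""
    signal_power = sum(x * x for x in reference)
    noise_power = sum((x - y) ** 2 for x, y in zip(reference, degraded))
    if noise_power == 0.0:
        return math.inf          # identical signals: infinite SNR
    return 10.0 * math.log10(signal_power / noise_power)
```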

Objective models for conversational speech are not as abundant or accurate as the ones available for listening quality. The most widely referenced NR model is the ITU-T E-Model [40]. The E-Model is a planning model, designed for network dimensioning rather than quality estimation, and so it is not the ideal tool for this task. However, given the scarcity of available models, it is often used for this purpose in the literature. A promising approach was proposed in [41], based on the PSQA methodology [42], but it requires further work to be useful in the general case.
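Although the E-Model's many impairment factors are beyond our scope here, its final step – converting the transmission rating factor R into an estimated MOS – is compact enough to sketch. The conversion below follows the mapping given in ITU-T G.107; computing R itself from delay, loss, and equipment impairments is the (much larger) remainder of the model:

```python
def r_to_mos(r):
    """Convert an E-Model transmission rating factor R into an estimated
    MOS, using the R-to-MOS mapping defined in ITU-T G.107."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

For the G.107 default conditions (R of about 93.2), this yields a MOS of roughly 4.4, the conventional quality ceiling for narrowband telephony.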

One of the challenging aspects of conversational quality is that it depends not only on the technical aspects of the transmission chain, but also on the dynamics of the conversation itself, which are not always easy to characterize. An indicator of “conversational temperature” was introduced in [43]. That, and similar approaches, can provide a means to incorporate the interactivity aspects of a call into the quality estimations. More recently, a promising indicator for conversational quality has been introduced in [44].

New lines of research regarding conversational quality assessment are emerging in the domain of multi-party systems (such as telemeetings and VoIP bridges). The ITU has recently released a recommendation for the subjective assessment of such systems [45].

2.5.2 Video

The assessment of video quality has been under study since shortly after television became a common service. Traditionally, subjective video assessment is carried out according to the ITU-R BT.500 series of recommendations (the latest of which is described in [46]), designed for television services. For more modern multimedia services, video assessment can be done according to [47]. More recently, the combined assessment of audio and video within audiovisual services has given rise to newer assessment methods [48], which take the two modalities involved into account at the same time.

Beyond subjective assessment, there exist a large number of objective models for estimating video quality. A very comprehensive overview of the current state of the art in this domain can be found in [49]. Most of the best available video quality estimators are FR models, which compare a degraded signal to the original one, and produce an estimate of its quality. Of the large variety of FR models available in the literature, it is worth mentioning PEVQ [50] and VQuad HD [51], which represent the current standards in FR video assessment. PEVQ is designed to assess streaming video in the presence of packet losses, for a variety of resolutions ranging from QCIF to VGA, in several commonly used encodings (H.264, MPEG-4 Part 2, Windows Media 9, and Real Video 10). VQuad HD, in contrast, aims to assess high-definition (1080p, 1080i, and 720p) streams, also considering lossy transmission scenarios and frame freezes up to 2 s in duration.

Other well-known methods for FR assessment include those based on the Structural Similarity Index (SSIM) [52, 53], related variants [54], and MOSS [55]. Singular-Value Decomposition (SVD)-based methods (cf. [56]) are also available in the literature.

For audiovisual assessment (in particular for IPTV systems), we can single out the recent ITU standardized parametric models for audiovisual quality assessment ITU-T P.1201 [57] (P.1201.1 [58] for SD content, P.1201.2 [59] for HD content), which provide very accurate estimates and are suitable for use in monitoring and control applications, as they do not require a reference in order to produce estimates.

2.5.2.1 HTTP Video

Over the past few years, video streaming sites such as YouTube, Netflix, Hulu, or Vimeo have exploded onto the scene, with video streaming today representing a very large fraction of the overall traffic of the Internet. These services, in contrast to older ones, use HTTP as their transport (as opposed to RTP/UDP or MPEG2-TS, as is commonly the case in legacy services). This has interesting implications for quality assessment, as HTTP provides a lossless transport, whereas for UDP-based streaming, losses are the single major source of artifacts and quality degradations. In contrast, HTTP-based streaming suffers from delay issues, but no actual image quality degradation during transport. When packets are lost, and the underlying TCP congestion avoidance algorithm kicks in, the throughput decreases and the playout buffer may be emptied, which results in freezes and rebuffering events. In this context, the presence, length, and distribution of these freezes are what determine the quality perceived by the users. The move from progressive download streaming toward adaptive HTTP streaming (such as MPEG-DASH, or Apple's Live Streaming) further complicates the development of good predictive models for QoE of HTTP video. Efforts in this domain are becoming more common of late (cf., e.g., [60–63]). It has been shown that in the context of HTTP-based streaming, freezes seem to dominate the perceived quality of the service [64], but inferring when the playout buffer suffers under-runs from observed network behavior remains an open issue for building predictive models for the quality of these types of systems.
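The buffer dynamics described above can be sketched with a minimal per-second simulation. All numbers (bitrate, buffer thresholds, throughput trace) are invented for illustration; the point is only the mechanism: HTTP delivery converts network-level impairments into stalls rather than image artifacts.

```python
def simulate_playout(throughputs_mbps, bitrate_mbps=4.0, target_buffer_s=2.0):
    """Crude per-second playout-buffer model for HTTP streaming.

    Each entry of throughputs_mbps is the download rate during one second.
    Playback freezes when the buffer runs dry and resumes once the buffer
    refills to target_buffer_s. Returns (rebuffering_events, frozen_seconds).
    """
    buffer_s, freezes, frozen = target_buffer_s, 0, 0.0
    playing = True
    for tp in throughputs_mbps:
        buffer_s += tp / bitrate_mbps         # seconds of video fetched
        if playing:
            if buffer_s >= 1.0:
                buffer_s -= 1.0               # one second played out
            else:                             # buffer under-run: freeze
                playing = False
                freezes += 1
                frozen += 1.0
        else:
            frozen += 1.0                     # still stalled this second
            if buffer_s >= target_buffer_s:   # refilled: resume playback
                playing = True
    return freezes, frozen
```

A lossless but slow transport produces no visual degradation, only stalls: a five-second trace with zero throughput, `simulate_playout([0.0] * 5)`, reports one rebuffering event lasting three seconds, while a steady trace at the video bitrate plays through with no freezes at all.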

2.5.3 Cloud and Web-Based Services

In terms of services and their QoE, speech (also audio) and video are considered “traditional” and have been, as discussed above, studied extensively. New service trends, however, make it interesting to expand our understanding of QoE to other services which are not necessarily media-oriented. Over the past few years, there has been an increasing trend of migrating services to “the Cloud.” Cloud-based versions of previously locally hosted services are, by the very nature of the Cloud, bound to perform differently, and also be experienced differently by their users. It is therefore important to understand what QoE means in this context (that is, what does it take to make the definition of QoE operational for this type of service). There are several challenges in this regard, which have been described in [65]. Studies have already been conducted for several non-traditional types of service such as Cloud storage [66] and remote desktop service [67].

In a more general setting, many Cloud-based services are provided via the Web, or via mobile applications which often use Web services as their back-end. This has spurred a strong interest in understanding and modeling the QoE of Web applications and services. This is complicated by the fact that practically any type of service can be delivered via the Web, and thus Web QoE models have restrictions in their generalization capabilities. For the most part, studies in this domain have focused on the performance aspects [12, 68, 69]. It is the case, however, that performance is not the only factor influencing QoE in the Web, and other aspects such as aesthetics [70] and usability, and their interplay with performance on Web QoE, are the focus of currently ongoing efforts.

2.6 Human Factors and QoE

Earlier in this chapter, we pointed to a set of factors at the human, system, and context level, that – either independently or interlinked – may have an influence on QoE. In this section, we take a closer look at a number of psychological and social aspects that are situated at the human level and need to be considered when investigating QoE, either because of their possible influence on QoE, or as elements of QoE. Note that as in [4] we refer to “human influence factors,” which go beyond those defining the “user” as a role. The underlying rationale is that there are human characteristics and properties beyond those that are linked to a person in the specific role of the “user” of a service, product, or application, that may have a strong impact on QoE. This distinction is to a major extent artificial: although a specific role is taken up when using a service or application, the person remains the same. We therefore give a brief overview of human IFs and point to the implications and related challenges for the field of QoE. This is represented in the layered model in Figure 2.1 by the User-as-a-role layer being on top of the Human layer representing the user as a person. In this context we also underline the particular importance of affective states in the light of the aforementioned new Qualinet definition of QoE.

2.6.1 Human Influence Factors: Known unknowns or unknown knowns?

Several conceptual representations of QoE (see, e.g., [17, 22, 71–73]) that have been introduced in the literature over recent years underline the importance of human IFs and the need to better understand which factors influence QoE, in which circumstances, and how (e.g., in which order of magnitude and direction). In addition, it is often repeated that human IFs are very complex: most of them refer to abstract and intangible notions that exist – consciously or unconsciously – in the mind of a human user. As a consequence, many of these factors are invisible and black-boxed: they may somehow be formalized in the mind of a human user (e.g., in the form of stories, feelings, etc.), but in many cases they are not, and their influence takes place at an unconscious and latent level. Moreover, given their inherent subjective nature, the conceptual boundaries of a range of human factors such as values, emotions, attitudes, expectations, etc. have been the subject of long – and in some cases still ongoing – debates in a range of scientific communities and fields. Further adding to the complexity, some of these factors have a very transient and dynamic character: they evolve in often unpredictable ways, over time-frames that can be very short (e.g., emotions) or longer (e.g., moods, skills). Although some factors have a more stable character (e.g., attitudes), that does not necessarily make them easier to grasp. This is partly because human factors are in many cases strongly intertwined and correlated, either with other human factors (e.g., values and socio-cultural background) or with other influencing factors at the system or context level (e.g., affective state and social context). As a result, their possible impact may for instance be reinforced or reduced depending on the presence or absence of specific other factors. Prevalent challenges are therefore not only to define and operationalize human factors, but also to measure them, to disentangle them, and to understand their relative impact on QoE [20].

In practice, and without a doubt strongly linked to this inherent complexity, the focus on human IFs was (and often still is) limited to the relatively easy-to-grasp “usual suspects” such as gender, age, visual acuity, expertise level, etc. Moreover, these aspects are usually included as something in the margin (e.g., to describe a sample population), rather than as the main focus of the research or as independent variables whose impact on QoE is thoroughly investigated. More recently, the interest in human IFs in general, and in aspects such as expectations and affective states in particular, has strongly increased. This is also reflected in the growing literature and number of studies focusing on the influence of one or more human factors on QoE in different application domains (such as IP-based services, mobile applications and services, gaming, etc.). Nevertheless, despite this growing interest, the understanding of more complex human IFs and their possible impact on QoE is still relatively limited. This largely “uncharted territory” status may be partly due to the fact that a clear and broader picture of the wide range of human factors (next to system and context IFs) that may bear an influence on QoE was for a long time missing within the community.

2.6.2 Shedding Light on Human Influence Factors

As mentioned in Section 2.4, part of the recent community-driven efforts (e.g., within the COST Action 1003 – Qualinet) toward further conceptualizing the field of QoE have been oriented toward tackling exactly this challenge. In the following, we give a number of examples of possible influence factors, using the classification presented in [4] and further discussed in [20] as a basis. Note that this overview cannot and should not be considered exhaustive.

Since human perception is an essential part of QoE, a possible way to structure the multitude of human IFs is – as mentioned above – to consider them at different perceptual levels. More concretely, some human IFs are most relevant for “low-level sensory processing.” In the literature, this is also referred to as “bottom-up” or “data-based” processing, which draws on the processing of “incoming data” from the senses and their receptors [74]. Characteristics related to the physical, emotional, and mental state of the human user may bear an influence on QoE in this respect [4]. Aspects that are particularly salient at this level are, for example, visual and auditory sensitivity (and other properties of the Human Visual System (HVS) and Human Auditory System (HAS)). It can be argued that most of these factors have a rather constant or even dispositional character. Moreover, they can be strongly correlated with other human factors (e.g., men have a higher probability of being color blind than women). Other factors, such as lower-order emotions (i.e., spontaneous and uncontrollable emotional reactions [75] that may also influence QoE at this perceptual level), have a much more dynamic character.

Top-down or knowledge-based processing (called “high-level cognitive processing” in the framework of [22]), on the contrary, refers to how people understand, interpret, and make sense of the stimuli they perceive. This type of processing is nearly always involved in the perceptual process (albeit not always consciously), and “knowledge” is to be broadly understood as any information that a perceiver brings to a situation [74]. In the context of QoE – with its inherent subjective character and the importance of the associated evaluative and judgmental processes – understanding the impact of human IFs at this level is of crucial importance. Again, a distinction can be made between factors that are relatively stable and factors that have a more dynamic character. Examples of the former are the socio-cultural background (which is to a large extent linked to and determined by contextual factors) and the educational level and socio-economic position of a human user, which are often strongly linked. Both are relatively, yet not entirely, stable: depending on the life-stage, their impact on different features may be very different (e.g., the perceived fairness of the price of a service may be different for a student without a stable income than for a pensioner in a much more comfortable financial situation). Other examples of relatively stable factors that may influence QoE and its related evaluation and judgment processes are goals, which fuel people's motivations and correspond to underlying values. A common distinction is made between goals and values that are more pragmatic and instrumental (called “do-goals” in [76]) and others – situated at a higher abstraction level – that go beyond the instrumental and refer to ultimate life goals and values (“be-goals” in [76]). The saliency of specific goals and values in the mind of a human user may strongly influence QoE and the importance attached to specific features.

Similarly, the motivations (which can differ in terms of, e.g., intensity and orientation) of a person to use a service or application may have a strong impact. For example, when the use of an application (e.g., a mobile game) is intrinsically motivated, its inherent characteristics and features (e.g., game-play, story, joy-of-use, etc.) will be most important. Extrinsically motivated behavior, in contrast, is driven by an external, separable outcome [77] (e.g., to earn money). Other examples of more dispositional or relatively stable human influence factors are personality (in the literature, reference is also made to emotional traits or personality traits in this respect), preferences, and attitudes. The latter two are particularly interesting in relation to QoE, as they are both intentional (i.e., oriented toward a specific stimulus, object, person) and contain evaluative judgments or beliefs (see also [20]). Moreover, attitudes are not only about cognition, they also have an affective and behavioral component and these three components influence each other.

Examples of human factors that have a more dynamic character are expectations, previous experiences, moods, and emotions. As discussed in Section 2.2, the ITU definition of QoE already pointed explicitly to the possible influence of “user expectations” on QoE. In the literature, different types of expectations are distinguished (see, e.g., [78]) and expectations can be based on a range of sources. For instance, they may be modified based on previous experiences, on experiences from others, on advertisements, etc. Despite their strong relevance for QoE and a number of recent studies on expectations and QoE (see, e.g., [79, 80]), the impact of implicit and explicit expectations and the way in which they interplay with or depend on other influence factors (such as economical context, physical context, device context, content, etc.) is still rather poorly understood. At the end of this chapter, the relation between user expectations, general QoE, and price is further discussed.

Finally, we mention two affective states that may have a strong impact on QoE and that – in recent years – have strongly gained importance, namely emotions and moods (see also [81]). Both are dynamic, yet there are important differences. Firstly, emotions are intentional whereas moods are not. Secondly, whereas moods have a longer duration (they can last for a couple of hours to a number of days), emotions are characterized by their highly dynamic character: they last for a number of seconds to a number of minutes. Emotions are thus much more “momentary.” The growing interest in emotions and other affective states in the field of QoE is well reflected in the recent literature [82–86] and in ongoing research activities (e.g., within the scope of Qualinet). The literature on emotions in other disciplines (psychology, neuroscience, etc.), focusing on the influence of emotions on other psychological functions and how they relate, for instance, to human perception and attention, cognitive processes, and behavior, is overwhelming. In spite of this, major challenges still lie ahead for the field of QoE. The first is to bring this already available knowledge together and integrate it into the state of the art in the field of QoE. The second is to gain a better understanding of how emotions and other affective states can be measured and how they influence QoE, based on what is already known from other fields and empirical research. Finally, an additional challenge lies in the development of heuristics and guidelines to better account for the influence of human affect on QoE in practice.

2.6.3 Implications and Challenges

Referring back to what was already argued earlier in this chapter, the field of QoE has strongly evolved and shifted toward the user perspective over the last years. However, as the above brief overview of relevant human factors and aspects already indicated, new challenges and frontiers have appeared, and they are situated exactly at this level. First of all, it is clear that the involvement of researchers from disciplines and fields studying “human factors” is a necessary precondition to deepen the understanding of human factors, related psychological processes (e.g., decision making, perception, cognition, emotion), and what they mean for QoE (both in theory and in practice). The related disciplines can be framed under the umbrella of the behavioral and social (“soft”) sciences and include, for example, the social sciences (communication science, sociology, etc.), cognitive psychology, social psychology, and anthropology. This also implies that prevalent language barriers and epistemological differences that exist between these more human-oriented fields and those that investigate QoE from a technical (e.g., networking) perspective need to be bridged in order to enable mutual understanding and to approach QoE from a genuinely interdisciplinary perspective (see also [87]).

Secondly, stating that a range of factors at the human level may have a strong influence is not enough. The implications at the methodological level are major, and they cannot simply be ignored. As a result, methods and measures that allow us to investigate the importance and possible impact of human factors (not only separately, but also in relation to other influence factors) need to be further explored, adopted, and embedded into traditional “subjective testing” practices. As the latter are too limited and fail to take subjective, complex, and dynamic factors into account, complementary methods, tools, and approaches (e.g., the use of crowd-sourcing techniques, physiological and behavioral tools, QoE research in the home environment, etc.) are currently being actively explored within the QoE community.

Thirdly, the wide range of human factors – of which we gave a brief overview – also implies that users may differ from each other in many ways. Future research should therefore seek to move beyond the notion of the “average user” and explore approaches that allow us to take the diversity and heterogeneity of users into account. In practice this means moving toward the smart use of segmentation approaches, which can provide valuable input for the business domain in the QoE ecosystem (e.g., in view of developing tailored charging schemes oriented toward QoE-based differentiation).

Finally, defining QoE in terms of emotional states (“a degree of delight or annoyance”) and pointing to the importance of enjoyment next to, or as opposed to, the utility of an application or service implies that the traditional indicators of QoE – which are oriented toward the estimation of user satisfaction in relation to experienced (technical) quality and instrumental aspects – need to be reconsidered as well. The relation between traditional QoE indicators and affective states such as delight and annoyance needs to be thoroughly investigated, not only as such, but also in the presence of the identified QoE influence factors. Several recent studies have started to explore the relevance and use of alternative self-report, behavioral, and physiological measures of QoE (to gain a better insight into indicators of, e.g., engagement, delight, positive and negative affect, etc.) in this respect (see, e.g., [88, 89]). However, they represent only a small proportion of the literature, implying that in most cases QoE research is still a matter of business as usual, and that crossing some of the identified new frontiers will either require major changes or turn out to be just a fancy new wrapping, fundamentally nothing new.

2.7 The Role of QoE in Communication Ecosystems

Summarizing what has been said so far, we have seen that the transition from QoS to QoE and the corresponding turn toward the user involves significant methodological implications, and thus represents a genuine paradigm change rather than a mere update from “QoS 1.0” to “QoS 2.0”, especially since QoS research has for decades been shaped mainly by the study of network QoS parameters such as delay, jitter, and packet loss rate. Shifting the end-user/end-customer back to the center of our attention recalls a similar, though much more illustrious, paradigm change, which took place in the middle of the 16th century, when the Polish-Prussian astronomer Nicolaus Copernicus put the transition from the geocentric to the heliocentric model of the universe into effect. In an analogous way, the QoE model of service quality places the user and her needs at the center of the technological universe (and not the other way round), and the related paradigm change has consequently been labeled an “Anti-Copernican Revolution” [6, 90].

To describe the resulting interplay between end-users, business entities, and the technological environment in more detail, Kilkki has proposed using the overarching metaphor of “Communication Ecosystems” as an analogy to the well-known concept of biological ecosystems [91]. Remember that in biology, ecosystems basically describe communities of organisms and their environment as a joint system of mutual interactions and interdependencies. Similarly, a communication ecosystem is populated by communities of private and business users as well as further commercial entities, all of them interacting with each other using communication services offered by the technological environment. As Kilkki points out in his analysis, this approach allows us to obtain a profound understanding of the generation of expected novel products and services together with corresponding business models, based on a deep understanding of human needs and how to serve them in terms of technology and economics.

It is worth noting that the analogy between biological and communication ecosystems goes even further, especially due to the fundamental role of hierarchical structures, which we will analyze in a bit more detail here. In biology, there are mainly two ways of interaction within an ecosystem: in a horizontal perspective, individuals and communities either compete or cooperate with each other, whereas from a vertical point of view, the main types of interaction are “eating” and “being eaten,” which are reflected in concepts like food chains or ecological pyramids. In this context, we distinguish producers (e.g., plants or algae) from consumers (i.e., animals like herbivores or carnivores) and decomposers (e.g., bacteria or fungi). Moreover, producers and consumers are ordered along the food chain into five trophic layers, starting from plants, through primary (herbivores eating plants), secondary (carnivores eating herbivores), and tertiary (carnivores eating carnivores) consumers, up to apex predators, which form the top of the food chain and have no further natural enemies.

With communication ecosystems, we can easily identify comparable hierarchical structures on different levels, for instance:

  • ISO/OSI model. In the basic reference model for open systems interconnection, [ISO/IEC 7498-1] distinguishes seven different layers of a communication system – the (1) physical, (2) data link, (3) network, (4) transport, (5) session, (6) presentation, and (7) application layers. Each layer is assumed to use services offered from the layers below to serve the needs of the layers above.
  • Internet protocols. To fulfill the tasks attributed to the various layers of the OSI model, a plethora of communication protocols have been specified, which may compete against each other on a single layer; for instance on the transport layer, UDP has been designed for real-time communication and offers unreliable but efficient transport, while TCP provides guarantees against packet loss but can incur additional delays due to re-transmission and rate adaptation. Overall, protocols from a lower layer offer services to protocols from higher layers, which in the case of the Internet has led to the notorious “hourglass model” underlining the specific role of IP.
  • Communication services value chains. The chain starts from physical network resources (like fiber networks) owned by infrastructure providers and used by network service providers to offer Internet access and connectivity. Based on this, application service providers are able to offer specific services and applications, while content providers and OTTs (over-the-top providers) use the underlying structures to publish their content to the end-customer. Again, within one layer, providers may either compete against each other for joint market shares and/or cooperate, for instance in guaranteeing end-to-end inter-carrier services.

With communication ecosystems, modeling cooperation and competition between the various entities and communities involved is significantly facilitated by methods and tools from cooperative and non-cooperative game theory, as well as by related micro-economic concepts, which together allow an in-depth understanding of the interplay between the different stakeholders and their individual interests. For instance, determining the value of a resource or service provided to an individual customer is not only one of the important issues within QoE research, but has also traditionally played a key role in micro-economics, especially in the context of utility theory. Here, the so-called utility function is used to model user preferences over certain sets of goods and services, which allows us to derive indifference prices as well as to maximize individual and social welfare in an efficient way.

Hence, research on communication ecosystems turns out to be an intrinsically interdisciplinary endeavor, joining methods and approaches from communication technology and user-centered research with micro-economics and a variety of further social sciences. Of course, in order to enable a methodologically consistent approach, we have to suppose – as mentioned earlier in this chapter – a certain degree of common language for formal description as an additional requirement. In fact, this highlights the pivotal position of service quality in our context: the notion of quality serves as a key concept for the technological, economic, and user sides alike, and while the primary understanding of this concept may still differ slightly across the three perspectives (technology: QoS, economics: utility, user: QoE), we have sufficient tools at hand to unify these approaches into a joint framework, as exemplified in Figure 2.3.


Figure 2.3 The triangle model for ecosystem quality

The resulting triangle model describing the close interplay between technological, economical, and user-centric forces may be summarized briefly as follows.

  • User: QoE with a communication service is influenced by two main dimensions (i.e., the service quality as offered by the network and the tariff charged for this quality). While the former may be described in terms of a QoS-to-QoE mapping and is the focus of most of the ongoing QoE research, the latter is determined by corresponding economic pricing models. As a result, the user communicates his/her QoE evaluation (e.g., in the form of a MOS) as feedback into the system, while on a longer time scale, the quality experienced by the user is of fundamental importance to the customer churn rate.
  • Network: Network service providers, application service providers, and content providers cooperate along the value chain, according to business models which arrive as input from the economic side. As a result, a certain level of network QoS is delivered toward the user, which is assumed to satisfy user expectations and at the same time, aggregated over all customers, maximize social welfare in the system.
  • Economics: Utility functions, which formally describe the value of technical resources to the user, are fundamental for deriving appropriate dynamic prices to be paid by the customer. At the same time, the customer churn rate serves as key input for confirming the chosen tariff structure as well as, again on a longer time scale, the underlying business and value-creation models.

Note that this triangle model comprises at the same time two different time/service granularities. On the one hand, the QoE evaluation by the user is very application- and context-specific and subject to continuous change over time, while the resulting maximization of social welfare as well as the corresponding dynamic pricing schemes provide direct reactions to the current state. On the other hand, customer churn rate and business models vary on a longer time scale, similar to the fundamental laws governing the relationship between network-centric QoS and user-centric QoE.

In the remainder of this section, our focus will be directed toward further analyzing the question of how to charge for QoE, which is intimately linked with the user perspective described above. To this end, Figure 2.4 depicts a simple feedback model for QoE-based charging as described in [28].


Figure 2.4 Model for QoE-based charging.

Source: Reichl et al., 2013 [28]

In this model, we analyze a system with limited overall resources, which is therefore prone to congestion depending on the total demand created by the users. Hence, the system may be described by the following four functions.

  • QoS function: q = q(d)
  • Demand function: d = d(p)
  • Price function: p = p(x)
  • QoE function: x = x(q, p)

Here, the mentioned relation between network QoS and overall demand is captured by the QoS function q(d), while demand essentially depends on the price charged to the customer, according to the demand function d(p). Moreover, the level of network QoS delivered to the user directly influences his/her experience, hence the user's QoE function x = x(q, ·) depends on QoS as one input parameter.

The relationship between QoE and price is less trivial, and indeed forms the core of this model. Note that, in fact, prices act in an interesting dual role: on the one hand, the price to be paid for service quality depends on the delivered quality level (the better the quality, the more expensive the service); on the other hand, the price also has a direct impact on user expectations (the more expensive a service, the better it has to be) and, subsequently, also on the QoE evaluation (if customer expectations are high, the resulting experience is by definition lower than if prices and thus expectations are low from the beginning).

As a consequence, the price function p = p(x) describes the dependency of the price on the service quality, while the price p serves at the same time as (another) input parameter to the QoE function x(q, p). While, of course, this QoE function may additionally depend on further user context parameters, for the sake of clarity we restrict ourselves to the two-dimensional version denoted above.

Note that we may easily determine the marginal behavior of x(q, p) for p = 0 and q = 1, respectively, as follows. If the price is kept constant at zero, the service is delivered for free throughout. In this case, x(q, p = 0) reflects the fundamental mapping between QoS and QoE, which for certain scenarios has already been captured in terms of fundamental laws for QoE (like, for instance, the IQX hypothesis [14] or logarithmic laws of QoE [13]). In contrast, keeping QoS constant at its maximal level allows us to express the impact of user expectations on the QoE evaluation: if we assume that getting the best quality for free results in maximal QoE, then increasing the price should lead to a monotonically decreasing QoE function, whose shape, however, is largely unknown (in [28], linearity has been assumed, but convex or sigmoid shapes are also possible). Finally, if either QoS is zero or the price is infinitely high, the QoE function becomes zero. Altogether, a simple form of the QoE function which satisfies these marginal restrictions is obtained by separating the impact of network QoS and user expectations and assuming a product form for QoE.

  • Separable QoE function: x = x(q, p) = xQ(q) · xE(p)

Here, xQ captures the QoS dimension, while xE accounts solely for the impact of price.
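The marginal restrictions above can be checked on a concrete instance of the separable form. The following sketch is purely illustrative: it assumes a logarithmic QoS-to-QoE mapping for xQ (in the spirit of the logarithmic laws of QoE [13]) and, as assumed in [28], a linearly decreasing price factor xE; all normalization constants are hypothetical.

```python
# Illustrative separable QoE function x(q, p) = xQ(q) * xE(p).
# Functional forms and constants are assumptions, not those of [28].
import math

def x_qos(q):
    """Logarithmic QoS-to-QoE mapping, normalized so x_qos(0) = 0 and x_qos(1) = 1."""
    return math.log(1.0 + (math.e - 1.0) * q)

def x_price(p, p_max=10.0):
    """Linearly decreasing expectation factor: free service -> 1, p >= p_max -> 0."""
    return max(0.0, 1.0 - p / p_max)

def qoe(q, p):
    """Separable QoE function x(q, p) = xQ(q) * xE(p)."""
    return x_qos(q) * x_price(p)
```

With these choices, qoe(q, 0) reproduces the pure QoS-to-QoE law, qoe(1, p) decreases monotonically in the price, and the function vanishes when QoS is zero or the price reaches its maximum, matching the marginal behavior discussed above.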

In [28] the resulting system of equations has been analyzed mathematically. As the main result, it turns out that under some rather general assumptions concerning the existence and sign of the second derivatives of the four mentioned functions, the system possesses a trivial, non-stable fixed point where services are offered at minimal quality, but for free. In addition, the system also possesses a non-trivial, stable fixed point, which forms a Nash equilibrium and is determined by balancing the trade-off between QoE and the price to be charged.
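The feedback loop itself can be illustrated by iterating the four functions numerically. The sketch below uses hypothetical functional forms and constants (not those analyzed in [28]): a capacity-limited QoS function, a price-elastic demand function, a price proportional to QoE, and the separable QoE function discussed above. Under these assumptions, the iteration exhibits the two fixed points described in the text.

```python
# Hypothetical instance of the feedback system q = q(d), d = d(p),
# p = p(x), x = x(q, p). All forms and constants are illustrative.
import math

C = 10.0      # system capacity: demand at this level drives QoS to zero
P_MAX = 10.0  # price at which user expectations drive QoE to zero

def qos(d):     # QoS function q(d): congestion degrades delivered quality
    return max(0.0, 1.0 - d / C)

def demand(p):  # demand function d(p): higher prices reduce demand
    return C / (1.0 + p)

def price(x):   # price function p(x): better experience commands a higher price
    return 5.0 * x

def qoe(q, p):  # separable QoE function x(q, p) = xQ(q) * xE(p)
    return math.log(1.0 + (math.e - 1.0) * q) * max(0.0, 1.0 - p / P_MAX)

def iterate(x0, steps=100):
    """Fixed-point iteration of the feedback loop, starting from QoE level x0."""
    x = x0
    for _ in range(steps):
        p = price(x)
        d = demand(p)
        q = qos(d)
        x = qoe(q, p)
    return x
```

Starting from any positive initial QoE, the iteration oscillates with decaying amplitude around, and converges to, the same non-trivial fixed point (roughly x ≈ 0.58 with these particular constants), while an initial QoE of zero remains at the trivial fixed point where a minimal-quality service is offered for free; a small perturbation away from zero does not return to it, mirroring its non-stability.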

Moreover, the existence of this equilibrium has been examined in extensive user trials, also described in [28] and the references cited therein. For instance, a user study has been performed with more than 40 test subjects, who were asked to choose freely among 20 quality/price levels for three short video-on-demand movies during a 5-minute free trial period. In order to achieve realistic results, the test subjects were given real money (10 euros in cash) at the beginning of the trial, which they could freely spend on improving the video quality (or take home afterwards).

As the main results of this trial, we can confirm that quality matters to users: more than 90% of the trial subjects decided to invest some of their (real) money in improving their QoE, and the opportunity to test the various QoE/price levels in the initial 5 minutes of each movie was used extensively (some users made more than 80 level changes, i.e., one or more changes every 4 s!). The final fixed point was usually (i.e., in more than 80% of cases) reached following a decaying oscillation pattern with different amplitudes and/or decay factors; note that a simple classification algorithm allows for successful automatic classification of the user convergence behavior in almost all cases, see [28] for further details.

2.8 Conclusions

In this chapter we have taken a “full-stack tour” of QoE, covering its definitions, its relation to QoS, the factors that affect it and how they relate to different services, its relation to human factors, and its role in communication ecosystems. The first conclusion to take away is that QoE is a complex concept, lying at the junction of several, mostly unrelated, scientific, technical, and human disciplines.

It is clear that research in the QoE domain is evolving steadily toward the user. This poses conceptual and practical difficulties, as outlined in Section 2.6, but is a necessary step to take if QoE is to establish itself as a mature field of study. QoE is, after all, all about the user!

That is not to say, of course, that the technical aspects are to be left aside. As seen in Sections 2.4 and 2.5, the technical factors and services under consideration play a significant role in how quality is perceived by users.

Finally, we have also considered the economic impact of QoE in future communications ecosystems. It seems clear that QoE will play an important part in the economy of Internet services in the near future, hence understanding it properly is key to developing successful business models and practices.

Acknowledgments

K. De Moor's work was carried out during the tenure of an ERCIM “Alain Bensoussan” Fellowship Programme and received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 246016.

M. Varela's work was partially supported by Tekes, the Finnish Funding Agency for Technology and Innovation, in the context of the CELTIC QuEEN project.

Additional support from the EU Marie Curie program in the framework of the RBUCE WEST International Research Chair NICE (Network-based Information and Communication Ecosystems) at Université Européenne de Bretagne/Télécom Bretagne is gratefully acknowledged.

References

  1. ITU-T. Recommendation P.10/G.100 – Vocabulary for Performance and Quality of Service — Amendment 1: New Appendix I – Definition of Quality of Experience (QoE). 2007.
  2. Fiedler, M., Kilkki, K., and Reichl, P., ‘Executive Summary – From Quality of Service to Quality of Experience.’ In Fiedler, M., Kilkki, K., and Reichl, P. (eds), From Quality of Service to Quality of Experience. Dagstuhl Seminar Proceedings 09192, Dagstuhl, Germany, 2009.
  3. Möller, S., Quality Engineering – Qualität kommunikationstechnischer Systeme. Springer, Berlin, 2010.
  4. Le Callet, P., Möller, S., and Perkis, A. (eds), Qualinet White Paper on Definitions of Quality of Experience, Lausanne, Switzerland, June 2012.
  5. Crowcroft, J., et al., ‘QoS downfall: At the bottom, or not at all!’ Proceedings of the ACM SIGCOMM Workshop on Revisiting IP QoS: What have we learned, why do we care? ACM, 2003, pp. 109–114.
  6. Reichl, P., ‘Quality of experience in convergent communication ecosystems.’ In Lugmayr, C.D.Z.A. and Lowe, G.F. (eds), Convergent Divergence? Cross-Disciplinary Viewpoint on Media Convergence. Springer, Berlin, 2014.
  7. ITU-T. Recommendation E.800 – Definitions of Terms Related to Quality of Service. 2008.
  8. ETSI. ETR 003 – Network Aspects (NA); General aspects of Quality of Service (QoS) and Network Performance (NP). 1994.
  9. Crawley, E., et al., ‘A framework for QoS-based routing in the Internet.’ IETF RFC 2386, 1998.
  10. IETF Network Working Group. RFC 2475 – An Architecture for Differentiated Services. 1998.
  11. Guyard, F., et al., ‘Quality of experience estimators in networks.’ In Mellouk, A. and Cuadra, A. (eds), Quality of Experience Engineering for Customer Added Value Services: From Evaluation to Monitoring. Iste/John Wiley & Sons, New York, 2014.
  12. Egger, S., et al., ‘Time is bandwidth? Narrowing the gap between subjective time perception and quality of experience.’ 2012 IEEE International Conference on Communications (ICC 2012), Ottawa, Canada, June 2012.
  13. Reichl, P., et al., ‘The logarithmic nature of QoE and the role of the Weber–Fechner Law in QoE assessment.’ ICC 2010, pp. 1–5.
  14. Fiedler, M., Hoßfeld, T., and Tran-Gia, P., ‘A generic quantitative relationship between quality of experience and quality of service.’ IEEE Network Special Issue on Improving QoE for Network Services, June 2010.
  15. Fiedler, M. and Hoßfeld, T., ‘Quality of experience-related differential equations and provisioning-delivery hysteresis.’ 21st ITC Specialist Seminar on Multimedia Applications – Traffic, Performance and QoE, Phoenix Seagaia Resort, Miyazaki, Japan, March 2010.
  16. Shaikh, J., Fiedler, M., and Collange, D., ‘Quality of experience from user and network perspectives.’ Annals of Telecommunications, 65(1&2), 2010, 47–57.
  17. Möller, S., et al., ‘A taxonomy of quality of service and quality of experience of multimodal human–machine interaction.’ International Workshop on Quality of Multimedia Experience (QoMEx), July 2009, pp. 7–12.
  18. Stankiewicz, R. and Jajszczyk, A., ‘A survey of QoE assurance in converged networks.’ Computer Networks, 55, 2011, 1459–1473.
  19. Skorin-Kapov, L. and Varela, M., ‘A multidimensional view of QoE: The ARCU model.’ Proceedings of the 35th International Convention MIPRO, Opatija, Croatia, May 2012, pp. 662–666.
  20. Reiter, U., et al., ‘Factors influencing quality of experience.’ In Möller, S. and Raake, A. (eds), Quality of Experience: Advanced Concepts, Applications and Methods. Springer, Berlin, 2014.
  21. Jumisko-Pyykkö, S. and Vainio, T., ‘Framing the context of use for mobile HCI.’ International Journal of Mobile Human–Computer Interaction, 2(4), 2010, 1–28.
  22. Jumisko-Pyykkö, S., ‘User-centered quality of experience and its evaluation methods for mobile television.’ PhD thesis, Tampere University of Technology, Finland, 2011.
  23. Raake, A., ‘Short- and long-term packet loss behavior: Towards speech quality prediction for arbitrary loss distributions.’ IEEE Transactions on Audio, Speech, and Language Processing, 14(6), 2006, 1957–1968.
  24. Varela, M. and Laulajainen, J.-P., ‘QoE-driven mobility management – integrating the users' quality perception into network-level decision making.’ 2011 Third International Workshop on Quality of Multimedia Experience (QoMEX), 2011, pp. 19–24.
  25. Seppänen, J. and Varela, M., ‘QoE-driven network management for real-time over-the-top multimedia services.’ IEEE Wireless Communications and Networking Conference 2013, Shanghai, China, April 2013.
  26. Fajardo, J.-O., et al., ‘QoE-driven dynamic management proposals for 3G VoIP services.’ Computer Communications, 33(14), 2010, 1707–1724.
  27. Gomez, G., et al., ‘Towards a QoE-driven resource control in LTE and LTE-A networks.’ Journal of Computer Networks and Communications, 2013, 2013, 1–15.
  28. Reichl, P., et al., ‘A fixed-point model for QoE-based charging.’ Proceedings of the 2013 ACM SIGCOMM Workshop on Future Human-Centric Multimedia Networking (FhMN '13), Hong Kong, pp. 33–38.
  29. Raake, A., et al., ‘IP-based mobile and fixed network audiovisual media services.’ IEEE Signal Processing Magazine, 28(6), 2011, 68–79.
  30. ITU-T. Recommendation G.1011 – Reference Guide to Quality of Experience Assessment Methodologies. 2010.
  31. Raake, A., Speech Quality of VoIP: Assessment and Prediction. John Wiley & Sons, New York, 2006.
  32. ITU-T. Recommendation P.800 – Methods for Subjective Determination of Transmission Quality. 1996.
  33. ITU-T. Recommendation P.920 – Interactive Test Methods for Audiovisual Communications. 2000.
  34. Möller, S., et al., ‘Speech quality estimation: Models and trends.’ IEEE Signal Processing Magazine, 28(6), 2011, 18–28.
  35. ITU-T. Recommendation P.863 – Perceptual Objective Listening Quality Assessment. 2011.
  36. Möller, S. and Heute, U., ‘Dimension-based diagnostic prediction of speech quality.’ ITG Conference on Speech Communication, 2012, pp. 1–4.
  37. ITU-T. Recommendation P.862 – Perceptual Evaluation of Speech Quality (PESQ), an Objective Method for End-To-End Speech Quality Assessment of Narrowband Telephone Networks and Speech Codecs. 2001.
  38. ITU-T. Recommendation P.563 – Single Ended Method for Objective Speech Quality Assessment in Narrow-Band Telephony Applications. 2004.
  39. Rubino, G., Varela, M., and Mohamed, S., ‘Performance evaluation of real-time speech through a packet network: A random neural networks-based approach.’ Performance Evaluation, 57(2), 2004, 141–162.
  40. ITU-T. Recommendation G.107 – The E-model: A Computational Model for Use in Transmission Planning. 2011.
  41. da Silva, A.C., et al., ‘Quality assessment of interactive real time voice applications.’ Computer Networks, 52, 2008, 1179–1192.
  42. Varela, M., ‘Pseudo-subjective quality assessment of multimedia streams and its applications in control.’ PhD thesis, INRIA/IRISA, Rennes, France, 2005.
  43. Hammer, F. and Reichl, P., ‘Hot discussions and frosty dialogues: Towards a temperature metric for conversational interactivity.’ 8th International Conference on Spoken Language Processing (ICSLP/INTERSPEECH 2004), Jeju Island, Korea, October 2004.
  44. Holub, J., et al., ‘Management conversational quality predictor.’ Proceedings of PQS 2013, Vienna, Austria, September 2013.
  45. ITU-T. Recommendation P.1301 – Subjective Quality Evaluation of Audio and Audiovisual Multiparty Telemeetings. 2012.
  46. ITU-R. Recommendation BT.500-13 – Methodology for the Subjective Assessment of the Quality of Television Pictures. 2012.
  47. ITU-T. Recommendation P.910 – Subjective Video Quality Assessment Methods for Multimedia Applications. 2008.
  48. ITU-T. Recommendation P.911 – Subjective Audiovisual Quality Assessment Methods for Multimedia Applications. 1998.
  49. Chikkerur, S., et al., ‘Objective video quality assessment methods: A classification, review, and performance comparison.’ IEEE Transactions on Broadcasting, 57(2), 2011, 165–182.
  50. ITU-T. Recommendation J.247 – Objective Perceptual Multimedia Video Quality Measurement in the Presence of a Full Reference. 2008.
  51. ITU-T. Recommendation J.341 – Objective Perceptual Multimedia Video Quality Measurement of HDTV for Digital Cable Television in the Presence of a Full Reference. 2011.
  52. Lu, L., et al., ‘Full-reference video quality assessment considering structural distortion and no-reference quality evaluation of MPEG video.’ ICME (1), IEEE, 2002, pp. 61–64.
  53. Wang, Z., Lu, L., and Bovik, A.C., ‘Video quality assessment based on structural distortion measurement.’ Signal Processing: Image Communication, 19(2), 2004, 121–132.
  54. Moorthy, A.K. and Bovik, A.C., ‘Efficient motion weighted spatio-temporal video SSIM index.’ Human Vision and Electronic Imaging, 2010, p. 75271.
  55. Wang, Z., Simoncelli, E.P., and Bovik, A.C., ‘Multi-scale structural similarity for image quality assessment.’ Proceedings of the IEEE Asilomar Conference on Signals, Systems, and Computers, 2003, pp. 1398–1402.
  56. Tao, P. and Eskicioglu, A.M., ‘Video quality assessment using M-SVD.’ Image Quality and System Performance IV, 2007.
  57. ITU-T. Recommendation P.1201 – Parametric Non-Intrusive Assessment of Audiovisual Media Streaming Quality. 2012.
  58. ITU-T. Recommendation P.1201.1 – Parametric Non-Intrusive Assessment of Audiovisual Media Streaming Quality – Lower Resolution Application Area. 2012.
  59. ITU-T. Recommendation P.1201.2 – Parametric Non-Intrusive Assessment of Audiovisual Media Streaming Quality – Higher Resolution Application Area. 2012.
  60. Oyman, O. and Singh, S., ‘Quality of experience for HTTP adaptive streaming services.’ IEEE Communications Magazine, 50(4), 2012, 20–27.
  61. Alberti, C., et al., ‘Automated QoE evaluation of dynamic adaptive streaming over HTTP.’ Fifth International Workshop on Quality of Multimedia Experience (QoMEX), 2013.
  62. Stockhammer, T., ‘Dynamic adaptive streaming over HTTP – standards and design principles.’ Proceedings of the Second Annual ACM Conference on Multimedia Systems (MMSys '11), San Jose, CA, 2011, pp. 133–144.
  63. Riiser, H., et al., ‘A comparison of quality scheduling in commercial adaptive HTTP streaming solutions on a 3G network.’ Proceedings of the 4th Workshop on Mobile Video (MoVid '12), New York, 2012, pp. 25–30.
  64. Hoßfeld, T., et al., ‘Initial delay vs. interruptions: Between the devil and the deep blue sea.’ Fourth International Workshop on Quality of Multimedia Experience (QoMEX), 2012, pp. 1–6.
  65. Hoßfeld, T., et al., ‘Challenges of QoE management for Cloud applications.’ IEEE Communications Magazine, 50(4), 2012, 28–36.
  66. Amrehn, P., et al., ‘Need for speed? On quality of experience for file storage services.’ 4th International Workshop on Perceptual Quality of Systems (PQS), 2013.
  67. Casas, P., et al., ‘Quality of experience in remote virtual desktop services.’ IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), 2013, pp. 1352–1357.
  68. Strohmeier, D., Jumisko-Pyykkö, S., and Raake, A., ‘Towards task-dependent evaluation of Web-QoE: Free exploration vs. “Who ate what?”’ IEEE Globecom, Anaheim, CA, December 2012.
  69. Selvidge, P.R., Chaparro, B.S., and Bender, G.T., ‘The world wide wait: Effects of delays on user performance.’ International Journal of Industrial Ergonomics, 29(1), 2002, 15–20.
  70. Varela, M., et al., ‘Towards an understanding of visual appeal in website design.’ Fifth International Workshop on Quality of Multimedia Experience (QoMEX), 2013, pp. 70–75.
  71. Geerts, D., et al., ‘Linking an integrated framework with appropriate methods for measuring QoE.’ Second International Workshop on Quality of Multimedia Experience (QoMEX), 2010, pp. 158–163.
  72. Wechsung, I., et al., ‘Measuring the quality of service and quality of experience of multimodal human–machine interaction.’ Journal on Multimodal User Interfaces, 6(1&2), 2012, 73–85.
  73. Laghari, K., Crespi, N., and Connelly, K., ‘Toward total quality of experience: A QoE model in a communication ecosystem.’ IEEE Communications Magazine, 50(4), 2012, 58–65.
  74. Goldstein, E.B., Sensation and Perception, 6th edn. Wadsworth-Thomson Learning, Pacific Grove, CA, 2002.
  75. Zajonc, R., ‘Feeling and thinking: Preferences need no inferences.’ American Psychologist, 35, 1980, 151–175.
  76. Hassenzahl, M., ‘User experience (UX): Towards an experiential perspective on product quality.’ Proceedings of the 20th International Conference of the Association Francophone d'Interaction Homme–Machine (IHM '08), Metz, France, 2008, pp. 11–15.
  77. Ryan, R. and Deci, E., ‘Intrinsic and extrinsic motivations: Classic definitions and new directions.’ Contemporary Educational Psychology, 25(1), 2000, 54–67.
  78. Higgs, B., Polonsky, M., and Hollick, M., ‘Measuring expectations: Forecast vs. ideal expectations. Does it really matter?’ Journal of Retailing and Consumer Services, 12(1), 2005, 49–64.
  79. Sackl, A., et al., ‘Wireless vs. wireline shootout: How user expectations influence quality of experience.’ Fourth International Workshop on Quality of Multimedia Experience (QoMEX), 2012, pp. 148–149.
  80. Sackl, A. and Schatz, R., ‘Evaluating the impact of expectations on end-user quality perception.’ Fourth International Workshop on Perceptual Quality of Systems (PQS), 2013.
  81. Frijda, N., ‘Varieties of affect: Emotions and episodes, moods, and sentiments.’ In Ekman, P. and Davidson, R. (eds), The Nature of Emotions: Fundamental questions. Oxford University Press, New York, 1994, pp. 59–67.
  82. Reiter, U. and De Moor, K., ‘Content categorization based on implicit and explicit user feedback: Combining self-reports with EEG emotional state analysis.’ Fourth International Workshop on Quality of Multimedia Experience (QoMEX), 2012.
  83. Schleicher, R. and Antons, J.-N., ‘Evoking emotions and evaluating emotional impact.’ In Möller, S. and Raake, A. (eds), Quality of Experience: Advanced Concepts, Applications and Methods. Springer, Berlin, 2014.
  84. Antons, J.-N., et al., ‘Brain activity correlates of quality of experience.’ In Möller, S. and Raake, A. (eds), Quality of Experience: Advanced Concepts, Applications and Methods. Springer, Berlin, 2014.
  85. Arndt, S., et al., ‘Subjective quality ratings and physiological correlates of synthesized speech.’ Fifth International Workshop on Quality of Multimedia Experience (QoMEX), 2013, pp. 152–157.
  86. Arndt, S., et al., ‘Perception of low-quality videos analyzed by means of electroencephalography.’ Fourth International Workshop on Quality of Multimedia Experience (QoMEX), 2012, pp. 284–289.
  87. De Moor, K., ‘Are engineers from Mars and users from Venus? Bridging gaps in quality of experience research: Reflections on and experiences from an interdisciplinary journey.’ PhD thesis, Ghent University, 2012.
  88. Antons, J., Arndt, S., and Schleicher, R., ‘Effect of questionnaire order on ratings of perceived quality and experienced affect.’ Fourth International Workshop on Perceptual Quality of Systems (PQS), 2013.
  89. De Moor, K., et al., ‘Evaluating QoE by means of traditional and alternative measures: Results from an exploratory living room lab study on IPTV.’ Fourth International Workshop on Perceptual Quality of Systems (PQS), 2013.
  90. Reichl, P., ‘It's the ecosystem, stupid: Lessons from an anti-Copernican revolution of user-centric service quality in telecommunications.’ 6th International Conference on Developments in e-Systems Engineering (DeSE-13), December 2013.
  91. Kilkki, K., An Introduction to Communication Ecosystems. CreateSpace Independent Publishing Platform, Helsinki, 2012.

Acronyms

ETSI  European Telecommunications Standards Institute
HTTP  Hypertext Transfer Protocol
IETF  Internet Engineering Task Force
IF  Influencing Factor
IP  Internet Protocol
IPTV  IP Television
ITU  International Telecommunication Union
LTE-A  Long Term Evolution – Advanced
MOS  Mean Opinion Score
MPEG  Moving Picture Experts Group
MPEG-DASH  MPEG Dynamic Adaptive Streaming over HTTP
OSI  Open Systems Interconnection Model
OTT  Over The Top
POTS  Plain Old Telephone Service
PSQA  Pseudo-Subjective Quality Assessment
QoE  Quality of Experience
QoS  Quality of Service
RTP  Real-time Transport Protocol
SNR  Signal-to-Noise Ratio
TCP  Transmission Control Protocol
UDP  User Datagram Protocol
VGA  Video Graphics Array
VoIP  Voice over IP
