3-Minute Madness Session
Date/time: Thu, 6 June, 16:45-17:45
1. Pablo Pérez: Quality of Experience in a (Virtual) Escape Room
Abstract: HELP! The alien has escaped and we need you to solve the three mixed reality puzzles of the FuturESCape Room to activate the containment system. It will save your life, and in return we will collect biometric and behavioral data from you to evaluate the Quality of Experience in Distributed Reality without you even noticing…
Escape room games are designed to be played by anyone, not just “gamers”. This gives us population representativeness. Users focus on the experience rather than on rating its quality. This gives us ecological validity. Within the game we can modify the visual quality or the types of interaction (using your hands vs. using VR controllers) and analyze the impact on QoE in two ways: implicitly, by monitoring the time taken to solve certain puzzles or physiological signals such as ECG; and explicitly, by collecting user feedback. This gives us data.
Can you imagine having people willing to pay to participate in your subjective assessment experiment? This is the future we are traveling to.
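As a rough illustration of the implicit/explicit split described above (all names and values below are hypothetical), a single puzzle trial could bundle the implicit measures (solve time, physiological samples) with the explicit post-puzzle rating:

```python
# A minimal sketch of logging one escape-room puzzle trial; identifiers,
# fields and values are assumptions for illustration, not the authors' design.
import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PuzzleTrial:
    puzzle_id: str
    condition: str                          # e.g. "hands" vs. "controllers"
    start: float = field(default_factory=time.monotonic)
    solve_time_s: Optional[float] = None    # implicit measure: task duration
    ecg_samples: List[float] = field(default_factory=list)  # implicit: physiology
    rating: Optional[int] = None            # explicit measure: post-puzzle feedback

    def solved(self) -> None:
        """Record the implicit timing measure when the puzzle is completed."""
        self.solve_time_s = time.monotonic() - self.start

# Usage: start a trial, stream physiological samples during play, close with a rating.
trial = PuzzleTrial(puzzle_id="airlock", condition="hands")
trial.ecg_samples.extend([72.0, 75.5, 81.2])   # placeholder heart-rate values
trial.solved()
trial.rating = 4                               # explicit feedback after the puzzle
print(trial)
```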
2. Stefan Wunderer: Sensitors
Abstract: As we all know, Quality of Experience is the degree of delight or annoyance of a person whose experiencing involves an application, service or system. In mobile communication systems this has so far been quite easy: most people experiencing mobile communications use a smartphone, so there is a simple one-to-one connection between the technical equipment used and the person’s QoE. In 5G, however, a dominant field of applications will run directly on sensors, machines and other industrial equipment (often referred to as IoT). Any QoE evaluation would first need a person experiencing this 5G IoT application, service or system.
Instead of a cumbersome search for the ‘end user’, we decided to enable the ‘things’ to undergo experiences like an end user. The necessary stream of perceptions consists of feelings, sensory percepts and concepts. The sensors widely used in IoT have no problem with sensory percepts (being sensors per se), nor with pragmatic aspects (concepts). The difficult part is the feelings, which the poor sensors don’t have.
In a pioneering development, we implemented feelings in them and called them sensitive sensors, or sensitors. In this session our first prototypes are introduced, and their astonishing QoE evaluations are presented and briefly discussed. The audience is then asked to brainstorm about further sensitive machines that are able to evaluate their own Quality of Experience.
3. Aljoscha Burchardt: Citizenship for Subject, Predicate & Co.!
Abstract: Observations of QoE/UX are bound by the resolution of the instruments used. So far, the linguistic content used in experiments seems to be treated as more or less a constant that cannot be further decomposed other than by implicit means such as questionnaires involving subjects.
As crazy as it seems, more than half a century ago the scientist Noam Chomsky came up with the idea that linguistic entities follow a structure (syntax) that is intertwined with their meaning (semantics). A decade later, Austin, and later Searle and others, argued that there is another layer (pragmatics) that connects the meaning with the communicative goals of the (language) users and the world.
Today, it is still difficult to observe, model, and process syntax and semantics, let alone pragmatics. Still, we believe that the respective entities deserve the status of objects of study in the QoE/UX world. We believe that there is a connection between, e.g., the quality of experience and the quality of language. Therefore, we plead for linguistic entities to be made first-class citizens!
4. Matthias Hirth: Quality of cRowdsourced subjEctiVe and Interactive Evaluation of Written contentS (Quality of REVIEWS)
Abstract: Crowdsourcing has become a valuable tool for collecting subjective ratings of multiple media types at a large scale. However, multiple challenges remain unsolved and subject to ongoing research, including biases introduced by culture or expectations, the weighting of expert ratings against the ratings of the majority of naive crowdworkers, and the effects of different rating scales. Additionally, it is unclear what workers’ subjective ratings would look like if researchers broke down the barrier between contributors and allowed them to collaborate or discuss their perceptions instead of providing independent ratings.
This talk aims to raise awareness of these challenges and to stimulate further research in this direction. During the talk, we will use written contents instead of audiovisual media to illustrate the current issues with (fictional) examples. This further allows us to show some similarities with well-established systems like EDAS or EasyChair, which already enable the cRowdsourced subjEctiVe and Interactive Evaluation of Written contentS (REVIEWS) outside the QoE domain.
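To make the weighting challenge concrete, here is a toy sketch (fictional ratings, and an assumed expert_weight parameter) of how expert and naive crowdworker ratings might be combined; picking the weight is exactly the open question the abstract raises:

```python
# A toy sketch of expert-vs-naive rating aggregation; the weighting scheme
# and all values are assumptions for illustration, not an established method.
from statistics import fmean

def weighted_mos(ratings, expert_weight=2.0):
    """Aggregate (rating, is_expert) pairs into a weighted mean opinion score.

    expert_weight is an assumed free parameter; choosing it is precisely the
    unsolved weighting problem described above.
    """
    total = sum(expert_weight if is_expert else 1.0 for _, is_expert in ratings)
    return sum(r * (expert_weight if is_expert else 1.0)
               for r, is_expert in ratings) / total

reviews = [(4, True), (2, False), (3, False), (5, True)]   # fictional ratings
print(f"naive mean:          {fmean(r for r, _ in reviews):.2f}")
print(f"expert-weighted MOS: {weighted_mos(reviews):.2f}")
```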
5. Yassine Bakhti: Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification
Abstract: Deep neural networks (DNNs) have recently achieved state-of-the-art performance and brought significant progress to many machine learning tasks, such as image classification, speech processing and natural language processing. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. For instance, in the image classification domain, adding small imperceptible perturbations to the input image is sufficient to fool the DNN and cause misclassification. The perturbed image, called an adversarial example, should be visually as close as possible to the original image. However, the works proposed in the literature for generating adversarial examples have used Lp norms (L0, L2 and L∞) as distance metrics to quantify the similarity between the original image and the adversarial example. Nonetheless, Lp norms do not correlate well with human judgment, making them unsuitable for reliably assessing the perceptual similarity/fidelity of adversarial examples. In this Madness Session idea, we want to emphasize the importance of using perceptual metrics, or even developing a new one, in order to craft imperceptible adversarial examples that completely fool the deep neural network. We believe that this visual quality problem can be tackled by the QoMEX community, which is involved in both signal processing and image quality evaluation as perceived by the human visual system.
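As a minimal sketch of the gap described above, the snippet below contrasts Lp distances with SSIM, used here purely as an illustrative stand-in for the perceptual metrics the abstract calls for; the image and the FGSM-style perturbation are synthetic:

```python
# Contrasting Lp distances with one perceptual metric (SSIM) on a perturbed
# image. The data is synthetic; SSIM is an illustrative choice, not the
# abstract's proposal.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def compare(original: np.ndarray, adversarial: np.ndarray) -> dict:
    """Report Lp distances and SSIM between an image and its perturbed version."""
    diff = (adversarial - original).ravel()
    return {
        "L0":   int(np.count_nonzero(diff)),   # number of pixels changed
        "L2":   float(np.linalg.norm(diff)),   # overall perturbation energy
        "Linf": float(np.abs(diff).max()),     # largest single-pixel change
        "SSIM": float(ssim(original, adversarial, data_range=1.0)),
    }

# Toy example: a small sign-pattern perturbation, FGSM-style, on a random image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
adv = np.clip(img + 0.03 * np.sign(rng.standard_normal((64, 64))), 0.0, 1.0)
print(compare(img, adv))
```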
6. Martin Varela: Coming full circle
Abstract: Despite different opinions on exactly when, where and how QoE came to be as a discipline, it is undeniably linked to the notion of Quality of Service. Lately, partly due to my current job (quality monitoring for WebRTC platforms), I’ve been thinking about quality at the systems level. QoE is defined in terms of the user, and so to speak about systems-level QoE is, at best, an abuse of language and, at worst, a deep conceptual misunderstanding. However, the intent when we say “service-level QoE” seems clear enough: we want to understand how users, as an aggregate, perceive the quality of our application or service. When looking at this, we immediately find issues related to time, the number of users, multiple sessions, etc. If, in a way, QoE has been seen (at least by some actors) as “user-level QoS”, I would like to invite the community to try to think of “Quality of Service” as “service-level QoE”. That is, how can we use QoE estimates for individual users, at short time scales (which is a good first-order approximation of what most QoE models give us), to better understand the overall quality of a given service?
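As one possible starting point (all names, statistics and thresholds below are assumptions, not an established method), per-user, short-time-scale MOS estimates could be aggregated into service-level statistics along these lines:

```python
# A sketch of aggregating per-session, short-window MOS estimates into
# service-level quality; the chosen statistics and threshold are assumptions.
import numpy as np

def service_level_quality(session_mos, poor_threshold=3.0):
    """Summarize per-session, short-time-scale MOS estimates at service level.

    session_mos maps a session id to its sequence of windowed MOS estimates.
    """
    per_session = np.array([np.mean(mos) for mos in session_mos.values()])
    return {
        "mean_mos":   float(per_session.mean()),
        "p10_mos":    float(np.percentile(per_session, 10)),  # worst-affected tail
        "share_poor": float((per_session < poor_threshold).mean()),
    }

# Fictional sessions, each with a few short-window MOS estimates.
sessions = {"s1": [4.1, 3.9, 4.3], "s2": [2.4, 2.8], "s3": [3.6, 3.5]}
print(service_level_quality(sessions))
```

The interesting design question this raises is exactly the one posed above: whether a mean, a tail percentile, or a threshold share best captures how users, as an aggregate, perceive the service.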