ISRAHCI 2016: THE FOURTH ISRAELI HUMAN-COMPUTER INTERACTION RESEARCH CONFERENCE
PROGRAM FOR THURSDAY, FEBRUARY 18TH


10:30-11:30 Session 3: Paper Session I
10:30
Examining Factors Influencing the Disruptiveness of Notifications in a Mobile Museum Context
SPEAKER: unknown

ABSTRACT. Ubiquitous computing environments continuously infer our context and proactively offer us context-aware services and information, suggested by notifications on our mobile devices. However, notifications come with a cost. They may interrupt the user's current task and be annoying in the wrong context. The challenge is how to notify the user about the availability of relevant services while minimizing the level of disruptiveness. Thus, an understanding of what affects the subjective perception of a notification's disruptiveness is needed. So far, most of the research on the disruptiveness of notifications has focused on stationary, task-oriented environments. In this study, we examine the effect of notifications in a distinctive leisure scenario: a museum visit. In two user studies conducted in a museum setting, participants used a context-aware mobile museum guide to receive information on various museum exhibits while periodically receiving notifications. We examined how the user's activity, the modality of the notification, and the message content affected the perceived level of disruption that the notifications created. We discuss our results in light of existing work in the desktop and mobile domains and provide a framework and recommendations for designing notifications for a mobile museum guide system.

10:45
DataSpoon: Overcoming Design Challenges in Tangible and Embedded Assistive Technologies
SPEAKER: unknown

ABSTRACT. The design of tangible and embedded assistive technologies poses unique challenges. We describe the challenges we encountered during the design of "DataSpoon", explain how we overcame them, and suggest design guidelines. DataSpoon is an instrumented spoon that monitors movement kinematics during self-feeding. Children with motor disorders often encounter difficulty mastering self-feeding. In order to treat them effectively, professional caregivers need to assess their movement kinematics. Currently, assessment is performed through observations and questionnaires. DataSpoon adds sensor-based data to this process. A validation study showed that data obtained from DataSpoon and from a 6-camera 3D motion capture system were similar. Our experience yielded three design guidelines: needs of both caregivers and children should be considered; distractions to direct caregiver-child interaction should be minimized; familiar-looking devices may alleviate concerns associated with unfamiliar technology.

11:00
KIP3: Robotic Companion as an External Cue to Students with ADHD
SPEAKER: unknown

ABSTRACT. We present the design and initial evaluation of Kip3, a social robotic device for students with ADHD that provides immediate feedback for inattention or impulsivity events. We designed a research platform comprising a tablet-based Continuous Performance Test (CPT) that can identify inattention and impulsivity events, and a socially expressive robotic device (Kip3) that provides feedback. We evaluated our platform with 10 students with ADHD in a within-subject user study, and report that 9 out of 10 participants felt that Kip3 helped them regain focus, but wondered whether it would be effective over time and how it would identify inattention in more complex situations outside the lab.

11:10
Steel-Sense: Integrating machine elements with sensors by Additive Manufacturing
SPEAKER: unknown

ABSTRACT. Many interactive devices use both machine elements and sensors, simultaneously but redundantly enabling and measuring the same physical function. We present Steel-Sense, an approach to integrate these two families of elements to create a new type of HCI design primitive. We leverage recent developments in 3D printing to embed sensing in metal structures that are otherwise difficult to equip with sensors, and present four design principles, implementing (1) an electronic switch integrated within a ball bearing; (2) a voltage divider within a gear; (3) a variable capacitor embedded in a hinge; and (4) a pressure sensor within a screw.

13:00-14:00 Session 5: Demos and Posters
13:00
Manners Matter: Trust in Robotic Peacekeepers
SPEAKER: unknown

ABSTRACT. The 'intuitive' trust people feel when encountering robots in public spaces is a key determinant of their interactions with the systems. To study this trust, we presented subjects with static images of a robot performing an access-control task, interacting with younger and older male and female civilians, and applying polite or impolite behavior. Our results showed strong effects of the robot's behavior. The age and gender of the people interacting with the robot had no significant effect on participants' impressions of the robot's attributes. This preliminary study shows that politeness may be a crucial determinant of people's perception of peacekeeping robots.

13:00
Maketec: A Makerspace as a Third Place for Children
SPEAKER: unknown

ABSTRACT. Makerspaces of various models are forming all around the world. We present a model and case study of the Maketec, a public drop-in makerspace for children, run by teens. The Maketec model is designed to promote making and socializing opportunities for girls and boys aged 9-14. It is based on three underlying principles: (1) "Low Floor/Wide Walls": specific construction kits and digital fabrication technologies that allow kids to invent and create with no prior knowledge or expertise; (2) "Unstructured Learning": teen mentors and minimal instructions that promote self-directed learning; and (3) "A Makerspace as a Third Place": the Maketec is managed by kids for kids in an effort to form a unique community of young makers. We present a preliminary study of four recurring visitors' experiences around these three principles and discuss our insights about the model.

13:00
Wayfinding Challenges Faced by People with Low Vision
SPEAKER: unknown

ABSTRACT. Low vision is pervasive but rarely studied. We sought to answer two research questions: (1) what challenges do low vision people face when wayfinding and (2) how do low vision people interact with map-based applications on smartphones? Using contextual inquiry, we observed 11 low vision people while they searched and walked to a pharmacy store in an unfamiliar environment. The task involved finding a store with and without using the smartphone, seeing and reading street signs, walking to the store and crossing streets. We found that participants faced many struggles in our wayfinding task, and in particular it was difficult for them to see street signs. Although many participants used their smartphone to find the pharmacy, accessibility on the phone was hard and confusing. We discuss the challenges of wayfinding for people with low vision, the inadequacies of assistive technology on the smartphone and the need for further research on systems that enhance visual information for wayfinding.

13:00
The Player is Chewing the Tablet: Towards a Systematic Interpretation of User Behavior in Animal-Computer Interaction
SPEAKER: unknown

ABSTRACT. There is an increasing demand for digital games developed for pets, in particular for dogs and cats. However, play interactions between animals and technological devices still remain uncharted territory for both the animal behavior and user-computer interaction communities. While there is a lot of anecdotal evidence of pets playing digital games, the nature of animal-computer play interactions is far from being understood. In this work-in-progress we address the problem of analyzing and interpreting user behavior in such interactions. We use ethograms, a tool from applied ethology: catalogs of typical behavior patterns. We propose some preliminary criteria for the classification of dog-tablet interactions and outline some directions for future research.

(This paper was presented at the WiP session at CHI Play 2015)

13:00
On the Peripheral Application of HMD Devices in Infantry Simulation
SPEAKER: unknown

ABSTRACT. The purpose of this paper is to present the results of an attempt carried out at the IDF Ground Forces Command Battle-Lab to integrate a Head Mounted Display (HMD) device as part of a peripheral-equipment simulator for infantry. The Battle-Lab is a research-oriented simulation environment, where combat scenarios with multiple human participants can be run to examine the effects that novel concepts or technologies could have on scenario outcomes.

Previously (Michael et al., 2014), an attempt was made at the Lab to evaluate an HMD's effectiveness as an exclusive display for infantry simulation. At that time, while the device tested was found to have a positive impact on a participant's motivation and spatial awareness, it was found lacking in visual fidelity and was responsible for an increased incidence of simulation sickness among its wearers.

As a result of the previous evaluation it was decided to proceed with the integration of the device, but only in supplementary peripheral simulators. These included a pair of binoculars made available to an infantry soldier for use concurrently with a standard flat-screen first-person infantry simulation.

However, given the device's reputation for causing simulation sickness, and our previous experience with the phenomenon, it was decided to monitor the participants' experience closely. This task was accomplished through a simple after-action self-review, supplemented by a more detailed daily debriefing with the Simulation Sickness Questionnaire.

This paper presents the results of this monitoring throughout a series of scenarios carried out at the Battle-Lab in 2014, conclusions drawn from the gathered data, and lessons learned from both building the HMD-integrated peripheral simulators and studying the simulation sickness associated with their use.

13:00
Lygo - Home is everywhere
SPEAKER: unknown

ABSTRACT. Lygo is a puzzle game with a unique exploration mechanic that encourages the player to experience a strange and dark universe. The game is a co-production of the joint MIT-Shenkar Purposeful Games Workshop that took place at MIT in Cambridge in the summer of 2015. Created by: Einat Daniel, Gregory Kogos, Eyal Stern, Miranda Cover. Guidance by: Professor Scot Osterweil, Dr. Vered Pnueli, Philip Tan.

Play the game: http://www.srslygames.com/standalone-games/lygo/
Game trailer: https://youtu.be/pisyovxBnG8

13:00
Intel RealSense Extension for Scratch
SPEAKER: unknown

ABSTRACT. Intel® RealSense™ Extension for Scratch introduces new capabilities to Scratch users: hand tracking, gesture recognition, face tracking, facial expressions, and voice control. All this with very simple Scratch blocks.

We will show a live product. It already exists online and is free to use for those who have an Intel RealSense 3D sensor.

Product page: http://intel.com/realsense/scratch

13:00
MeZoog
SPEAKER: unknown

ABSTRACT. The 'selfie' has become part of today's culture. Through mobile devices, self-photography, and constant updates on the web, we are focused mainly on ourselves and our own appearance. The goal of the MeZoog project is to offer a new interactive experience that evokes curiosity about and observation of the other, and to encourage openness to and acceptance of the other. MeZoog is an interactive system that brings together, in real time, two (or more) participants located in two distant places. A unique visual connection forms between the participants through partial exposure of the remote participant, conditioned on performing overlapping movements (synchronization of body posture and body motion). The connection with a remote participant, familiar or incidental, revealed only through overlap and synchronization, is surprising and engaging, and participants report a feeling of closeness and curiosity toward the other. The MeZoog system offers disconnection from the screen precisely through a screen, and connection with the other precisely through the self.

13:00
SynFlo: A Tangible Museum Exhibit for Exploring Bio-Design
SPEAKER: Orit Shaer

ABSTRACT. We present SynFlo, a tangible museum exhibit for exploring bio-design. SynFlo utilizes active and concrete tangible tokens to allow visitors to experience a playful bio-design activity through complex interactivity with digital biological creations. We developed and evaluated SynFlo in collaboration with the Tech Museum of Innovation. Findings from an observational study in the museum indicate that tangible and gestural interaction with active tokens can overcome confounders of biology and facilitate positive engagement and learning.

13:00
Towards Using Mobile, Head-Worn Displays in Cultural Heritage
SPEAKER: unknown

ABSTRACT. Augmented reality (AR) technology has the potential to enrich our daily lives in many aspects. One of them is the museum visit experience. Nowadays, state-of-the-art mobile museum visitor guides provide us with rich, personalized, context-aware information. However, these systems have one major drawback: they force the visitor to hold the guide and to look at its screen. Smart-glasses technology makes it possible to provide a wearable augmented reality display without the need to hold the guide and look at it, and without distracting the user from the real object. This paper presents work in progress: it describes initial steps towards the implementation of a head-worn display museum visitor guide, including the results of a users' requirements elicitation process, the implementation of an initial research prototype, and initial insights gathered during the process.

13:00
Speaker Recognition in a Multi-Participant Smartphone-Based System for a Robot Conversation Companion
SPEAKER: unknown

ABSTRACT. Different systems today appear to focus on recognizing a single speaker during a conversation. These systems usually rely on external, dedicated devices to perform the recognition. We present a system that uses mobile devices to recognize the speaker in a multi-participant conversation for a robot companion system. Following the recognition, the robot changes its gaze towards the relevant speaker.

14:00-15:00 Session 6: Paper Session II
14:00
The Role of Emotions Inherent in the Images in the Image Retrieval Process
SPEAKER: unknown

ABSTRACT. Images are among the most frequently used media objects on the internet. One of their important characteristics is the ability to evoke an emotional response in a viewer. Our research explores the role of emotions in image retrieval, and in particular the influence of the emotions inherent in images on the behavior of the user during the image retrieval process. Our study shows that although seekers rarely explicate emotions when articulating their search keywords, emotions inherent to the seeking task and retrieved images affect the user’s decision to select relevant images. Specifically, our findings suggest that (a) images with a positive emotional content are more likely to be chosen as relevant; (b) the emotional content of the chosen images is associated with the emotions evoked by the particular search task. Beyond the contribution to research on emotions in information retrieval, results from our study have important implications for designers of search interfaces. Namely, we recommend enhancing search interfaces with features that make it easy for users to explicate emotions (e.g. emoticons).

14:10
The Hybrid Bricolage – Bridging Parametric Design With Craft Through Algorithmic Modularity
SPEAKER: unknown

ABSTRACT. The computational design space, unlimited by its virtual freedom, differs from traditional craft, which is bounded by a fixed set of given materials. Building upon previous works that seek ways of bridging these creative domains, we study how to introduce parametric design tools to craftspersons. Our hypothesis is that the arrangement of parametric design in modular representation, in the form of a catalog, can assist makers unfamiliar with this practice. We evaluate this assumption in the realm of bag design, through a Honeycomb Smocking Pattern Catalog and custom Computer-Aided Smocking (CAS) design software.

14:25
Writing in a Digital World: Self-Correction While Typing in Younger and Older Adults
SPEAKER: unknown

ABSTRACT. This study examined how younger and older adults approach simple and complex computerized writing tasks. Nineteen younger adults (age range 21–31, mean age 26.1) and 19 older adults (age range 65–83, mean age 72.1) participated in the study. Typing speed, quantitative measures of outcome and process, and self-corrections were recorded. Younger adults spent a lower share of their time on actual typing, and demonstrated more prevalent use of delete keys than did older adults. Within the older group, there was no correlation between the total time spent on the entire task and the number of corrections, but increased typing speed was related to more errors. The results suggest that the approach to the task was different across age groups, either because of age or because of cohort effects. We discuss the interplay of speed and accuracy with regard to digital writing, and its implications for the design of human-computer interactions.

14:40
Augmented Reality for Older Adults: The Effect of Age on the Use of AR Systems
SPEAKER: unknown

ABSTRACT. The older population is growing rapidly worldwide. Technology may offer benefits for older adults in maintaining their independence and connection to society. However, many older adults have difficulties adopting, learning, and using new technologies. This could widen the digital divide and has social and economic importance. In our study we investigate how older adults interact with and use augmented reality (AR) technology by conducting a controlled user study in which we compared how older adults and younger participants interacted with a dedicated AR interface. Our aim is to understand how to design better AR interfaces that suit older adults' needs and abilities.

15:00-15:40 Session 7: Case Studies
15:00
Gesture Control for Process Simulate in Volkswagen
SPEAKER: unknown

ABSTRACT. This paper describes the process of adding intuitive gesture control to a computerized 3D industrial planning environment using the Leap Motion controller. The software in question is a complex 3D tool for planning and designing an industrial process. Factory workers who would like to deliver feedback before the installation of the industrial cell should be able to explore the 3D environment in an intuitive way, without the use of traditional HMI controls, which are not suited for them.

15:10
UX for NUI Email Client
SPEAKER: unknown

ABSTRACT. Intel approached Succary Studio to present a concept that features, highlights, and sheds light on various user-sensing technologies that are internally labeled “Perceptual Computing” and are generally known as NUI (Natural User Interface): namely face, distance, voice, and gesture recognition, and eye tracking. Rather than focusing on each one separately, the brief was to incorporate them as needed into one compelling story. Through the use of this blend of technologies, our project offers an interaction model that relies on several human faculties in tandem, specifically a new way of using the relation between eye and hand.

15:20
Touch, Measure, Learn: Applying Lean UX Principles to Improve Touch Gestures
SPEAKER: Sharin Regev

ABSTRACT. Since the beginning of the mobile revolution, human interface guidelines have been rapidly evolving, and software companies struggle to keep their mobile applications up-to-date. The Autodesk® AutoCAD® 360 mobile application is a veteran application with a unique and complex gesture set that was defined long before touch interface guidelines matured. In this case study, Lean UX methodologies were employed to simplify and improve the entire gesture set of AutoCAD 360 mobile. Touch gesture hypotheses were tested using unmoderated remote mobile sessions within Build-Measure-Learn iterative cycles. The gesture set that allowed users the most conscious control of their actions while using simple touch gestures was found to be the most usable, both qualitatively and quantitatively.

16:00-16:30 Session 8: Short papers
16:00
Collective Problem-Solving: the Role of Self-Efficacy, Skill, and Prior Knowledge
SPEAKER: unknown

ABSTRACT. Self-efficacy is essential to learning, but what happens when learning results from a collective process? What is the role of individual self-efficacy in collective problem solving? This research examines the manifestation of self-efficacy in prediction markets that are configured as collective problem-solving platforms, and whether the self-efficacy of traders affects the collective outcome. Prediction markets are collective-intelligence platforms that use a financial-markets mechanism to combine the knowledge and opinions of a group of people. Traders express their opinions or knowledge by buying and selling “stocks” related to questions or events. The collective outcome is derived from the final price of the stocks. Self-efficacy, one's belief in one's ability to act in a manner that leads to success, is known to affect personal performance in many domains. To date, its manifestation in computer-mediated collaborative environments and its effect on the collective outcome have not been studied. In a controlled experiment, 632 participants in 47 markets traded a solution to a complex problem, a naïve framing of the knapsack problem. Contrary to earlier research, we find that technical and functional self-efficacy perceptions are indistinguishable, probably due to a focus on outcomes rather than on resources. Further, the results demonstrate that prediction markets are an effective collective problem-solving platform that correctly aggregates individual knowledge and is resilient to traders' self-efficacy.

16:10
How interactive is a semantic network? Concept maps and discourse in knowledge communities
SPEAKER: unknown

ABSTRACT. Computer-mediated learning needs to be social too. Interactivity is a central construct for collaborative knowledge construction in online communities. We present an operationalized framework for measuring interactivity in online discussions, based on our view of interactivity as a socio-constructivist process. We hypothesize that the traditional design for online discussion platforms, with linear, chronologically threaded forums and bulletin boards, results in less interactive behavioral patterns. We propose a semantic network topology for online discussions, which in turn reflects a socio-constructivist process. To that end, we developed Ligilo, an online discussion platform in which each discussion contribution and content item is expressed as a node in a semantic network of posts. We describe a field study comparing interactivity in traditional threaded discussion and in Ligilo's semantic, network-based discussion. Initial results indicate higher interactivity in content creation patterns, suggesting learning, motivation, and sustainability for the discussion and community.

16:20
Modeling Members’ Behavior in Online Communities
SPEAKER: unknown

ABSTRACT. With the increasing popularity of social networking websites, the need for a deep understanding of human behavior in computer-mediated communication spaces arises. This research provides a different perspective on analyzing members' behavior in online communities. Social network analysis offers promising potential for analyzing human-human interactions in online communities. The contribution of this paper to the HCI community is the development of a model that contributes to understanding members' interactions in an online environment. We describe measurement and analysis strategies for identifying emergent roles (roles) in online communities and demonstrate this process on 320 different forums. The results show evidence for the following hypotheses: (i) there is a positive correlation in activity level between members' formal roles and emergent roles; (ii) members' behavior is dynamic and can be represented by a sequence of emergent roles; (iii) influential members can be spotted by similar social behavior that is expressed by similar transitions between roles.