BIGVID2017: BIG VIDEO SPRINT 2017
PROGRAM FOR FRIDAY, NOVEMBER 24TH

08:30-09:30 Session 13: Keynote 4
Chair:
Paul McIlvenny (Aalborg University, Denmark)
Location: CREATE Room 4.231
08:30
Robert Willim (Lund University, Sweden)
Video and Art Probing
SPEAKER: Robert Willim

ABSTRACT. Video can be used in a mixed practice of art and ethnographic cultural analysis, and as part of an open-ended explorative process of art probing. The practice of art probing is based on an extended, open-ended exploratory process that goes beyond the scope of specific research or art projects. Art probes have a double function: firstly, they can instil inspiration and serve as possible points of departure for research; secondly, they can be used to communicate and twist scientific concepts and arguments beyond the scope of academic worlds. The talk will be combined with audiovisual showcasing of art probes developed over the last few years. The art probes shown will be video works that have been crafted to move between the expressive and the subtle. Multi-layered and affectively dense audiovisual compositions have been combined with statements to spark associations, questions, reflections and emotions. The art probes have been used in collaboration with different stakeholders, to present specific concepts, or as tools for elicitation. The aim of this talk and presentation is to evoke inspiration for further mixes of art and ethnographic cultural analysis, and to advocate that artistic and scientific output can be seen as companion pieces in an evolving set of provisional renditions.

09:30-10:30 Session 14: Presentations 5
Chair:
Lorenza Mondada (University of Basel, Switzerland)
Location: CREATE Room 4.231
09:30
Pentti Haddington (University of Oulu, Finland)
Antti Siipo (University of Oulu, Finland)
Sylvaine Tuncer (University of Oulu, Finland)
Capturing Video of Real-time and Co-present Interaction in Immersive Virtual Reality: Technological and Methodological Questions

ABSTRACT. In our presentation, we report on the experiences of video-recording real-time and co-present interaction in an immersive virtual reality environment. The recordings were collected from a virtual and online environment called RecRoom, a multiplayer gaming environment with a series of mini games, such as dodgeball, paintball and disc golf. The recordings were made in the LeaF laboratory, an infrastructure located at the University of Oulu that provides technological and methodological support for research in learning, interaction and communication (http://www.oulu.fi/leaf-eng/). Our research draws on the methodological principles of ethnomethodological conversation analysis (EMCA), which means that ‘interaction’ does not refer to the mere co-presence of participants but to their mutual involvement and joint experience as evidenced in their moment-by-moment production and interactional coordination of activities in and with immersive virtual reality (e.g. Heath & Luff 2000). The approach requires access to high-quality audio and video covering the participants’ talk, the environmental sounds, and visible conduct and interactions as they occurred in real time, which in turn informed and guided the video recordings and the representation of the materials. In the study, six pairs (12 volunteers in total) were recorded. Each volunteer wore a headset with an integrated microphone and earphones, and hand-held controllers (HTC Vive). Video materials were recorded from two sources. First, the players’ real-life actions in the lab were recorded with a 360° video recording system called MORE (Keskinarkaus et al. 2014). Second, both players’ views in the virtual reality environment were captured. In total, 6 hours and 20 minutes of video materials were collected. In the presentation, we will 1) introduce the recording technologies used in the study; 2) describe the solutions involved in setting up the recording, doing the recordings, and editing/synchronising the materials; 3) discuss the challenges and solutions for transcribing and representing the complex materials for analytic purposes (e.g. embodiment, spatiality, mobility in virtual reality); and 4) reflect on the methodological questions involved in analysing real-time and naturally occurring interactions in virtual reality (e.g. Hindmarsh et al. 2000, Hindmarsh et al. 2006).
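
The presentation will cover the authors' own editing and synchronising solutions; as background, the core alignment step between two independently started recordings can be sketched in a few lines. Everything in the sketch below is an assumption for illustration (the file names, the timestamps, and the use of a shared sync event with ffmpeg); it is not taken from the MORE system or the presentation.

```python
import subprocess

# Hypothetical timestamps (in seconds) at which a shared sync event
# (e.g. a clap that is audible in the lab recording and visible as a
# controller gesture in the VR capture) occurs in each file. These
# would be located manually or via audio cross-correlation.
SYNC_IN_LAB_360 = 12.48     # sync event in the 360-degree lab recording
SYNC_IN_VR_CAPTURE = 3.12   # the same event in the VR screen capture


def trim_head(src: str, dst: str, offset_s: float) -> None:
    """Cut offset_s seconds from the start of src so that both
    recordings begin at the same moment. Stream copy (-c copy) avoids
    re-encoding but cuts at the nearest keyframe; re-encode instead if
    frame-accurate alignment is required."""
    subprocess.run(
        ["ffmpeg", "-ss", f"{offset_s:.3f}", "-i", src, "-c", "copy", dst],
        check=True,
    )


# The lab recording started earlier, so trim its head by the difference.
trim_head("lab_360.mp4", "lab_360_synced.mp4",
          SYNC_IN_LAB_360 - SYNC_IN_VR_CAPTURE)
```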

References

Heath, C. and Luff, P. (2000). Technology in Action. Cambridge: Cambridge University Press.

Hindmarsh, J., Fraser, M., Heath, C., Benford, S. & Greenhalgh, C. (2000). Object-focused interaction in collaborative virtual environments. ACM Transactions on Computer-Human Interaction 7(4), 477-509.

Hindmarsh, J., Heath, C. & Fraser, M. (2006). (Im)materiality, virtual reality and interaction: grounding the ‘virtual’ in studies of technology in action. The Sociological Review 54(4), 795-817.

Keskinarkaus, A., Huttunen, S., Siipo, A., Holappa, J., Laszlo, M., Juuso, I., Väyrynen, E., Heikkilä, J., Lehtihalmes, M., Seppänen, T. and Laukka, S. (2014). MORE – a multimodal observation and analysis system for social interaction research. Multimedia Tools and Applications, June: 1-29.

10:00
Max Eckardt (SDU, Kolding, Denmark)
Kristian Mortensen (SDU, Kolding, Denmark)
Johannes Wagner (SDU, Kolding, Denmark)
Rainar Rye Larsen (SDU, Kolding, Denmark)
Capturing, Annotating and Reflecting Video Footage

ABSTRACT. We would be happy to contribute to several panels and key areas. We are working on the following topics:

Key area 1: Capture, storage, archiving and access of enhanced digital video

For several years now we have shared video and audio footage, with 1, 2, 3 or more participants and Jefferson-standard transcriptions, on the open net at talkbank.org/CABank (English data) and samtalebank.talkbank.org (Danish). These recordings are traditional one-camera video or audio recordings from the 1960s onwards. These early corpora are small but are annotated with very high-quality transcriptions. We are currently part of a project to establish a large Danish database of educational data.

Over the last few years we have refined multiple-camera recordings, capturing video footage with up to 16 cameras and 10 audio recorders of large-scale multiparty interactions in different decentralized educational environments (forklift training, a large-scale simulation game). We are interested in discussing the strengths and weaknesses of the ‘objective’ camera compared to ‘subjective’ footage that is provided by the participants of the interaction and shows their perspective.

Key area 2: Visualization, transformation and presentation

We are interested in developing practices for video annotation which are less logocentric than existing notations. The Jefferson standard was developed primarily to transcribe talk and has been the model for a variety of modifications that relate embodied behavior to talk (Nevile, 2015). Recently, Mondada’s conventions for multimodal transcription (2014) have provided a standard that is widely followed. In our view, the issue remains how to annotate video where talk proper is not the primary resource through which sense-making is achieved. Forklift truck drivers, for example, mainly move payloads in warehouses, and only once in a while do they talk to each other: to shout a warning, to ask for help, or to chitchat with other drivers (Nevile & Wagner, in press; Mortensen & Wagner, in preparation). In these and other cases, the embodied action is not designed as a pre-talk activity to, e.g., create a joint participation framework; rather, action and movement are primary to the talk. Our interest is to combine action tagging with visual trace notations, theatrical replay and transcription (and possibly other notations) to arrive at much thicker notation forms than those currently in use.
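
As a rough illustration of what action tagging combined with visual trace notation could look like as data, consider the following sketch. It is our illustration rather than the authors' notation system, and every field name is an assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# A 2D point on a floor plan, e.g. warehouse coordinates in metres.
Point = Tuple[float, float]


@dataclass
class ActionTag:
    """One embodied action, annotated independently of any talk."""
    actor: str                  # e.g. "driver_3"
    action: str                 # e.g. "reverse_with_payload"
    start: float                # onset, in seconds of video time
    end: float                  # offset, in seconds of video time
    trace: List[Point] = field(default_factory=list)  # movement path
    talk: Optional[str] = None  # Jefferson-style transcript, if any


# A forklift manoeuvre with no accompanying talk at all: the action
# record stands on its own instead of hanging off a turn at talk.
reverse = ActionTag(
    actor="driver_3",
    action="reverse_with_payload",
    start=132.4,
    end=141.0,
    trace=[(4.0, 2.5), (3.1, 4.8), (1.9, 6.2)],
)
```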

A different interest in this key area is the development of a tangible video player for marking video sequences in large corpora for which no annotation has yet been provided. Before the conference, we will study emerging practices of users working with the player.

Key area 3: Collaboration and sharing

We are interested in bringing specialist knowledge into the analysis of embodied video transcription. We have worked with experiments in which video footage of embodied learning has been reflected on and commented on by different ‘body’ specialists, such as theatre practitioners, dance instructors and physiotherapists. These experiments attempt to enhance the participant perspective and the perspective of the analyst with reflections from specialists who are neither participants nor analysts.

10:30-11:00 Tea & Coffee Break
11:00-12:30 Session 15: Presentations 6
Chair:
Tobias Boelt Back (Aalborg University, Denmark)
Location: CREATE Room 4.231
11:00
Bo T. Christensen (Copenhagen Business School, Denmark)
Sille Julie J. Abildgaard (Copenhagen Business School, Denmark)
Workshop Session: Sharing Video Data Across Disciplines

ABSTRACT. We are proposing a workshop around the theme of sharing video data for cross-disciplinary research as part of Big Video Sprint 2017. The workshop brings together practitioners and researchers to spend focused time on the topic of shareable video-based datasets. We will jointly use this workshop to share previous experiences with sharing, using or re-using video data for different purposes and between different fields of interest, before moving on to discussing the challenges and opportunities that video data offers. The purpose is to discuss how to make video-based datasets shareable and usable for research across disciplines and interests.

The workshop focuses on the consequences and impact of sharing video data across disciplines – an area which has so far been largely unexplored. Video data does not offer infinite flexibility in terms of the analytical approaches it affords, but it can facilitate comprehension and discussion of results that would fall outside the normal theoretical lens of the individual researcher. Video-based shared datasets are still rare in the humanities and social sciences. Nonetheless, global trends towards Big Data and Open Science indicate that sharing video data holds substantial research potential, as illustrated by the ‘first-mover’ case of DTRS11 described below.

We will discuss questions such as:
- What does video data require (as a minimum) to be usable by a variety of disciplines?
- Which analytical opportunities and limitations does a shareable video dataset afford?
- How can video data be collected without a field-specific research purpose?
- How “open” can video data be without losing its value?

The workshop will touch upon and contribute to the following conference themes:
- Enhanced qualitative video data collection methods.
- Video data collection in extreme situations and complex settings.
- Novel ways to visualize and analyze complex qualitative datasets.
- New modes for dissemination, presentation and publication.
- Theoretical and methodological reflections on data collection and transcription practices.

The Design Thinking Research Symposium 11 (DTRS11) serves as an illustrative case, which involved sharing video data of design activity in natural settings with a large group of academics from a variety of disciplines.

DTRS11 was centered on video recordings of real-life design processes in an organizational setting. We followed a professional design team solving a design task for a worldwide manufacturer in the automotive industry. Video data was collected with the same team over 4 months, tracing the natural course of the design process. Once a large corpus of video data (>150 hours) had been collected, we evaluated all the material, discussing which fields of inquiry might be pursued and which analytical matters could form a focus for research. We sampled sessions from different stages of the design process, compiling a dataset designed to provide multiple entry points for analysis and allowing researchers a wide range of analytical options. The final shared dataset consisted of more than 15 hours of video, including transcriptions. A total of 28 international research teams jointly analyzed and published on the data. Several take-home points may be drawn from the DTRS11 case, and during the workshop we will highlight issues related to analysis, collaboration, and methodology.
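
To make the idea of a shareable, multi-entry-point dataset concrete, the sketch below shows the kind of session-level metadata such a dataset might carry. None of the field names are taken from DTRS11; they are illustrative assumptions about the minimum a cross-disciplinary re-user would need.

```python
import json

# A hypothetical manifest for one session of a shared video dataset.
# None of these field names come from DTRS11; they merely illustrate
# the minimal session-level metadata that lets researchers from
# different disciplines find their own entry points.
session = {
    "session_id": "s07",
    "stage": "concept development",   # position in the design process
    "duration_minutes": 62,
    "files": {
        "video": "s07_room_cam.mp4",
        "transcript": "s07_transcript.txt",
    },
    "participants": [
        {"id": "p1", "role": "design lead"},
        {"id": "p2", "role": "designer"},
    ],
    "consent": {"reuse": "research-only", "identifiable_faces": True},
}

print(json.dumps(session, indent=2))
```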

11:30
Hugo Huurdeman (University of Oslo, Norway)
Winoe Bhikharie (Vrije Universiteit Amsterdam, Netherlands)
Anton Eliëns (Vrije Universiteit Amsterdam, Netherlands)
XIMPEL: Ten Years of Immersive Media Experiences

ABSTRACT. Over the past ten years, online video has been transformed from a mere novelty into one of the most popular pastimes on the internet. While online video platforms such as YouTube provide countless possibilities for disseminating traditional video, they lack options for creating deep, interactive and immersive experiences.

To provide this immersion, we initiated XIMPEL, the eXtensible Interactive Media Player for Entertainment and Learning. Using XIMPEL, users can construct highly interactive narratives involving video, images, text and other media elements. Unlike various short-lived tools which have abruptly appeared and disappeared, XIMPEL has stood the test of time: it has now existed for ten years. Through constant development, it has evolved along with continuous changes in the online environment and infrastructure.

During the ten years of its existence, XIMPEL has been utilized in a wide variety of settings, for projects involving interactive media between storytelling and gameplay, at the VU University Amsterdam’s Faculty of Sciences, by first-year students as well as advanced master’s students. Projects have ranged from interactive detective stories and campaigns against noise to guided tours and educational games, on topics such as mathematics and environmental awareness. XIMPEL’s declarative XML format proved to be an efficient and flexible tool for developing applications, not only for computer science and information technology students but also for students from cognitive psychology and the humanities. The XIMPEL framework has also been applied in professional settings, for instance in assessment training and for exhibitions at the University of Oslo’s Science Library.
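
To give a flavour of the declarative format, the sketch below builds a tiny branching playlist. The element and attribute names approximate XIMPEL's subject/media structure but are illustrative assumptions and should be checked against the framework's actual schema.

```python
import xml.etree.ElementTree as ET

# Element and attribute names here approximate XIMPEL's documented
# subject/media playlist structure, but they are written for
# illustration only and should be checked against the real schema.
ximpel = ET.Element("ximpel")
playlist = ET.SubElement(ximpel, "playlist")

# Each <subject> is one node of the branching narrative.
intro = ET.SubElement(playlist, "subject", id="intro")
intro_media = ET.SubElement(intro, "media")
ET.SubElement(intro_media, "video", src="intro.mp4")
# A clickable overlay that branches the story to another subject.
ET.SubElement(intro_media, "overlay", x="40", y="300", leadsTo="clue_room")

clue = ET.SubElement(playlist, "subject", id="clue_room")
clue_media = ET.SubElement(clue, "media")
ET.SubElement(clue_media, "video", src="clue_room.mp4")

print(ET.tostring(ximpel, encoding="unicode"))
```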

Our presentation provides a perspective on the evolution and usage of the XIMPEL framework during the past ten years. We also shed light on how XIMPEL may be used to enable new ways to disseminate, present and publish video-based research. Finally, we aim to discuss the value of enhanced interactivity and transparency in disseminating video-based research online.

12:00
Jaap Blom (Netherlands Institute for Sound and Vision, Netherlands)
Marijn Koolen (Huygens Institute for the History of the Netherlands, Netherlands)
Liliana Melgar Estrada (University of Amsterdam, Netherlands)
Peter Boot (Huygens Institute for the History of the Netherlands, Netherlands)
Ronald Dekker (Huygens Institute for the History of the Netherlands, Netherlands)
Christian Gosvig Olesen (University of Amsterdam, Netherlands)
Susan Aasman (University of Groningen, Netherlands)
Norah Karrouche (Erasmus University Rotterdam, Netherlands)
A Demonstration of Scholarly Web Annotation Support Using the W3C Annotation Data Model and RDFa

ABSTRACT. Many video annotation tools used by scholars are standalone tools which have their own annotation formats and provide limited possibilities for importing and exporting annotations, sharing them with collaborators and reusing them across applications. The recent W3C Web Annotation standard provides an open framework for interoperable annotation of any type of resource (Sanderson et al., 2017). In the case of audio and video (AV), annotations can refer to URLs of AV resources, with temporal or frame-based selectors to identify specific segments. A remaining hurdle is for annotation tools to understand more about the type of fragment of the AV material being annotated (e.g., whether the segment marks a formal division, such as a “shot”, or a content structure, such as a “news item”). Making these semantics explicit is important for understanding how the annotations are related to each other, how they relate to the AV resource they annotate, which specific parts they target, and how the AV resources are related to each other.
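
To make the model concrete, the sketch below shows a minimal W3C Web Annotation targeting a temporal segment of a video, following the Recommendation's FragmentSelector pattern (Sanderson et al., 2017); the video URL and body text are invented for illustration.

```python
import json

# A minimal annotation in the W3C Web Annotation model targeting
# seconds 10-25 of a video via a FragmentSelector. The video URL and
# the body text are invented for illustration.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "purpose": "describing",
        "value": "News item: opening headlines",
    },
    "target": {
        "source": "http://example.org/broadcast.mp4",
        "selector": {
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            # W3C Media Fragments syntax: start,end in seconds
            "value": "t=10,25",
        },
    },
}

print(json.dumps(annotation, indent=2))
```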

We first demonstrate three projects in which the W3C annotation model has been used for AV sources: LinkedTV (Blom 2014), The Mind of the Universe (1), and ArtTube (2). Then we demonstrate a solution that implements the W3C Web Annotation data model in a generic approach in which a content provider can serve AV materials via the web in a presentation layer that embeds semantic information about the AV stream, which annotation clients can access and use for intelligent reasoning (Boot et al. 2017; Koolen et al. 2017; Melgar et al. 2016; Melgar et al. 2017a, 2017b). This implementation is being done in the context of the CLARIAH project (3)(4).

This framework and its functionalities aim to support scholars in annotating sources, and to help content providers make their resources annotatable by their users in a more meaningful way.

REFERENCES

Blom, J. (2014). LinkedTV annotation tool. Technical Report. Deliverable 1.5.

Boot, P., Dekker, R. H., Koolen, M., & Melgar Estrada, L. (2017). Facilitating Fine-Grained Open Annotations of Scholarly Sources. DH2017.

Koolen, M., Blom, J., Boot, P., Dekker, R. H., & Melgar Estrada, L. (2017). Supporting Scholarly Web Annotation via RDFa and Annotation Ontologies. (In review at the Semantics conference, Amsterdam, The Netherlands).

Melgar Estrada, L., Blom, J., Baaren, E., Koolen, M., & Ordelman, R. (2016). A conceptual model for the annotation of audiovisual heritage in a media studies context. Workshop at DH2016.

Melgar Estrada, L., Koolen, M., Huurdeman, H., & Blom, J. (2017a). A process model of time-based media annotation in a scholarly context. Presented at the CHIIR 2017.

Melgar Estrada, L., Hielscher, E., Koolen, M., Olesen, C., Noordegraaf, J., & Blom, J. (2017b). Film analysis as annotation: Exploring current tools and their affordances. The Moving Image: The Journal of the Association of Moving Image Archivists.

Sanderson, R., Ciccarese, P., & Young, B. (Eds.). (2017). Web Annotation Data Model: W3C Recommendation 23 February 2017.

(1) http://www.themindoftheuniverse.org/
(2) http://www.arttube.nl/interactieve-videos
(3) http://mediasuite.clariah.nl/
(4) CLARIAH is the Dutch infrastructure for digital humanities and social sciences (https://www.clariah.nl/)

12:30-13:30 Lunch Break

In the main atrium of the CREATE building.

13:30-14:30 Session 16: Method Sprint 3

The last session in the method sprint will be given over to the three working groups to present their findings (15 minutes + 5 minutes discussion each).

Chairs:
Jacob Davidsen (Aalborg University, Denmark)
Paul McIlvenny (Aalborg University, Denmark)
Location: CREATE Room 4.231
14:30-15:00 Tea & Coffee Break
15:00-16:00 Session 17: Closing Discussion
Commentary:
Jacob Davidsen (Aalborg University, Denmark)
Paul McIlvenny (Aalborg University, Denmark)
Location: CREATE Room 4.231