MACE: A New Interface for Comparing and Editing of Multiple Alternative Documents in a Generative Design System
SPEAKER: unknown
ABSTRACT. We present a new interface for interactively comparing more than two alternative documents in the context of a generative design system whose generative data-flow networks are defined via directed acyclic graphs. To better show differences between such networks, we highlight added, deleted, changed, and unchanged nodes and edges. We likewise highlight differences in both outputs and parameters, and we enable post-hoc merging of a parameter's state across a selected set of alternatives. To minimize visual clutter, we introduce new difference visualizations for selected nodes and alternatives using additive and subtractive encodings, which improve readability. We analyzed similarities in networks from a set of alternative designs produced by architecture students and found that similarities outweigh differences, which motivates the use of subtractive encoding. A user study evaluating the two main proposed difference-visualization encodings found them to be equally effective.
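The classification of nodes described above can be sketched as a set comparison between two networks. This is an illustrative reconstruction only, not the paper's implementation; the node names and the idea of representing each node by a parameter dictionary are assumptions for the example.

```python
# Hypothetical sketch: classifying the nodes of two data-flow networks
# (directed acyclic graphs) as added, deleted, changed, or unchanged.
# Each network is a dict mapping node id -> parameter dict (an assumed
# representation; edge comparison would work analogously on edge sets).

def diff_networks(base, alt):
    added = set(alt) - set(base)          # nodes only in the alternative
    deleted = set(base) - set(alt)        # nodes only in the base network
    common = set(base) & set(alt)
    changed = {n for n in common if base[n] != alt[n]}  # same node, new params
    unchanged = common - changed
    return added, deleted, changed, unchanged

# Invented example networks: the alternative tweaks one parameter
# and introduces one new node.
base = {"grid": {"size": 10}, "extrude": {"height": 3.0}}
alt = {"grid": {"size": 10}, "extrude": {"height": 5.0}, "twist": {"angle": 15}}
added, deleted, changed, unchanged = diff_networks(base, alt)
```

In a subtractive encoding, only the `added`, `deleted`, and `changed` sets would be rendered prominently, which the paper's similarity analysis suggests are the smaller sets in practice.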
ABSTRACT. Interactive documents offer users alternative ways of accessing the available content. In education, researchers employ what they call individualized learning programs as interactive documents in order to, among other things, teach reading and writing to students who have difficulty mastering those skills. To author such interactive documents, domain experts have to be aware of, and in control of, features such as pace, duration, response-based criteria, and the hierarchy of learning units. In this paper we present how individualized learning programs can be modeled as interactive documents based on discrete trials and deterministic finite automata, and we introduce a companion system implementation. We include a report of the current number of users, both domain specialists and learners, and of individualized learning programs currently in use.
DocHandles: linking document fragments in messaging apps
SPEAKER: unknown
ABSTRACT. In this paper, we describe DocHandles, a novel system that allows users to link to specific document parts from their chat applications. As users type a message, they can invoke the tool by referring to a specific part of a document, e.g., “@fig1 needs revision”. By combining text parsing and document layout analysis, DocHandles can find and present all figures labeled “1” inside previously shared documents, allowing users to explicitly link to the relevant “document handle”. In this way, documents become first-class citizens of the conversation stream, and users can seamlessly integrate them into their text-centric messaging application.
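The text-parsing half of such a tool can be sketched with a small pattern matcher over chat messages. The pattern and the handle vocabulary below are assumptions for illustration, not DocHandles' actual grammar; the layout-analysis half, which locates the referenced figures in shared documents, is not shown.

```python
import re

# Minimal sketch: detect handle references such as "@fig1" or "@table2"
# in a chat message. The vocabulary (fig/table/sec) is an assumption.
HANDLE = re.compile(r"@(fig|table|sec)(\d+)")

def find_handles(message):
    """Return (kind, number) pairs for every handle mentioned."""
    return [(kind, int(num)) for kind, num in HANDLE.findall(message)]

handles = find_handles("@fig1 needs revision, and so does @table2")
```

Each extracted pair would then be matched against figures and tables found by layout analysis in previously shared documents, so the user can pick the intended target.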
ABSTRACT. Multistructured (M-S) data models were introduced to allow the expression of multilevel, concurrent annotation. However, most models lack either a consistent or an efficient validation mechanism. In a previous paper, we introduced extended Annotation Graphs (eAG), a cyclic-graph data model equipped with a novel schema mechanism that, by allowing validation “by construction”, bypasses the typical algorithmic cost of traditional methods for validating graph-structured data. Here we introduce LeAG, a markup syntax for eAG annotations over text data. LeAG takes the shape of a classic, inline markup model: a LeAG annotation can be written, in human-readable form, in any notepad application and saved as a text file. The syntax is simple and familiar, yet LeAG offers a natural syntax for multilayer annotation with (self-)overlap and links.
From a theoretical point of view, LeAG inaugurates a hybrid markup paradigm. Syntactically, it is a fully inline model, since all tags are inserted along the annotated resources; still, we show that representing independent co-occurring elements inline requires grounding the annotation in a notion of chronology that is typical of stand-off markup. To our knowledge, LeAG is the first inline markup syntax to properly conceptualize the notion of elements' accidental co-occurrence, which is nevertheless fundamental in multilevel annotation.
Using Abstract Anchors to Aid The Development of Multimedia Applications With Sensory Effects
SPEAKER: unknown
ABSTRACT. Declarative multimedia authoring languages allow authors to combine multiple media objects, generating a range of multimedia presentations. Novel multimedia applications, aimed at improving the user experience, extend multimedia applications with multisensory content. The idea is to synchronize sensory effects with the audiovisual content being presented.
The usual approach for specifying such synchronization is to mark the content of a main media object (e.g. a main video) indicating the moments when a given effect has to be executed.
For example, a mark may represent when snow appears in the main video so that a cold wind may be synchronized with it. Declarative multimedia authoring languages provide a way to mark subparts of a media object through anchors. An anchor indicates its begin and end times (video frames or audio samples) in relation to its parent media object.
The manual definition of anchors in the above scenario is both inefficient and error-prone (i) when the main media object grows large, (ii) when a given scene component appears several times, and (iii) when the application requires marking many scene components.
This paper tackles this problem by providing an approach for creating abstract anchors in declarative multimedia documents. An abstract anchor represents (possibly) several media anchors, indicating the moments when a given scene component appears in a media object's content. The author is therefore able to define the application behavior through relationships among, for example, sensory effects and abstract anchors. Prior to execution, abstract anchors are automatically instantiated for each moment a given element appears, and relationships are cloned so that the application behavior is maintained.
This paper presents an implementation of the proposed approach using NCL (Nested Context Language) as the target language. The abstract anchor processor is implemented in Lua and uses available APIs for video recognition in order to identify the begin and end times of abstract anchor instances. We also present an evaluation of our approach using real-world use cases.
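The instantiation step described above can be sketched as follows: given the time intervals where video recognition detects a scene component, emit one concrete anchor per occurrence and clone the relationship attached to the abstract anchor. This is an illustrative reconstruction in Python; the paper's processor is written in Lua and emits NCL, and the field names below are assumptions.

```python
# Hypothetical sketch of abstract-anchor instantiation. For a detected
# scene component (e.g. "snow"), each recognized (begin, end) interval
# becomes a concrete anchor, and the relationship to a sensory effect
# (e.g. "coldWind") is cloned per occurrence so behavior is preserved.

def instantiate(component, intervals, effect):
    anchors, links = [], []
    for i, (begin, end) in enumerate(intervals, start=1):
        anchor_id = f"{component}{i}"        # e.g. snow1, snow2, ...
        anchors.append({"id": anchor_id, "begin": begin, "end": end})
        links.append({"onBegin": anchor_id, "start": effect})
    return anchors, links

# Invented intervals (in seconds) for two appearances of snow:
anchors, links = instantiate("snow", [(12.0, 18.5), (40.0, 47.2)], "coldWind")
```

The author writes one relationship against the abstract anchor; the processor multiplies it out, which is what removes the manual, error-prone marking step.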
Opportunistic collaborative mobile-based multimedia authoring based on the capture of live experiences
SPEAKER: unknown
ABSTRACT. Despite recent results enabling collaborative video capture using mobile devices, there is a gap in promoting the collaborative capture of media other than video. In this paper we report a collaboration model supporting amateur and opportunistic recording of multiple media using mobile devices. We present a case study carried out in the educational domain, including a motivating scenario and related requirements, our proposed architecture, and an associated proof-of-concept prototype for supporting mobile amateur collaborative recording.
Customized ubiquitous multimedia data collection and interventions as interactive documents
SPEAKER: unknown
ABSTRACT. Mobile computing can be a facilitator for data collection, since users take their smartphones everywhere at all times and can collect a wide range of data: textual, audiovisual, and sensor-based. Taking advantage of this opportunity, we developed ESPIM (Experience Sampling and Programmed Intervention Method), a computer-aided method for programming multimedia data-collection forms and carrying out interventions remotely. Using ESPIM, professionals in areas such as healthcare and education can instantiate data-collection or intervention programs using the methods and procedures of their own areas. The queries and tasks are specified by professionals using a RIA Web interface, encoded as JSON documents, and stored in a database. Later, the questions and tasks are retrieved by a mobile application as users participate in the data-collection programs. Both questions and responses can use textual and audiovisual data. In this paper we present the technological infrastructure behind ESPIM.
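The encode-store-retrieve cycle described above can be illustrated with a toy program document. The field names and structure below are invented for the example; the abstract does not specify ESPIM's actual JSON schema.

```python
import json

# Hypothetical sketch of an ESPIM-style data-collection program encoded
# as a JSON document. All field names here are assumptions.
program = {
    "title": "Daily reading practice",
    "trigger": {"type": "schedule", "time": "18:00"},
    "questions": [
        {"id": "q1", "type": "text", "prompt": "What did you read today?"},
        {"id": "q2", "type": "audio", "prompt": "Record yourself reading aloud."},
    ],
}

doc = json.dumps(program)       # serialized form stored in the database
restored = json.loads(doc)      # what the mobile application retrieves
```

Because the program is plain JSON, the same document authored in the Web interface can be stored, transmitted, and rendered by the mobile application without any schema translation.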
ABSTRACT. Knowledge workers consume and annotate digital documents such as PDF files, videos, images, and text notes, in some cases collaboratively, to form mental models and gain insight. An abundance of software solutions and utilities have been designed to assist users in individual stages of this process, but not in the process as a whole, which makes knowledge work with documents unnecessarily inefficient. In this paper, we introduce ideas on how to streamline common knowledge-worker tasks, such as collaboratively searching, gathering, and freely arranging fragments of various media documents to gain understanding, and then transforming emergent insights into interactive structured visualizations. Furthermore, we present NuSys, an integrated development environment (IDE) specialized for end-to-end document-centric workflows that implements the core of these ideas.