10:00 | Variational Interpolating Neural Networks PRESENTER: Janis Sprenger ABSTRACT. Natural human locomotion contains variations, which are important for creating realistic animations. Especially when simulating a group of avatars, the resulting motions appear robotic and unnatural if all avatars are driven by the same walk cycle. While much research focuses on high-quality, interactive motion synthesis, this work does not include rich variations in the generated motion. We propose a novel approach to high-quality, interactive and variational motion synthesis. We successfully integrated concepts from variational autoencoders into a fully connected network. Our approach can learn the dataset-intrinsic variation inside the hidden layers. Different hyperparameters are evaluated, including the number of variational layers and the frequency of random sampling during motion generation. We demonstrate that our approach can generate smooth and natural animations with clearly visible temporal and spatial variations and can be utilized for reactive online locomotion synthesis. |
10:20 | The use of digital human modelling for the definition and contextualisation of a direct vision standard for trucks ABSTRACT. This paper presents research performed on behalf of Transport for London in the UK addressing the over-representation of trucks in accidents with vulnerable road users, where issues with driver vision are often cited as the main causal factors. A Direct Vision Standard for London, and potentially for Europe, has been developed that utilizes a volumetric assessment of field-of-view performance. This paper presents research into how to contextualize the somewhat abstract volumetric performance scores as real-world metrics using digital human models. The research modelled 27 trucks currently available from major manufacturers and analyzed their volumetric performance. It also explored a supplementary process using digital human models to define the minimum threshold of field-of-view performance. The current proposal utilizes thirteen human models, representing 5th %ile Italian females, positioned to the front, left and right of the cab. The minimum standard was developed to ensure that no blind spot exists between the regulations for mirror coverage and the new Direct Vision Standard. The research is ongoing in line with the finalization of the standard at a European level. |
10:40 | The definition of a common eye point for the assessment of truck direct vision performance ABSTRACT. Accidents between vulnerable road users and trucks have been linked to the inability of drivers to directly see the areas in close proximity to the front and sides of the vehicle cab. The lack of direct vision is mitigated through the use of mirrors, whose coverage requirements are standardized in Europe; direct vision for trucks is not currently standardized in any way. Research by the authors, funded by Transport for London, identified key requirements for a Direct Vision Standard (DVS). This standard is now being applied in London, and a European version is in development. A key element of the definition of this standard was the application of DHM software to define a standardized eye point, which is used to create simulations of the volume of space exterior to the cab that a driver can see. Eye point definitions exist in standards for trucks, but they are defined in a manner that allows variability in the eye point location. This variability allowed some truck designs to gain an advantage over their competitors, leading to the requirement for a new definition of a common eye point. The paper describes the process that has been followed to define this eye point. |
11:00 | Automatic selection of viewpoint for digital human modelling ABSTRACT. During concept design of new vehicles, workplaces, and other complex artifacts, it is critical to assess the positioning of instruments and controls from the perspective of the end user. One common way to perform these kinds of assessments during early product development is through Digital Human Modelling (DHM). DHM tools are able to produce detailed simulations, including vision. Many of these tools include evaluations of direct vision, and some are also able to assess other perceptual features. However, to our knowledge, all DHM tools available today require manual selection of the manikin viewpoint. This can be both cumbersome and difficult, and requires that the DHM user possess detailed knowledge about the visual behavior of workers in the task being modelled. In the present study, we take the first steps towards automatic selection of viewpoint through a computational model of eye-hand coordination. Here we report descriptive statistics on visual behavior in a pick-and-place task executed in virtual reality. During reaching actions, results reveal a very high degree of eye-gaze towards the target object. Participants look at the target object at least once in essentially every trial, even during a repetitive action. The object remains fixated during large proportions of the reaching action, even when participants are forced to move in order to reach the object. These results are in line with previous research on eye-hand coordination and suggest that DHM tools should, by default, set the viewpoint to match the manikin's grasping location. |
11:20 | Learning individual drivers’ mental models using POMDPs and BToM ABSTRACT. Advanced driver assistance systems are supposed to assist the driver and ensure their safety while at the same time providing a fulfilling driving experience that suits their individual driving style. What a driver will do in any given traffic situation depends on the driver’s mental model, which describes how the driver perceives and interprets the observable aspects of the environment, and on the driver’s goals and beliefs about applicable actions for the current situation. Understanding the driver’s mental model has hence received great attention from researchers, where defining the driver’s beliefs and goals is one of the greatest challenges. In this paper we present an approach to establishing individual drivers’ temporal-spatial mental models by treating driving as a continuous Partially Observable Markov Decision Process (POMDP), wherein the driver’s mental model can be represented as a graph structure following the Bayesian Theory of Mind (BToM). The individual’s mental model can then be obtained automatically through deep reinforcement learning. Using the driving simulator CARLA and deep Q-learning, we demonstrate our approach on the scenario of keeping the optimal time gap between the ego vehicle and the vehicle in front.
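The time-gap scenario in the last abstract can be illustrated with a toy sketch. This is not the authors' CARLA/deep-Q implementation: it uses tabular Q-learning instead of a neural network, and the gap discretization, target gap, toy dynamics, and reward shape are all illustrative assumptions.

```python
import random

# Toy sketch (illustrative assumptions, not the authors' setup): tabular
# Q-learning on a discretized time gap to the vehicle in front.
TARGET_GAP = 5                 # assumed optimal time gap (arbitrary units)
ACTIONS = [-1, 0, +1]          # close the gap, hold, open the gap
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(gap, action):
    """Deterministic toy dynamics: the action shifts the gap, clipped to [0, 10].
    Reward is highest (zero) when the new gap equals the target gap."""
    new_gap = max(0, min(10, gap + action))
    reward = -abs(new_gap - TARGET_GAP)
    return new_gap, reward

def train(episodes=500, horizon=20, seed=0):
    rng = random.Random(seed)
    Q = {(g, a): 0.0 for g in range(11) for a in ACTIONS}
    for _ in range(episodes):
        gap = rng.randrange(11)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(gap, x)])
            new_gap, r = step(gap, a)
            # standard Q-learning update
            best_next = max(Q[(new_gap, x)] for x in ACTIONS)
            Q[(gap, a)] += ALPHA * (r + GAMMA * best_next - Q[(gap, a)])
            gap = new_gap
    return Q

Q = train()
# Greedy policy: open the gap when too close, close it when too far, else hold.
policy = {g: max(ACTIONS, key=lambda a: Q[(g, a)]) for g in range(11)}
```

The paper's approach replaces the Q-table with a deep network and the toy dynamics with the CARLA simulator, but the learned quantity is analogous: action values from which the driver's preferred gap-keeping behavior can be read off.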
Agenda:
IEA TC Digital Human Modeling and Simulation
Chair: Dr Gunther Paul
Co-chairs: Dr Sofia Scataglini and Dr Gregor Harih
TC Meeting Agenda at DHM 2020, Wednesday 2nd September, 13:00-14:00
1. INCOSE HIS DHM symposium 2019: wrap-up
2. Planning of TC activities during IEA 2021
3. EOI for DHM2022
4. Collaboration of TC members in projects (EU, ...)
6. Summer schools (PhD, ...)
6. Student exchange (PhD, …)
7. Planned or proposed workshops
8. Support for women in DHM
9. Project proposal: TC member DHM project platform on IEA TC site
10. IEA Special Issue IISE Transactions on Occupational Ergonomics and Human Factors: Digital Human Modeling in Ergonomics 4.0
11. AOB